8 thoughts on “Petter Haugereid (Western Norway Univ. of Applied Sciences): An incremental approach to verb clusters in German”
Dear Petter, Interesting talk. Thanks for recording it. I have two remarks. On slide 12 you criticize my approach for requiring a “search function”. What I use is “append”; this is a standard and very simple constraint that is used in most HPSG analyses. Some even use shuffle, which is even more complex. You mention that append is not available in the LKB system. You are mixing levels here. That a certain implementation system cannot do certain things is not a theoretical argument. If anything at all, it is an argument against the system.
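[For concreteness, a minimal Python sketch of what Prolog-style relational append amounts to. This is an illustration only, not Stefan's formalization and not the LKB/TDL encoding; the name append_rel is invented. Used “forwards” it concatenates two lists; used “backwards” it enumerates the finitely many splits of a known list.]

# Sketch of relational append: yields (xs, ys, zs) triples with xs + ys == zs,
# enumerating any arguments left unbound (None).
def append_rel(xs=None, ys=None, zs=None):
    if xs is not None and ys is not None:
        yield xs, ys, xs + ys                     # deterministic concatenation
    elif zs is not None:
        for i in range(len(zs) + 1):              # enumerate the possible splits
            cxs, cys = zs[:i], zs[i:]
            if (xs is None or xs == cxs) and (ys is None or ys == cys):
                yield cxs, cys, zs

print(list(append_rel(xs=["a"], ys=["b", "c"])))  # one answer
print(list(append_rel(zs=["a", "b", "c"])))       # four answers (all splits)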
Second, your argument on slide 13 is misguided, since nobody assumes that the verb of an SOV language has to be seen before any hypothesis can be made about the structure of the sentence. There is a large body of literature discussing this issue, and there were analyses before yours assuming branchings similar to yours (Hauser, Steedman, …). All of this is discussed in Section 15.2 of my Grammar Theory textbook, on incremental processing:
http://langsci-press.org/catalog/book/255
What you try to model in your system is the processing of the sentence, but it is questionable whether people use push/pop rules in the way you suggest. I think it is better to state knowledge about relations between linguistic objects in a processing-neutral way, as is usually done in HPSG, and to pair this with performance models.
There is an interesting contribution by Tom Wasow on psycholinguistics and HPSG in the forthcoming HPSG handbook. A prepublished version of it is already online:
https://hpsg.hu-berlin.de/Projects/HPSG-handbook/
Best
Stefan
I agree with Stefan that the implementation and the analysis are separate levels; I think, however, that the nature of the implementation ultimately contributes to a theoretical argument. For example, the speed of the parser may potentially relate to a psycholinguistic plausibility discussion, and the number of operations that need to be natively defined may be looked at from a similar angle. And then there is also the important issue of testing theoretical claims (how easy is it to implement the analysis in order to test it?). Of course, the LKB is not the only system out there which allows that 😉 but the way I understood Petter, he is looking for an analysis that is readily testable in his framework (which sounds reasonable to me).
Hi Stefan!
Thank you very much for the comments! I have tried to write some clarifications below:
First, it is possible that “search function” is not a formally appropriate term for what is happening with the SUBCAT list of the head daughter in the head-argument phrase, but intuitively this is what is going on. The parser needs to look at each of the elements on the SUBCAT list of the head daughter and match it against the non-head daughter:
head-argument-phrase ⇒
[ SUBCAT        [1] ⊕ [3]
  HEAD-DTR      [ SUBCAT [1] ⊕ ⟨ [2] ⟩ ⊕ [3] ]
  NON-HEAD-DTRS ⟨ [2] ⟩ ]
The concatenation of lists in the mother, however, is trivial.
When append is used to tear a list apart in order to find an element, rather than just to put two lists together, the formalism becomes more complex, and I cannot see how that is a good thing.
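[To make the point concrete, a small sketch of what the head-argument-phrase constraint above asks of the parser. The names head_argument_matches and unify are hypothetical, and plain equality stands in for feature-structure unification: finding the argument requires a linear search over the SUBCAT list, whereas the mother's list is obtained by a single concatenation.]

def unify(a, b):
    # Stand-in for feature-structure unification; plain equality here.
    return a == b

def head_argument_matches(head_subcat, non_head):
    # Yield (list1, list3) with SUBCAT = list1 + [arg] + list3 and
    # arg unifying with the non-head daughter -- a linear search.
    for i, arg in enumerate(head_subcat):
        if unify(arg, non_head):
            yield head_subcat[:i], head_subcat[i + 1:]

subcat = ["NP[nom]", "NP[acc]", "PP[to]"]
for l1, l3 in head_argument_matches(subcat, "NP[acc]"):
    print("mother SUBCAT:", l1 + l3)   # the concatenation itself is one step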
It may be that the efficiency of a parser has little to do with linguistic theory, but given two systems with the same coverage, where one is more efficient than the other, I would tend to believe that the more efficient one is better.
Second, I guess when you write that “it is better to state knowledge about relations between linguistic objects in a processing-neutral way” you mean neutral with regard to the parser. It is not the case that the LKB system forces anyone to write a left-branching grammar. (As far as I know, my grammar is the only left-branching LKB grammar.) I guess we conceive of the notion of incremental processing in different ways. I take it more literally, assuming that words are attached one by one, and that a constituent structure and a semantic structure are built incrementally. I do not think you should put too much emphasis on the stacking and popping mechanism; rather, think of it as a way to navigate the AVM that is being built. We both believe in non-modular approaches, and the resulting constituent structures and semantic structures are not necessarily so different. What I can see as a problem with my approach, after reading quickly through the section on incremental processing in your Grammar Theory textbook, is that humans often do not commit to an analysis right away. This possibility to underspecify is still lacking in my analysis. (But this would hold for any implementation I know of.)
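[As a toy illustration of this literal reading of incrementality — a deliberately simplified sketch, not Petter's implemented grammar: each word is attached immediately to the single left-branching structure built so far, and a stack is pushed and popped only to navigate in and out of an embedding. The bracket tokens are an artificial stand-in for whatever triggers a push in the real grammar.]

def parse_incrementally(words):
    structure = None   # the single left-branching constituent built so far
    stack = []         # embeddings we have "pushed into"
    for w in words:
        if w == "[":                       # open an embedding: push
            stack.append(structure)
            structure = None
        elif w == "]":                     # close it: pop and attach
            inner, structure = structure, stack.pop()
            structure = inner if structure is None else (structure, inner)
        else:                              # attach the next word immediately
            structure = w if structure is None else (structure, w)
    return structure

print(parse_incrementally("dass [ er ihr das Buch ] gab".split()))
# -> (('dass', ((('er', 'ihr'), 'das'), 'Buch')), 'gab')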
Hi Petter, great that you contribute to the conference!
I have the impression that what you are proposing is something like an additional level of representation. Just as we had the tectogrammatical structure and the phenogrammatical structure, your structure would be the “incremental structure”. If this is the case, are there empirical phenomena showing that the incremental structure constrains, say, the tectogrammatical structure (or the other way around)? Or are there phenomena that we can only, or best, model by looking at the incremental structure? Would these all be processing-related phenomena? Or might they include grammatical phenomena such as the restrictions on the syntactic complexity of prenominal modifiers in English?
Hi Manfred, thanks for the question, and sorry about the late reply!
In addition to the incremental nature of the left-branching parse trees, a motivation behind the approach is the fact that verbs and complementizers in some languages reflect whether they are on the extraction path. In my approach, verbs and complementizers have local access to the extraction path, so this reflection of the extraction path can easily be accounted for. In a regular HPSG grammar, however, this becomes a challenge, especially with regard to extracted adjuncts. See Chapter 6.9 of my thesis: https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/243990
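[For readers unfamiliar with the phenomenon, a hedged sketch of the idea, with invented forms and an invented function name rather than data from any particular language: every complementizer between the filler and the gap lies on the extraction path and takes a special form, so a parser with local access to the path can decide the form at each embedding level.]

def mark_extraction_path(n_clauses, gap_depth):
    # One complementizer form per embedding level; levels between the
    # filler and the gap lie on the extraction path ('gap' form).
    return ["COMP-gap" if level <= gap_depth else "COMP-plain"
            for level in range(n_clauses)]

# Three embedded clauses, with the gap inside the second one:
print(mark_extraction_path(3, gap_depth=1))
# -> ['COMP-gap', 'COMP-gap', 'COMP-plain']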
Petter, we missed you at the virtual DELPH-IN!
Very interesting. I hope you watch our talk with Guy; we use a PLACEHOLDER feature that serves, I think, a purpose somewhat similar to your STACK. Furthermore, Guy has been discussing an alternative mechanism with me in the context of the same Russian grammar, which would potentially involve PUSH and POP. It would be cool to cross-test our analyses (ours on German and yours on Russian)!
Yes, hopefully we get the opportunity to meet and compare analyses soon!