
Wide-coverage efficient statistical parsing with CCG and log-linear models. (English) Zbl 1234.68402

Summary: This article describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are “full” parsing models in the sense that probabilities are defined for complete parses, rather than for independent events derived by decomposing the parse tree. Discriminative training is used to estimate the models, which requires incorrect parses for each sentence in the training data as well as the correct parse. The lexicalized grammar formalism used is combinatory categorial grammar (CCG), and the grammar is automatically extracted from CCGbank, a CCG version of the Penn Treebank. The combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement (up to 25 GB), which is satisfied using a parallel implementation of the BFGS optimization algorithm running on a Beowulf cluster. Dynamic programming over a packed chart, in combination with the parallel implementation, allows us to solve one of the largest-scale estimation problems in the statistical parsing literature in under three hours. A key component of the parsing system, for both training and testing, is a maximum entropy supertagger which assigns CCG lexical categories to words in a sentence. The supertagger makes the discriminative training feasible, and also leads to a highly efficient parser. Surprisingly, given CCG’s “spurious ambiguity”, the parsing speeds are significantly higher than those reported for comparable parsers in the literature. We also extend the existing parsing techniques for CCG by developing a new model and efficient parsing algorithm which exploits all derivations, including CCG’s nonstandard derivations. This model and parsing algorithm, when combined with normal-form constraints, give state-of-the-art accuracy for the recovery of predicate-argument dependencies from CCGbank. The parser is also evaluated on DepBank and compared against the RASP parser, outperforming RASP overall and on the majority of relation types. The evaluation on DepBank raises a number of issues regarding parser evaluation. This article provides a comprehensive blueprint for building a wide-coverage CCG parser. We demonstrate that both accurate and highly efficient parsing is possible with CCG.
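For orientation, the "full" parsing models mentioned above are conditional log-linear (maximum entropy) models defined over complete derivations. A minimal sketch of the standard form such a model takes (the notation below is ours, not taken from the entry; the paper's actual feature set is not reproduced here) is

\[
p(d \mid S) \;=\; \frac{\exp\bigl(\lambda \cdot f(d, S)\bigr)}{\sum_{d' \in \Omega(S)} \exp\bigl(\lambda \cdot f(d', S)\bigr)},
\]

where \(f(d, S)\) is a vector of feature counts for derivation \(d\) of sentence \(S\), \(\lambda\) is the weight vector estimated discriminatively (in this work with a parallel BFGS implementation), and \(\Omega(S)\) is the set of derivations the grammar licenses for \(S\). Computing the normalizing sum over \(\Omega(S)\) is what makes the dynamic programming over a packed chart and the supertagger's restriction of lexical categories essential for tractable training and parsing.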

MSC:

68T50 Natural language processing
68Q42 Grammars and rewriting systems
