Classification results for each set of language representations.

Source publication
Preprint
To what extent can neural network models learn generalizations about language structure, and how do we find out what they have learned? We explore these questions by training neural models for a range of natural language processing tasks on a massively multilingual dataset of Bible translations in 1295 languages. The learned language representation...

Contexts in source publication

Context 1
... In effect, the object/verb order labels we used for training were treated as noisy affix position labels, and the resulting classifier became much better at predicting affix position than at predicting object/verb order. An even clearer illustration of this can be found for the order of adposition and noun (see Figure 2), reflecting Greenberg's universal 27 (Greenberg, 1963) on the cross-linguistic association of prepositions with prefixing morphology and postpositions with suffixing. ...
Context 2
... Order of adposition and noun (prepositions/postpositions), for the WordLM representations (Figure 2). ...
Context 3
... The same pattern is present for another feature, order of adposition and noun (Figure 2), with confusion matrices in Table 2. The mean F1 with respect to projected labels is nearly identical for URIEL-trained classifiers (0.887) and for classifiers trained on projected labels (0.869). ...
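
To make the comparison in Context 3 concrete, below is a minimal sketch of how such an evaluation could be run: train one classifier per label source (URIEL labels vs. labels projected from data) on per-language representation vectors, then score both against the projected labels with macro F1 and a confusion matrix. All names, data, and the model choice here are illustrative stand-ins under assumed shapes, not the paper's actual code or datasets.

```python
# Illustrative sketch only: the variable names, data, and model below are
# hypothetical stand-ins, not the paper's released code or datasets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy per-language representation vectors and binary adposition-order
# labels (0 = preposition, 1 = postposition).
X = rng.normal(size=(200, 64))             # one vector per language
projected = rng.integers(0, 2, size=200)   # labels projected from data
uriel = projected.copy()                   # URIEL database labels ...
uriel[rng.random(200) < 0.1] ^= 1          # ... with some disagreement

train_idx, test_idx = train_test_split(
    np.arange(len(X)), test_size=0.5, random_state=0)

# Train one classifier per label source; evaluate both against the
# projected labels, mirroring the comparison quoted above.
for name, y in [("URIEL-trained", uriel), ("projection-trained", projected)]:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    print(f"{name}: mean F1 vs. projected labels =",
          round(f1_score(projected[test_idx], pred, average="macro"), 3))
    print(confusion_matrix(projected[test_idx], pred))
```

With real data, near-identical F1 for the two label sources (as in the 0.887 vs. 0.869 figures quoted above) would indicate that the representations capture the typological feature itself rather than artifacts of either label source.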