Classification results for each set of language representations.

Source publication
Preprint
To what extent can neural network models learn generalizations about language structure, and how do we find out what they have learned? We explore these questions by training neural models for a range of natural language processing tasks on a massively multilingual dataset of Bible translations in 1295 languages. The learned language representation...

Contexts in source publication

Context 1
... Order of numeral and noun, for the WordLM representations (Figure 3). Note that no representations obtained a mean F1 above 0.7 when trained on URIEL data. ...
Context 2
... A somewhat different result is shown in Figure 3 and Table 2 for the order of numeral and noun. Here, the mean F1 is considerably higher (0.763) when trained on projected labels than on URIEL labels (0.684), where both figures are evaluated with respect to URIEL labels. ...