Figure 6 - uploaded by Sophie Rosset
Variation of ETER depending on the component/root entity importance ratio α.

Source publication
Article
Full-text available
This paper addresses the question of hierarchical named entity evaluation. In particular, we focus on metrics to deal with complex named entity structures as those introduced within the QUAERO project. The intended goal is to propose a smart way of evaluating partially correctly detected complex entities, beyond the scope of traditional metrics. No...

Contexts in source publication

Context 1
... The default α value of 0.5 is the most natural for evaluating the global task, but changing its value helps to understand the strengths and weaknesses of the systems. Figure 6 illustrates the performance measurement variation of the three NER systems as a function of α. ...
Context 2
... an equal weight to entity classification and entity decomposition; • α = 1: evaluation of performance in entity detection and decomposition; classification is not taken into account. As we can see in Figure 6, for α = 0 the best score is reached by NER-8. This means it has the best performance in entity detection and classification. ...
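The contexts above describe α as a ratio that shifts the ETER evaluation between two error components: classification errors (α = 0) and decomposition errors (α = 1), with α = 0.5 weighting both equally. A minimal sketch of such a convex combination is shown below; the function name and the assumption that the two components combine linearly are hypothetical, not taken from the ETER paper itself.

```python
def weighted_eter(classification_error: float,
                  decomposition_error: float,
                  alpha: float = 0.5) -> float:
    """Hypothetical convex combination of two ETER error components,
    weighted by the component/root entity importance ratio alpha.

    alpha = 0   -> only entity detection + classification errors count
    alpha = 1   -> only entity detection + decomposition errors count
    alpha = 0.5 -> both error types weighted equally (default)
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (1.0 - alpha) * classification_error + alpha * decomposition_error


# Sweeping alpha, as in Figure 6, shows how a system's score shifts
# depending on whether it is stronger at classification or decomposition.
for alpha in (0.0, 0.5, 1.0):
    print(alpha, weighted_eter(0.20, 0.40, alpha))
```

Under this reading, a system that excels at classification looks best near α = 0, while one that excels at decomposition looks best near α = 1, which matches the per-system crossovers the contexts attribute to Figure 6.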

Similar publications

Preprint
Full-text available
A T-graph (a special case of a chordal graph) is the intersection graph of connected subtrees of a suitable subdivision of a fixed tree T. We deal with the isomorphism problem for T-graphs, which is GI-complete in general - when T is part of the input, even when T is a star. We prove that the T-graph isomorphism problem is in FPT when T is the fixed par...

Citations

... Because of its importance, the evaluation of NER has been an active field of research [4]. In the literature, most NER evaluation research has focused on comparative evaluation of the performance of commonly used NER systems [5][6][7] or on proposing new evaluation metrics [8][9][10][11]. However, the inconsistencies in NER evaluations, which prevent objective cross-system comparisons, remain underexplored. ...
... Only a handful of studies have conducted comparative evaluations of the recognition and classification performance of commonly used NER systems [5][6][7]. A few studies have proposed new evaluation metrics to more precisely appraise NER performance [8][9][10][11], or have compared existing evaluation strategies [2]. Although most evaluations adopt standard quantitative metrics such as precision and recall, the processes of computing such metrics vary significantly, and a deep understanding of NER errors and their root causes is still lacking [6]. ...
Article
Full-text available
Human annotations are the established gold standard for evaluating natural language processing (NLP) methods. The goals of this study are to quantify and qualify the disagreement between human and NLP annotations. We developed an NLP system for annotating clinical trial eligibility criteria text and constructed a manually annotated corpus, both following the OMOP Common Data Model (CDM). We analyzed the discrepancies between the human and NLP annotations and their causes (e.g., ambiguities in concept categorization and tacit decisions on the inclusion of qualifiers and temporal attributes during concept annotation). This study is the first to report complexities in clinical trial eligibility criteria text that complicate NLP, along with the limitations of the OMOP CDM. The disagreement between human and NLP annotations may be generalizable. We discuss implications for NLP evaluation.
... To check its performance, we applied this model to the ETAPE 2 test data [15], which follows the same NE schema. Our model obtains an Entity Tree Error Rate (ETER) [22] of 41.7%, which would rank it 6th out of the 11 systems in the evaluation campaign. ...
... That schema has been used in two evaluations: in 2010 within Quaero [20] and in 2014 within the open campaign ETAPE [21]. We use ETER (Entity Tree Error Rate) [22] to evaluate the task in both manual and automatic transcription conditions: ...
Chapter
NER is an important task in NLP, often used as a basis for further processing. A new challenge has emerged in the last few years: structured named entity recognition, where not only named entities must be identified but also their hierarchical components. In this article, we describe a cascading CRFs approach to address this challenge. It reaches state-of-the-art results on a structured NER challenge while remaining very simple. We then offer an error analysis of our system based on a detailed, yet simple, error classification.