Overview of the CLEF eHealth evaluation lab 2015

Lorraine Goeuriot, Liadh Kelly, Hanna Suominen, Leif Hanlen, Aurélie Névéol, Cyril Grouin, João Palotti, Guido Zuccon

Research output: A Conference proceeding or a Chapter in Book › Conference contribution

47 Citations (Scopus)

Abstract

This paper reports on the 3rd CLEF eHealth evaluation lab, which continues our evaluation resource building activities for the medical domain. In this edition of the lab, we focus on easing patients and nurses in authoring, understanding, and accessing eHealth information. The 2015 CLEF eHealth evaluation lab was structured into two tasks, focusing on evaluating methods for information extraction (IE) and information retrieval (IR). The IE task introduced two new challenges. Task 1a focused on clinical speech recognition of nursing handover notes; Task 1b focused on clinical named entity recognition in languages other than English, specifically French. Task 2 focused on the retrieval of health information to answer queries issued by general consumers seeking information to understand their health symptoms or conditions. The number of teams registering their interest was 47 in Task 1 (2 teams in Task 1a and 7 teams in Task 1b) and 53 in Task 2 (12 teams), for a total of 20 unique teams. The best system recognized 4,984 out of 6,818 test words correctly and generated 2,626 incorrect words (i.e., 38.5% error) in Task 1a; achieved an F-measure of 0.756 for plain entity recognition, 0.711 for normalized entity recognition, and 0.872 for entity normalization in Task 1b; and resulted in a P@10 of 0.5394 and an nDCG@10 of 0.5086 in Task 2. These results demonstrate the substantial community interest in, and the capabilities of, these systems in addressing challenges faced by patients and nurses. As in previous years, the organizers have made data and tools available for future research and development.
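To make the headline numbers above concrete, the following short sketch (Python; not the official CLEF eHealth evaluation scripts, and the relevance judgments in the usage example are hypothetical) shows how the reported 38.5% word error in Task 1a and cut-off-based retrieval measures such as P@10 and nDCG@10 in Task 2 are commonly computed.

import math

def word_error_percentage(incorrect_words, reference_words):
    # Share of erroneous words relative to the length of the reference transcript.
    return 100.0 * incorrect_words / reference_words

def precision_at_k(relevance, k=10):
    # Fraction of the top-k retrieved documents judged relevant (binary labels).
    top = relevance[:k]
    return sum(1 for r in top if r > 0) / k

def ndcg_at_k(relevance, k=10):
    # Discounted cumulative gain of the ranking, normalized by the ideal ordering.
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = sorted(relevance, reverse=True)
    return dcg(relevance) / dcg(ideal) if dcg(ideal) > 0 else 0.0

# Task 1a: 2,626 incorrect words against 6,818 reference words -> ~38.5% error.
print(round(word_error_percentage(2626, 6818), 1))   # 38.5

# Task 2: hypothetical judgments for one query's top-10 results.
judgments = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
print(precision_at_k(judgments))                      # 0.5
print(round(ndcg_at_k(judgments), 4))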

Original language: English
Title of host publication: Experimental IR Meets Multilinguality, Multimodality, and Interaction
Subtitle of host publication: 6th International Conference of the CLEF Association, CLEF 2015, Proceedings
Editors: Josiane Mothe, Jacques Savoy, Jaap Kamps, Karen Pinel-Sauvagnat, Gareth Jones, Eric SanJuan, Linda Cappellato, Nicola Ferro
Place of Publication: Cham, Switzerland
Publisher: Springer
Pages: 429-443
Number of pages: 15
Volume: 9283
ISBN (Electronic): 9783319240275
ISBN (Print): 9783319240268
DOI: 10.1007/978-3-319-24027-5_44
Publication status: Published - 2015
Event: 6th International Conference on Labs of the Evaluation Forum, CLEF 2015 - Toulouse, France
Duration: 8 Sep 2015 - 11 Sep 2015
http://clef2015.clef-initiative.eu/publications.php

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9283
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 6th International Conference on Labs of the Evaluation Forum, CLEF 2015
Abbreviated title: CLEF 2015
Country: France
City: Toulouse
Period: 8/09/15 - 11/09/15
Other: CLEF 2015 is the sixth CLEF conference, continuing the popular CLEF campaigns that have run since 2000 and contributed to the systematic evaluation of information access systems, primarily through experimentation on shared tasks.

Building on the format first introduced in 2010, CLEF 2015 consists of an independent peer-reviewed conference on a broad range of issues in the fields of multilingual and multimodal information access evaluation, and a set of labs and workshops designed to test different aspects of mono- and cross-language information retrieval systems. Together, the conference and the lab series will maintain and expand upon the CLEF tradition of community-based evaluation and discussion of evaluation issues.
Internet address: http://clef2015.clef-initiative.eu/publications.php


Cite this

Goeuriot, L., Kelly, L., Suominen, H., Hanlen, L., Névéol, A., Grouin, C., ... Zuccon, G. (2015). Overview of the CLEF eHealth evaluation lab 2015. In J. Mothe, J. Savoy, J. Kamps, K. Pinel-Sauvagnat, G. Jones, E. SanJuan, L. Cappellato, ... N. Ferro (Eds.), Experimental IR Meets Multilinguality, Multimodality, and Interaction: 6th International Conference of the CLEF Association, CLEF 2015, Proceedings (Vol. 9283, pp. 429-443). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9283). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-24027-5_44
@inproceedings{6aa8542b0d1543d4800a994e72bc84bc,
title = "Overview of the CLEF eHealth evaluation lab 2015",
abstract = "This paper reports on the 3rd CLEFeHealth evaluation lab, which continues our evaluation resource building activities for the medical domain. In this edition of the lab, we focus on easing patients and nurses in authoring, understanding, and accessing eHealth information. The 2015 CLEFeHealth evaluation lab was structured into two tasks, focusing on evaluating methods for information extraction (IE) and information retrieval (IR). The IE task introduced two new challenges. Task 1a focused on clinical speech recognition of nursing handover notes; Task 1b focused on clinical named entity recognition in languages other than English, specifically French. Task 2 focused on the retrieval of health information to answer queries issued by general consumers seeking information to understand their health symptoms or conditions. The number of teams registering their interest was 47 in Tasks 1 (2 teams in Task 1a and 7 teams in Task 1b) and 53 in Task 2 (12 teams) for a total of 20 unique teams. The best system recognized 4, 984 out of 6, 818 test words correctly and generated 2, 626 incorrect words (i.e., 38.5{\%} error) in Task 1a; had the F-measure of 0.756 for plain entity recognition, 0.711 for normalized entity recognition, and 0.872 for entity normalization in Task 1b; and resulted in P@10 of 0.5394 and nDCG@10 of 0.5086 in Task 2. These results demonstrate the substantial community interest and capabilities of these systems in addressing challenges faced by patients and nurses. As in previous years, the organizers have made data and tools available for future research and development.",
keywords = "Evaluation, Information extraction, Information retrieval, Medical informatics, Nursing records, Patient handoff/handover, Self-diagnosis, Speech recognition, Test-set generation, Text classification, Text segmentation",
author = "Lorraine Goeuriot and Liadh Kelly and Hanna Suominen and Leif Hanlen and Aur{\`e}lie N{\`e}v{\`e}ol and Cyril Grouin and Jo{\~a}o Palotti and Guido Zuccon",
year = "2015",
doi = "10.1007/978-3-319-24027-5_44",
language = "English",
isbn = "9783319240268",
volume = "9283",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer",
pages = "429--443",
editor = "Josiane Mothe and Jacques Savoy and Jaap Kamps and Karen Pinel-Sauvagnat and Gareth Jones and Eric SanJuan and Linda Cappellato and Nicola Ferro",
booktitle = "Experimental IR Meets Multilinguality, Multimodality, and Interaction",
address = "Netherlands",

}
