Task 1a of the CLEF eHealth Evaluation Lab 2015

Hanna Suominen, Leif Hanlen, Lorraine Goeuriot, Liadh Kelly, Gareth J.F. Jones

Research output: A Conference proceeding or a Chapter in Book › Conference contribution › peer-review

4 Citations (Scopus)

Abstract

Best practice for clinical handover and its documentation recommends standardized, structured, and synchronous processes with patient involvement. Cascaded speech recognition (SR) and information extraction could support compliance with these practices and release clinicians' time from writing documents to patient interaction and education. However, high requirements for processing correctness evoke methodological challenges. First, multiple people speak clinical jargon in the presence of background noise, with limited possibilities for SR personalization. Second, errors multiply in cascading, and hence SR correctness needs to be carefully evaluated as meeting the requirements. This overview paper reports on how these issues were addressed in a shared task of the eHealth evaluation lab of the Conference and Labs of the Evaluation Forum in 2015. The task released 100 synthetic handover documents for training and another 100 documents for testing, in both verbal and written formats. It attracted 48 team registrations, 21 email confirmations, and four method submissions by two teams. The submissions were compared against a leading commercial SR engine and a simple majority baseline. Although this engine performed significantly better than any submission [i.e., a test error percentage of 38.5 vs. 52.8 for the best submission, with a Wilcoxon signed-rank test value of 302.5 (p < 10⁻¹²)], the releases of data, tools, and evaluations contribute to the body of knowledge on the task difficulty and method suitability.
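For readers unfamiliar with the kind of paired significance test cited above, the following is a minimal sketch (not code from the paper) of how per-document error percentages of two systems might be compared with a Wilcoxon signed-rank test. The arrays are hypothetical placeholders standing in for the 100 test documents, not the CLEF eHealth 2015 results.

```python
# Hedged illustration: paired, non-parametric comparison of two systems'
# per-document error rates with the Wilcoxon signed-rank test.
# The data below are synthetic placeholders, not shared-task results.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical per-document error percentages for 100 test documents.
errors_commercial_sr = rng.normal(loc=38.5, scale=5.0, size=100)
errors_best_submission = rng.normal(loc=52.8, scale=5.0, size=100)

# Test whether the paired differences are symmetrically distributed around zero.
statistic, p_value = wilcoxon(errors_commercial_sr, errors_best_submission)
print(f"Wilcoxon statistic: {statistic:.1f}, p-value: {p_value:.2e}")
```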

Original language: English
Title of host publication: the eHealth evaluation lab of the Conference and Labs of the Evaluation Forum in 2015
Subtitle of host publication: 16th Conference and Labs of the Evaluation Forum, CLEF 2015
Editors: Linda Cappellato, Nicola Ferro, Gareth J.F. Jones, Eric San Juan
Place of Publication: Toulouse, France
Publisher: CEUR Workshop Proceedings
Pages: 1-18
Number of pages: 18
Volume: 1391
Publication status: Published - 8 Sept 2015
Event: 6th International Conference and Labs of the Evaluation Forum, CLEF 2015 - Toulouse, France
Duration: 8 Sept 2015 – 11 Sept 2015
http://clef2015.clef-initiative.eu/publications.php

Publication series

Name: CLEF2015 Working Notes
Publisher: CEUR Workshop Proceedings
Volume: 1391
ISSN (Print): 1613-0073

Conference

Conference: 6th International Conference and Labs of the Evaluation Forum, CLEF 2015
Abbreviated title: CLEF 2015
Country/Territory: France
City: Toulouse
Period: 8/09/15 – 11/09/15
Other: CLEF 2015 is the sixth CLEF conference, continuing the popular CLEF campaigns which have run since 2000 and contributed to the systematic evaluation of information access systems, primarily through experimentation on shared tasks.

Building on the format first introduced in 2010, CLEF 2015 consists of an independent peer-reviewed conference on a broad range of issues in the fields of multilingual and multimodal information access evaluation, and a set of labs and workshops designed to test different aspects of mono- and cross-language information retrieval systems. Together, the conference and the lab series will maintain and expand upon the CLEF tradition of community-based evaluation and discussion on evaluation issues.
