Abstract
Best practice for clinical handover and its documentation recommends standardized, structured, and synchronous processes with patient involvement. Cascaded speech recognition (SR) and information extraction could support compliance with these recommendations and free clinicians' time from writing documents for patient interaction and education. However, the high requirements for processing correctness pose methodological challenges. First, multiple people speak clinical jargon in the presence of background noise, with limited possibilities for SR personalization. Second, errors multiply through the cascade, and hence SR correctness needs to be evaluated carefully against these requirements. This overview paper reports on how these issues were addressed in a shared task of the eHealth evaluation lab of the Conference and Labs of the Evaluation Forum (CLEF) in 2015. The task released 100 synthetic handover documents for training and another 100 documents for testing, in both spoken and written formats. It attracted 48 team registrations, 21 email confirmations, and four method submissions by two teams. The submissions were compared against a leading commercial SR engine and a simple majority baseline. Although this engine performed significantly better than any submission [i.e., a test error percentage of 38.5 vs. 52.8 for the best submission, with a Wilcoxon signed-rank test statistic of 302.5 (p < 10^-12)], the releases of data, tools, and evaluations contribute to the body of knowledge on the task difficulty and method suitability.
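Purely as an illustration of the kind of paired significance test reported above, the following minimal sketch shows how a Wilcoxon signed-rank comparison of per-document error percentages between two systems could be computed with SciPy. The error values and variable names are hypothetical placeholders, not the task's actual figures or the lab's evaluation code.

```python
# Illustrative sketch: paired comparison of per-document SR error rates.
# The numbers below are invented placeholders, not the CLEF eHealth 2015 data.
from scipy.stats import wilcoxon

# Error percentages for the same test documents under two systems.
baseline_errors = [38.1, 40.2, 37.5, 39.0, 36.8]    # e.g., commercial SR engine
submission_errors = [52.3, 55.0, 51.1, 53.7, 50.9]  # e.g., a team submission

# Wilcoxon signed-rank test on the paired differences; a small p-value
# indicates a systematic difference between the two systems' error rates.
statistic, p_value = wilcoxon(baseline_errors, submission_errors)
print(f"W = {statistic}, p = {p_value:.3g}")
```

In the real evaluation, such a test would be run over all 100 paired test documents, which is how a statistic such as 302.5 with p < 10^-12 can arise.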
Original language | English |
---|---|
Title of host publication | The eHealth evaluation lab of the Conference and Labs of the Evaluation Forum in 2015 |
Subtitle of host publication | 16th Conference and Labs of the Evaluation Forum, CLEF 2015 |
Editors | Linda Cappellato, Nicola Ferro, Gareth J.F. Jones, Eric San Juan |
Place of Publication | Toulouse, France |
Publisher | CEUR Workshop Proceedings |
Pages | 1-18 |
Number of pages | 18 |
Volume | 1391 |
Publication status | Published - 8 Sept 2015 |
Event | 6th International Conference on Labs of the Evaluation Forum, CLEF 2015 - Toulouse, France. Duration: 8 Sept 2015 → 11 Sept 2015. http://clef2015.clef-initiative.eu/publications.php |
Publication series
Name | CLEF2015 Working Notes |
---|---|
Publisher | CEUR Workshop Proceedings |
Volume | 1391 |
ISSN (Print) | 1613-0073 |
Conference
Conference | 6th International Conference on Labs of the Evaluation Forum, CLEF 2015 |
---|---|
Abbreviated title | CLEF 2015 |
Country/Territory | France |
City | Toulouse |
Period | 8/09/15 → 11/09/15 |
Other | CLEF 2015 is the sixth CLEF conference, continuing the popular CLEF campaigns that have run since 2000 and contributed to the systematic evaluation of information access systems, primarily through experimentation on shared tasks. Building on the format first introduced in 2010, CLEF 2015 consists of an independent peer-reviewed conference on a broad range of issues in the fields of multilingual and multimodal information access evaluation, and a set of labs and workshops designed to test different aspects of mono- and cross-language information retrieval systems. Together, the conference and the lab series will maintain and expand upon the CLEF tradition of community-based evaluation and discussion on evaluation issues. |
Internet address | http://clef2015.clef-initiative.eu/publications.php |