An investigation of examiner rating of coherence and cohesion in the IELTS Academic Writing Task 2

Fiona Cotton, Catherine Wilson

    Research output: Chapter in book / conference proceeding

    Abstract

    The study investigated whether examiners find the marking of coherence and cohesion (CC) in the IELTS Academic Writing Task 2 more difficult than the marking of the other criteria; what features of CC examiners look for in marking Academic Writing Task 2; the extent to which they differ in their marking of CC compared with their marking of the other criteria; whether qualifications and experience had an impact on assessment reliability; and the extent to which current examiner training materials clarify understandings of CC. The study involved think-aloud protocols and follow-up interviews with 12 examiners marking a set of 10 scripts, and a quantitative study with 55 examiners marking 12 scripts and completing a follow-up questionnaire. The quantitative data revealed that examiner reliability was within the acceptable range for all four criteria. The marking of CC was slightly less reliable than the marking of Grammatical Range and Accuracy and Lexical Resource, but not significantly different from Task Response. No significant effects were found for examiners’ qualifications or experience, which suggests that the training is effective. The findings showed that examiners found the marking of CC more difficult than the other criteria. Examiners were conscientious in applying the band descriptors and used the terminology of the descriptors for CC most of the time. They also introduced other terms not explicitly used in the CC descriptors, such as ‘flow’, ‘structure’ and ‘linking words’, as well as the terms ‘essay’, ‘introduction’, ‘conclusion’ and ‘topic sentence’. The introduction of terms such as these, together with variation in the degree to which examiners focused on particular features of CC, has implications for the construct validity of the test.
Suggestions for improving the construct validity include: possible fine-tuning of the CC band descriptors; clarification of the expected rhetorical genre; further linguistic research to provide detailed analysis of CC in sample texts; and refinements to the training materials, including a glossary of key terms and sample scripts showing all cohesive ties.
    Original language: English
    Title of host publication: IELTS Research Reports Volume 12
    Editors: Jenny Osborne
    Place of publication: Melbourne
    Publisher: IDP: IELTS Australia and British Council
    Pages: 235-310
    Number of pages: 76
    Volume: 12
    ISBN (Print): 9780977587599
    Publication status: Published - 2011

    Cite this

    Cotton, F., & Wilson, C. (2011). An investigation of examiner rating of coherence and cohesion in the IELTS Academic Writing Task 2. In J. Osborne (Ed.), IELTS Research Reports Volume 12 (Vol. 12, pp. 235-310). Melbourne: IDP: IELTS Australia and British Council.