Abstract
Objective: The mixed methods appraisal tool (MMAT) was developed for critically appraising different study designs. This study aimed to improve the content validity of three of the five categories of studies in the MMAT by identifying relevant methodological criteria for appraising the quality of qualitative, survey, and mixed methods studies. Study Design and Setting: First, we performed a literature review to identify critical appraisal tools and extract methodological criteria. Second, we used a two-round modified e-Delphi technique. We asked three method-specific panels of experts to rate the relevance of each criterion on a five-point Likert scale. Results: A total of 383 criteria were extracted from 18 critical appraisal tools and a literature review on the quality of mixed methods studies, and 60 were retained. In the first and second rounds of the e-Delphi, 73 and 56 experts participated, respectively. Consensus was reached for six qualitative criteria, eight survey criteria, and seven mixed methods criteria. These results led to modifications of eight of the 11 MMAT (version 2011) criteria. Specifically, we reformulated two criteria, replaced four, and removed two. Moreover, we added six new criteria. Conclusion: Results of this study allowed us to improve the content validity of this tool, revise it, and propose a new version (MMAT version 2018).
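The abstract describes the e-Delphi rating step but does not state the consensus rule that was applied. As a minimal illustrative sketch only, assuming a common Delphi convention (consensus when at least 80% of a panel rates a criterion 4 or 5 on the five-point relevance scale), the snippet below shows how such a check could be computed; the threshold, criterion labels, and ratings are hypothetical and not taken from the study.

```python
# Minimal sketch of a Delphi consensus check on five-point Likert ratings.
# Assumptions (not from the paper): consensus = at least 80% of experts
# rating a criterion 4 or 5; criterion labels and ratings below are made up.

from typing import Dict, List

def reached_consensus(ratings: List[int], threshold: float = 0.80) -> bool:
    """Return True if the share of ratings of 4 or 5 meets the assumed threshold."""
    if not ratings:
        return False
    agree = sum(1 for r in ratings if r >= 4)
    return agree / len(ratings) >= threshold

# Hypothetical panel ratings (1 = not at all relevant, 5 = highly relevant)
panel_ratings: Dict[str, List[int]] = {
    "criterion A (qualitative panel)": [5, 4, 4, 5, 5, 3, 4, 5],
    "criterion B (qualitative panel)": [3, 2, 4, 3, 5, 2, 3, 4],
}

for criterion, ratings in panel_ratings.items():
    print(f"{criterion}: consensus = {reached_consensus(ratings)}")
```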
Original language | English |
---|---|
Pages (from-to) | 49-59 |
Number of pages | 12 |
Journal | Journal of Clinical Epidemiology |
Volume | 111 |
DOIs | https://doi.org/10.1016/j.jclinepi.2019.03.008 |
Publication status | Published - 2019 |
Cite this
In: Journal of Clinical Epidemiology, Vol. 111, 2019, p. 49-59.
Research output: Contribution to journal › Article › peer-review
TY - JOUR
T1 - Improving the content validity of the mixed methods appraisal tool
T2 - A modified e-Delphi study
AU - Hong, Quan Nha
AU - Pluye, Pierre
AU - Fàbregues, Sergi
AU - Bartlett, Gillian
AU - Boardman, Felicity
AU - Cargo, Margaret
AU - Dagenais, Pierre
AU - Gagnon, Marie Pierre
AU - Griffiths, Frances
AU - Nicolau, Belinda
AU - O'Cathain, Alicia
AU - Rousseau, Marie Claude
AU - Vedel, Isabelle
N1 - Funding Information: Conflict of interest statement: Quan Nha Hong, OT, MSc, PhD. This manuscript was written while she was a PhD candidate and held a Doctoral Fellowship Award from the Canadian Institutes of Health Research (CIHR). Pierre Pluye, MD, PhD, Full Professor, holds a Senior Investigator Award from the Fonds de recherche du Québec–Santé (FRQS) and is the Director of the Methodological Development Platform of the Quebec-SPOR SUPPORT Unit, which is funded by the CIHR, the FRQS, and the Quebec Ministry of Health. Funding Information: The research team would like to acknowledge and sincerely thank all the e-Delphi panel experts for their contributions. Here are the names of the participants who wished to be acknowledged: Lesley Andres (University of British Columbia, Canada); Theodore Bartholomew (Purdue University, United States); Pat Bazeley (Research Support/University of New South Wales, Australia); Jelke Bethlehem (Leiden University, Netherlands); Paul Biemer (RTI International, United States); Jaak Billiet (University of Leuven, Belgium); Felicity Bishop (University of Southampton, England); Jörg Blasius (University of Bonn, Germany); Hennie Boeije (University of Utrecht, Netherlands); Jonathan Burton (Understanding Society, England); Kathy Charmaz (Sonoma State University, United States); Benjamin Crabtree (The State University of New Jersey, United States); Elizabeth Creamer (Virginia Tech University, United States); Edith de Leeuw (University of Utrecht, Netherlands); Claire Durand (Université de Montréal, Canada); Joan Eakin (University of Toronto, Canada); Michèle Ernst Stähli (Université de Lausanne, Switzerland); Michael Fetters (University of Michigan Medical School, United States); Nigel Fielding (University of Surrey, England); Rory Fitzgerald (University of London, England); Floyd Fowler (University of Massachusetts, United States); Dawn Freshwater (University of Western Australia, Australia); Jennifer Greene (University of Illinois at Urbana-Champaign, United States); Christina Gringeri (University of Utah, United States); Greg Guest (FHI 360, United States); Timothy Guetterman (University of Michigan Medical School, United States); Muhammad Hadi (University of Leeds, England); Elizabeth Halcomb (University of Wollongong, United States); Carolyn Heinrich (Vanderbilt University, United States); Sharlene Hesse-Biber (Boston College, United States); Mieke Heyvaert (University of Leuven, Belgium); John Hitchcock (Indiana University Bloomington, United States); Nataliya Ivankova (University of Alabama at Birmingham, United States); Laura Johnson (Northern Illinois University, United States); Paul Lavrakas (University of Chicago, United States); Marilyn Lichtman (Virginia Tech University, United States); Geert Loosveldt (University of Leuven, Belgium); Peter Lynn (University of Essex, England); Mary Ellen Macdonald (McGill University, Canada); Claire Howell Major (University of Alabama, United States); Maria Mayan (University of Alberta, Canada); Sharan Merriam (University of Georgia, United States); José Molina-Azorín (University of Alicante, Spain); David Morgan (Portland State University, United States); Peter Nardi (Pitzer College, United States); Katrin Niglas (Tallinn University, Estonia); Karin Olson (University of Alberta, Canada); Antigoni Papadimitriou (Johns Hopkins University, United States); Michael Quinn Patton (Independent organizational development and program evaluation consultant, United States); Rogério Meireles Pinto (Columbia University School 
of Social Work, United States); Vicki Plano Clark (University of Cincinnati, United States); David Plowright (University of Hull, England); Blake Poland (University of Toronto, Canada); Rodney Reynolds (California Lutheran University, United States); Gretchen B. Rossman (University of Massachusetts Amherst, United States); Erin Ruel (Georgia State University, United States); Michael Saini (University of Toronto, Canada); Johnny Saldaña (Arizona State University, United States); Joanna Sale (Li Ka Shing Knowledge Institute, Canada); Karen Schifferdecker (Dartmouth College, United States); David Silverman (University of London, England); Ineke Stoop (Netherlands Institute for Social Research, Netherlands); Sally Thorne (University of British Columbia, Canada); Sarah Tracy (Arizona State University, United States); Frederick Wertz (Fordham University, United States). The authors gratefully acknowledge the sponsorship from the Method Development platform of the Québec SPOR SUPPORT Unit (#BRDV-CIHR-201-2014-05), the CIHR Doctoral Fellowship Award (#301011), and the FRSQ Senior Investigator Award (#29308).
Publisher Copyright: © 2019 The Authors
PY - 2019
Y1 - 2019
N2 - Objective: The mixed methods appraisal tool (MMAT) was developed for critically appraising different study designs. This study aimed to improve the content validity of three of the five categories of studies in the MMAT by identifying relevant methodological criteria for appraising the quality of qualitative, survey, and mixed methods studies. Study Design and Setting: First, we performed a literature review to identify critical appraisal tools and extract methodological criteria. Second, we used a two-round modified e-Delphi technique. We asked three method-specific panels of experts to rate the relevance of each criterion on a five-point Likert scale. Results: A total of 383 criteria were extracted from 18 critical appraisal tools and a literature review on the quality of mixed methods studies, and 60 were retained. In the first and second rounds of the e-Delphi, 73 and 56 experts participated, respectively. Consensus was reached for six qualitative criteria, eight survey criteria, and seven mixed methods criteria. These results led to modifications of eight of the 11 MMAT (version 2011) criteria. Specifically, we reformulated two criteria, replaced four, and removed two. Moreover, we added six new criteria. Conclusion: Results of this study allowed us to improve the content validity of this tool, revise it, and propose a new version (MMAT version 2018).
AB - Objective: The mixed methods appraisal tool (MMAT) was developed for critically appraising different study designs. This study aimed to improve the content validity of three of the five categories of studies in the MMAT by identifying relevant methodological criteria for appraising the quality of qualitative, survey, and mixed methods studies. Study Design and Setting: First, we performed a literature review to identify critical appraisal tools and extract methodological criteria. Second, we used a two-round modified e-Delphi technique. We asked three method-specific panels of experts to rate the relevance of each criterion on a five-point Likert scale. Results: A total of 383 criteria were extracted from 18 critical appraisal tools and a literature review on the quality of mixed methods studies, and 60 were retained. In the first and second rounds of the e-Delphi, 73 and 56 experts participated, respectively. Consensus was reached for six qualitative criteria, eight survey criteria, and seven mixed methods criteria. These results led to modifications of eight of the 11 MMAT (version 2011) criteria. Specifically, we reformulated two criteria, replaced four, and removed two. Moreover, we added six new criteria. Conclusion: Results of this study allowed us to improve the content validity of this tool, revise it, and propose a new version (MMAT version 2018).
KW - Delphi technique
KW - Mixed methods research
KW - Qualitative research
KW - Quality appraisal
KW - Surveys
KW - Systematic review
UR - http://www.scopus.com/inward/record.url?scp=85064313713&partnerID=8YFLogxK
UR - http://www.mendeley.com/research/improving-content-validity-mixed-methods-appraisal-tool-modified-edelphi-study
U2 - 10.1016/j.jclinepi.2019.03.008
DO - 10.1016/j.jclinepi.2019.03.008
M3 - Article
C2 - 30905698
AN - SCOPUS:85064313713
SN - 0895-4356
VL - 111
SP - 49
EP - 59
JO - Journal of Clinical Epidemiology
JF - Journal of Clinical Epidemiology
ER -