Utility of the Crowe Critical Appraisal Tool: A clinician researcher’s perspective

Grace S. GRIFFITHS, PhD(c), MSc OT[1]

I recently faced a challenge. I wished to critically appraise and synthesise all the evidence on non-pharmacological therapies for a specific condition, and I wanted a single tool to appraise all studies regardless of their research design, so as to enable direct comparison between studies of different designs, facilitate integration, and enrich the conclusions and clinical recommendations. What I needed was a single Critical Appraisal Tool (CAT) designed to appraise studies of diverse designs, judging each study within its own methodological domain. In my quest for such a tool, I found that few allowed the scope of designs I wished to include, many had been developed for one-off use in specific research projects, and few had gone through a validation or reliability process. Then I discovered the Crowe Critical Appraisal Tool (CCAT).

Developed by Michael Crowe and Lorraine Sheppard to address this very gap, the CCAT draws its content from seven reporting guidelines, research methods theory, and an analysis of 44 existing CATs (Crowe & Sheppard, 2011a). Twenty-two items fall into eight distinct quality categories: Preliminaries, Introduction, Design, Sampling, Data collection, Ethical matters, Results, and Discussion. Each item is marked as present, absent but should be present, or not applicable to the research design of the study being appraised; evidence must be stated in the study and cannot be assumed. The appraiser then scores each category on a scale from zero (no evidence) to five (highest evidence), based on the marked items plus their overall assessment of that category.
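
For readers who, like me, end up tallying scores across a review team, the scoring arithmetic is simple enough to capture in a few lines. The sketch below is purely illustrative and is not taken from the CCAT user guide; the function name, the validation checks, and the percentage reporting are my own assumptions.

```python
# Illustrative sketch only: recording CCAT category scores and computing a total.
# Category names follow the article; reporting the total as a percentage is an
# assumption for illustration, not a prescription from the CCAT user guide.

CATEGORIES = [
    "Preliminaries", "Introduction", "Design", "Sampling",
    "Data collection", "Ethical matters", "Results", "Discussion",
]

def ccat_total(scores):
    """Sum eight category scores (each 0 to 5) and express the result as a percentage."""
    if set(scores) != set(CATEGORIES):
        raise ValueError("Expected one score for each of the eight categories")
    for category, score in scores.items():
        if not 0 <= score <= 5:
            raise ValueError(f"{category}: score must be between 0 and 5")
    total = sum(scores.values())
    return total, 100 * total / (5 * len(CATEGORIES))

# Example: a hypothetical appraisal scoring 4 in every category
example = dict.fromkeys(CATEGORIES, 4)
print(ccat_total(example))  # (32, 80.0)
```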

Quality was ensured across three stages when developing the CCAT, the detail and rationale of which are described in a subsequent publication (Crowe & Sheppard, 2011b). These stages were developing the scoring system and user guide, pre-testing and amending the tool where necessary, and testing the CCAT against five existing CATs previously tested for validity and reliability.

This latter process showed that all the categories used in the CCAT, except Preliminaries (formerly called Preamble), could be considered suitable for critical appraisal. There was, however, insufficient reason to exclude Preliminaries, because it could be compared against only one of the five alternative CATs.

Finally, reliability of the CCAT scores was assessed with five participants (Crowe et al., 2012), using intraclass correlation coefficients to analyse consistency across test scores, alongside semi-structured interviews to gain participant feedback on the tool. Overall interrater reliability was moderate (above 0.7), with variation in scores likely due to participants’ familiarity, or lack thereof, with the research designs and topics of the randomly assigned studies. Even so, a related randomised trial (Crowe et al., 2011) showed that using the CCAT produced much better score reliability than informal appraisal alone, and reduced the influence of subject matter knowledge.
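
As an aside, readers wanting to run a similar consistency check on their own team’s scores can compute intraclass correlation coefficients with standard statistical software. The snippet below is a minimal sketch using made-up scores and the Python pingouin package; it illustrates the general approach only and is not a reproduction of the analysis in Crowe et al. (2012).

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: each row is one rater's total CCAT score (as a
# percentage) for one appraised study. Values are invented for illustration.
scores = pd.DataFrame({
    "study": ["A", "A", "B", "B", "C", "C", "D", "D"],
    "rater": [1, 2, 1, 2, 1, 2, 1, 2],
    "score": [80, 75, 60, 65, 90, 88, 55, 62],
})

# Intraclass correlation coefficients for consistency/agreement across raters
icc = pg.intraclass_corr(data=scores, targets="study", raters="rater", ratings="score")
print(icc[["Type", "Description", "ICC"]])
```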

Since its inception, the CCAT has been used many times in integrative and systematic reviews to appraise quality in studies of diverse designs. A quick review of published studies showed that it has been used more than 60 times in the last year alone. It appears especially popular in the medical and allied health sciences, and commonly facilitates critical appraisal performed by a team of researchers.

I found the user guide clear and the tool straightforward. Like the participants in the reliability study (Crowe et al., 2012), I found the CCAT easiest to use at the ends of the research continuum (e.g. true experimental designs and narrative case studies) and harder to apply to descriptive, exploratory, and observational designs, perhaps because of the variability within those designs. Scores also varied more between team members for these designs, requiring more discussion to reach consensus. Given its design scope, validity, and moderate reliability regardless of background knowledge, the CCAT is a good option for clinicians and researchers seeking to perform rigorous, integrative systematic reviews that include diverse research designs. Where the researcher has less experience with a topic or a specific research design, I would recommend seeking experienced supervision, given the subjective nature of the ratings.

References

  • Crowe, M. & Sheppard, L. (2011a). A review of critical appraisal tools show they lack rigor: Alternative tool structure is proposed. Journal of Clinical Epidemiology, 64(1), 79-89.

  • Crowe, M. & Sheppard, L. (2011b). A general critical appraisal tool: An evaluation of construct validity. International Journal of Nursing Studies, 48(12), 1505-1516.

  • Crowe, M., Sheppard, L. & Campbell, A. (2011). Comparison of the effects of using the Crowe Critical Appraisal Tool versus informal appraisal in assessing health research: A randomised trial. International Journal of Evidence-Based Healthcare, 9(4), 444-449.

  • Crowe, M., Sheppard, L. & Campbell, A. (2012). Reliability analysis for a proposed critical appraisal tool demonstrated value for diverse research designs. Journal of Clinical Epidemiology, 65(4), 375-383.

[1] Department of Orthopaedic Surgery and Musculoskeletal Medicine, University of Otago, 2 Riccarton Avenue, Christchurch 8011, New Zealand. Email: grace.griffiths@postgrad.otago.ac.nz
