Meta-analysis Grading

Meta-analysis Grading Rubric

  1. Are the results of the study (studies) valid?
  2. What are the results?
  3. How can I apply these results to patient care?
    For example, in Chapter 4 of DiCenso, Guyatt, and Ciliska, which considers health care
    interventions, the trigger questions under “Are the results valid?” include sub-questions
    such as: Were patients randomized? Was randomization concealed? Were patients analyzed
    in the groups to which they were randomized? “What are the results?” triggers different
    probes (How large was the intervention effect? How precise was the estimate of the
    intervention effect?). “How can I apply the results to patient care?” prompts yet other
    probes, such as: Were the study patients similar to the patients in my clinical
    (administrative) practice?
    Most appraisal tools provide essentially the same triggers as those described above,
    although they may phrase the questions slightly differently.

Meta-analyses of Agreement between Diagnoses Made from Clinical Evaluations and
Standardized Diagnostic Interviews

Standardized diagnostic interviews (SDIs) are essential for operationalizing diagnostic
criteria in order to increase the reliability and validity of diagnoses (Clark et al., 2012;
DerSimonian & Laird, 1986; Sutton et al., 2000). They achieve this by having interviewers ask
patients the same questions in the same order and then processing the answers through
standardized algorithms.
SDIs were developed to address the sources of error that arise from unstandardized
diagnostic interviews. They reduce several sources of error variance, including information
variance, interpretation variance, and criterion variance. The purpose of SDIs is to ensure
that appropriate diagnoses are obtained from different patients irrespective of the
interviewers' level of experience (Ho et al., 2010). SDIs are typically classified as
structured versus semistructured: structured SDIs precisely specify the questions and the
rules for processing each response, whereas semistructured SDIs permit more flexible
questions and probes and therefore require clinically trained interviewers.
Adequate test–retest and interrater reliability estimates have been reported for several
SDIs (Clark et al., 2012; Doi & Thalib, 2008; Rosenthal, 1991). Diagnoses made from clinical
evaluations are therefore often compared with diagnoses made from SDIs to determine the
extent of their agreement (DiCenso & Guyatt, 2005b). Because findings based on SDI diagnoses
are often extrapolated to clinical evaluations, meta-analyses that determine the agreement
between the two are extremely important.

Appraisal of the problem/research questions

When developing a meta-analysis, it is essential for the authors to state the research
problem or question that motivated the study (DiCenso & Guyatt, 2005a). The problem statement
should also identify the integration approach to be used, the phenomena or concepts of
interest, and the significance of the work for clinicians and nurses (Sutton et al., 2000).
In the study conducted by Rettew et al. (2009), the authors did not state an explicit
research question, but the research problem was clearly highlighted: the study aimed to
investigate the agreement between diagnoses made from clinical evaluations and diagnoses
made from SDIs. The problem that motivated the research was thus to determine whether people
would receive the same diagnoses from clinical evaluations as from SDIs, an important
question because extrapolation of SDI diagnoses to clinical practice has become very common
(Rettew et al., 2009). The value of this problem statement lies in helping the reader
understand the purpose of the study and the information that will be gathered (DiCenso &
Guyatt, 2005b; Titler, 2006).
Appraisal of the search strategy

The search strategy used by the researchers was satisfactory because a systematic review
requires an exhaustive search of the literature related to the research topic (Polit & Beck,
2012). Polit and Beck (2012) also recommend that researchers clearly state the databases they
used, which was achieved in this study: the researchers conducted a comprehensive database
search using widely used and reliable databases, namely Medline and PsycINFO (Rettew et al.,
2009). The use of these two databases was appropriate for gathering sufficient information
(Polit & Beck, 2012). Because only articles published between January 1, 1995 and December
31, 2006 were included, the 12-year window was sufficient to yield a meta-analytic pool and
recent enough to reflect contemporary findings (Ho et al., 2010; Polit & Beck, 2012).
Moreover, the search keywords included the titles and acronyms of the following SDIs:
Diagnostic Interview Schedule for Children (DISC); Diagnostic Interview for Children and
Adolescents (DICA); Diagnostic Interview Schedule (DIS); Composite International Diagnostic
Interview (CIDI); Structured Clinical Interview for DSM (SCID); Child and Adolescent
Psychiatric Assessment (CAPA); Development and Well-Being Assessment (DAWBA); Schedules for
Clinical Assessment in Neuropsychiatry (SCAN); and Mini International Neuropsychiatric
Interview (MINI). According to Polit and Beck (2012), listing the keywords, titles, and
acronyms used in the search is essential because it allows the reader to judge how extensive
and precise the database search was.

Appraisal of the sample

The literature search yielded 4,956 articles (Rettew et al., 2009). After screening, the
researchers found that only 125 of these articles reported administration of both an SDI and
a clinical evaluation (Rettew et al., 2009). An additional 13 articles were identified from
the reference sections of the initially selected articles, for a total of 138 candidate
articles (Rettew et al., 2009). The researchers did not, however, include a description of
the participants in the included studies. The sample of articles was also not fully
comprehensive, since it was limited to peer-reviewed articles written in English and
published within the 12-year window between January 1, 1995 and December 31, 2006 (Rettew et
al., 2009). The inclusion and exclusion criteria were clearly stated in the study (Rettew et
al., 2009). More precise conclusions might have been possible if a larger range of sources
had been compared (Polit & Beck, 2012).

Appraisal of the quality

It is essential that meta-analyses demonstrate efforts to enhance the validity and
reliability of the review (Rosenthal, 1991; Sutton et al., 2000). In this meta-analysis, the
data from each study were coded independently by two authors to improve quality. The data
were then summarized in narrative form, which is appropriate for a descriptive synthesis and
helps ensure that the results remain robust (Doi & Thalib, 2008).
Additionally, the included studies were required to exclude probands with conditions likely
to severely limit the possibility of an interview, such as autism or IQ below 50, since the
focus of the meta-analysis was on other disorders. To set a lower limit on statistical power,
only reported kappas based on at least 40 probands assessed with both an SDI and a clinical
evaluation were included. Furthermore, the SDIs had to assess multiple diagnoses rather than
being limited to a single diagnosis. Finally, to further assure quality, included diagnoses
had to be based on DSM-III, DSM-III-R, DSM-IV, ICD-9, or ICD-10 criteria, and ‘possible’ or
subthreshold diagnoses were not considered.
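Because the appraisal above leans on kappa-based quality thresholds, a brief illustration may
help: kappa corrects the raw percentage agreement between two sources (here, a clinical
evaluation and an SDI) for the agreement expected by chance alone. The following Python
sketch uses a hypothetical 2 x 2 table of 40 probands, not data from Rettew et al. (2009), to
show the calculation.

    # Hypothetical agreement table for one diagnosis in 40 probands:
    # rows = clinical evaluation (yes / no), columns = SDI (yes / no).
    table = [[12, 6],
             [4, 18]]

    n = sum(sum(row) for row in table)                  # total probands
    observed = (table[0][0] + table[1][1]) / n          # raw proportion agreeing
    row_totals = [sum(row) for row in table]
    col_totals = [table[0][c] + table[1][c] for c in range(2)]
    expected = sum(row_totals[i] * col_totals[i] for i in range(2)) / n ** 2
    kappa = (observed - expected) / (1 - expected)      # chance-corrected agreement
    print(round(kappa, 2))                              # 0.49 for this illustrative table

Kappa values near 0 indicate chance-level agreement and values near 1 indicate near-perfect
agreement, which is why pooled kappas can be read directly as the degree of correspondence
between clinical evaluations and SDIs.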

Appraisal of the data extraction

According to Polit and Beck (2012), extracting data from multiple studies requires that the
data first be combined and encoded; the information from all of the included studies should
then be recorded so that studies can be compared on characteristics such as language of
publication and peer-review status. This meta-analysis provides a summary of basic data on
each of the 138 studies included in this research (Rettew et al., 2009). The characteristics
of every included study were described in a table, which is a key component of data
extraction (DerSimonian & Laird, 1986). Keywords, including the SDI titles and acronyms, were
also used during data extraction.

Appraisal of the data analysis

Data analysis is crucial in any research because it allows conclusions to be drawn. In this
meta-analysis, the researchers averaged kappas, aggregated diagnoses at various levels
(levels 1-4), computed mean z'-transformed kappas, and ran meta-regressions to determine
whether people would receive the same diagnoses from clinical evaluations as from SDIs
(Polit & Beck, 2012). Heterogeneity is also important in data analysis because it can be used
to determine whether a meta-analysis of the reviewed studies is feasible (DerSimonian &
Laird, 1986). Despite insufficient heterogeneity in the sample populations (Rettew et al.,
2009), the aim of the meta-analysis was achieved. This implies that the data analysis was
sufficient to allow the researchers to derive the necessary meaning from the extracted data.
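To make the mean z' kappa step concrete, the sketch below uses made-up kappas and sample
sizes, not the authors' data, to illustrate one common pooling approach consistent with
Rosenthal (1991): each kappa is transformed with Fisher's z', averaged using weights based on
sample size, and back-transformed, and a Q statistic is computed to gauge heterogeneity
(DerSimonian & Laird, 1986). It is an illustration of the general technique rather than a
reproduction of Rettew et al.'s exact computations.

    import math

    # Hypothetical per-study kappas and proband counts (illustrative only).
    kappas = [0.30, 0.45, 0.22, 0.55]
    ns = [60, 120, 45, 80]

    # Fisher z' transformation stabilizes the variance before averaging.
    z_values = [0.5 * math.log((1 + k) / (1 - k)) for k in kappas]
    # Weight by n - 3, as is done for correlations; treating kappa this way
    # is an approximation adopted here only for illustration.
    weights = [n - 3 for n in ns]

    mean_z = sum(w * z for w, z in zip(weights, z_values)) / sum(weights)
    # Back-transform the weighted mean z' to the kappa metric.
    mean_kappa = (math.exp(2 * mean_z) - 1) / (math.exp(2 * mean_z) + 1)

    # Cochran's Q gauges heterogeneity of the study-level estimates.
    q = sum(w * (z - mean_z) ** 2 for w, z in zip(weights, z_values))

    print(round(mean_kappa, 2), round(q, 2))

A Q value that is large relative to the number of pooled estimates would suggest that a
random-effects model (DerSimonian & Laird, 1986) is more appropriate than simple weighted
averaging.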

Appraisal of the conclusions, limitations, and implications
This study aimed to determine whether people would receive the same diagnoses from clinical
evaluations as from SDIs, an important question because extrapolation of SDI diagnoses to
clinical practice has become very common (Rettew et al., 2009). Although the study findings
were valuable in addressing the research problem, some critical limitations were noted. The
authors' choice to include only published, English-language studies from a 12-year period is
one limitation, because it introduces the risk of publication bias, often referred to as bias
against the null hypothesis.
Moreover, the use of kappas fails to capture the quantitative aspects of agreement. An
implication of this meta-analysis is that it should stimulate more research on diagnostic
agreement, especially research that uses the multiple types and sources of data required to
understand psychopathology. Furthermore, the meta-analyses clearly indicate that clinical
evaluations and SDIs often yield different diagnoses; hence, research findings obtained from
SDIs cannot be automatically extrapolated to clinical evaluations.

Conclusion

The study findings revealed that the diagnoses yielded by SDIs and clinical evaluations often
differ. This carries an important clinical implication: SDI diagnoses should not continue to
be automatically extrapolated to clinical evaluations.


References

Clark, A.M., Savard, L.A., Spaling, M.A., Heath, S., Duncan, A.S. & Spiers, J.A. (2012).
Understanding help-seeking decisions in people with heart failure: A qualitative
systematic review. International Journal of Nursing Studies, 49(2), 1582–1597.
DerSimonian, R. & Laird, N. (1986). Meta-Analysis in Clinical Trials. Controlled Clinical
Trials, 7(3), 177-188.
DiCenso, A. & Guyatt, G. (2005a). Applying results to individual patients. In: A. DiCenso,
G. Guyatt, & D. Ciliska, Evidence-based nursing: a guide to clinical practice (pp.
481-489). St. Louis, MO: Elsevier Mosby.
DiCenso, A. & Guyatt, G. (2005b). Incorporating patient values. In: A. DiCenso, G. Guyatt,
& D. Ciliska, Evidence-based nursing: a guide to clinical practice (pp. 490-507). St.
Louis, MO: Elsevier Mosby.
Doi, S.A. & Thalib, L. (2008). A quality-effects model for meta-analysis. Epidemiology,
19(1), 94–100.
Ho, A.Y.K., Berggren, I. & Dahlborg-Lyckhage, E. (2010). Diabetes empowerment related to
Pender’s Health Promotion Model: A meta-synthesis. Nursing and Health Sciences,
12(2), 259–267.
Polit, D.F. & Beck, C.T. (2012). Nursing research: Generating and assessing evidence for
nursing practice (9th ed.). Philadelphia, PA: Wolters Kluwer Health/Lippincott
Williams & Wilkins.
Rettew, D.C., Lynch, A.D., Achenbach, T.M., Dumenci, L. & Ivanova, M.Y. (2009). Meta-
analyses of agreement between diagnoses made from clinical evaluations and
standardized diagnostic interviews. International Journal of Methods in Psychiatric
Research, 18(3), 169–184.
Rosenthal, R. (1991). Meta-Analytic Procedures for Social Research. New York, NY: Sage
Publications.
Sutton, A. J., Jones, D.R., Abrams, K.R., Sheldon, T.A. & Song, F. (2000). Methods for
Meta-analysis in Medical Research. London: John Wiley.

Titler, M.G. (2006). Developing an evidence-based practice. In: G. LoBiondo-Wood & J.
Haber (Eds.), Nursing research: Methods and critical appraisal for evidence-based
practice (6th ed., pp. 439-481). St. Louis, MO: Mosby Elsevier.