
Table 7 Classification of evidence assessment items used by NPDC/NCAEP for studies with group and single-subject designs

From: Evaluation parameters for evidence-based practices for people with autism spectrum disorder: a narrative review of group and single-subject design studies

NPDC (2010, 2014); NCAEP (2020)

Group design quality indicators

1. Does the study have experimental and control/comparison groups?
2. Were appropriate procedures used to increase the likelihood that relevant characteristics of participants in the sample were comparable across conditions? To meet this standard, one of the following criteria must be met: (a) participants were randomly assigned across study conditions, (b) participants were matched on key demographic variables, OR (c) researchers statistically controlled for effects of differing key variables to ensure equivalence of groups.
3. Was there evidence for adequate reliability of key outcome measures? And/or, when relevant, was inter-observer reliability assessed and reported at an acceptable level?
4. Were outcomes for capturing the intervention's effect measured at appropriate times (at least pre- and posttest)?
5. Was the intervention described and specified clearly enough that critical aspects could be understood and that it could be replicated by another interventionist?
6. Was the control/comparison condition(s) described?
7. Were data analysis techniques appropriately linked to key research questions and hypotheses?
8. Was attrition NOT a significant threat to internal validity? (Reviewers can select the "Not reported" checkbox answer.)
9. Does the research report statistically significant effects of the practice for individuals with ASD for at least one outcome variable?
10. Were the measures of effect attributed to the intervention (i.e., no obvious unaccounted confounding factors)?

Single-case design quality indicators

1. Does the dependent variable align with the research question or purpose of the study?
2. Was the dependent variable clearly defined such that another person could identify an occurrence or nonoccurrence of the response?
3. Does the measurement system align with the dependent variable and produce a quantifiable index?
4. Did a secondary observer collect data on the dependent variable for at least 20% of sessions across conditions? (Reviewers can select the "Not reported" checkbox answer.)
5. Was mean interobserver agreement (IOA) 80% or greater, OR kappa 0.60 or greater?
6. Is the independent variable described with enough information to allow a clear understanding of the critical differences between the baseline and intervention conditions, or were references to other materials used if the description does not allow a clear understanding?
7. Was the baseline described in a manner that allows a clear understanding of the differences between the baseline and intervention conditions? (Reviewers can select "Not reported" for ATDs only.)
8. Are the results displayed in graphical format showing repeated measures for a single case (e.g., behavior, participant, group) across time?
9. Do the results demonstrate changes in the dependent variable when the independent variable is manipulated by the experimenter at three different points in time or across three phase repetitions? For ATDs, there must be at least four repetitions of the alternating sequence; for changing-criterion designs, there should be baseline data plus three intervention phases.
  1. Table 7 was adapted from the assessment forms on the quality of evidence presented in Appendix 2 of NPDC's synthesis report (2014) and Appendix 1 of NCAEP's report (2020). Boldface text refers to content present only in NCAEP's (2020) version; ATD, alternating treatment design. Instructions for completing the form are provided only in NPDC's (2014) report ("Instructions: read each item and check the appropriate box. If you check 'NO' at any time, the article will not be included as evidence for a practice"). In NCAEP's (2020) version, a checkbox with the answer "Not reported" was included in some items.