In my last post, I talked about the new report from the National Professional Development Center on Autism Spectrum Disorders with a new review of evidence-based practices in autism. I want to take a brief....I promise I will make it brief...post to talk about what the criteria were for a practice to be put in the evidence-based column. (If you don't feel like reading the whole thing, you can skip to the bottom for the Practice Point). They reviewed articles published between 1990 and 2011, so they covered a span of more than two decades, which was more expansive than past reviews. The report came out in 2014, so that gives you a good idea of how long it takes to do this kind of review.
So, what did it take for a study to be included in the analysis?
Participants
First, the studies had to focus on individuals with ASD. Many included individuals with multiple disabilities (intellectual disability was the most common), but the participants had to have a diagnosis of one of the autism spectrum disorders that were part of the diagnostic criteria at the time the study was conducted. The participants had to be from birth to 22 years of age, although the authors note that most of the research continues to focus on the elementary ages.
Types of Interventions
Second, they focused on interventions that were behavioral, educational, or developmental. Medical, nutritional, and alternative / complementary approaches were not included. These were designed to be interventions that could be implemented in homes and schools. Thus, no diets, vitamins, hyperbaric chambers, or dolphin therapies were reviewed or included.
Research Design
Third, the research had to consist of actual research studies (as I tell my students, it has to have a methods section), and it had to compare the intervention to a group or condition with no intervention or a different intervention. They included both group designs and single subject research designs. A group design could be experimental (e.g., random assignment to groups), quasi-experimental, or a regression discontinuity design, all of which use statistics to compare the intervention condition to other groups. They also reviewed studies that used single subject designs, in which the participant's performance or behavior is compared between an intervention condition and a baseline or non-treatment condition. Single subject designs are carefully constructed with specific procedures, so they are more stringent than just a case study.
In order to qualify as an evidence-based practice, an intervention had to have one of the following:
- At least 2 well-designed experimental or quasi-experimental group design studies conducted by at least 2 different researchers or research groups.
- At least 5 well-designed single subject designs conducted by at least 3 different research groups and encompassing a total of at least 20 participants across all the studies. Single subject designs typically have only 1 to 3 participants, so we consider the results generalizable once they have been replicated across enough participants.
- A combination of at least 1 well-designed experimental or quasi-experimental group design and at least 3 well-designed single subject designs, conducted by at least 2 different research groups.
So, using these criteria, and a combination of reviewers from the NPDC and 159 outside reviewers who were trained to specific standards, they narrowed the 29,105 articles they initially found down to 456 articles that were included in the evidence base. For more information on the process they used, I highly recommend reading the report; you can also see the tools they used to train the reviewers, which I think are excellent.
From this they identified 24 evidence-based practices that we will talk about in the next post (or few posts, depending on how carried away I get), and they also identified some practices that have research to support them but did not meet the strict criteria noted above. I will talk about those in future posts and then finish up with a review of what all this means.
Practice Point
Essentially, the key point of this post is that the process they used was based on "industry standards," for lack of a better term. They incorporated strategies for the review that have been used in other evidence-based assessments and followed the guidelines put forth by the educational field. The report itself has fact sheets and descriptions of each of the 24 evidence-based practices that are a good starting point for planning interventions. While most of us won't implement just one of these interventions, and one intervention doesn't address all the needs of the students in your class, these are practices we can use as a starting point to develop individual programs. Then we use our data collection strategies to determine whether an evidence-based intervention is effective for this particular student.
So, until next time,
This series of posts is based on the NPDC's new report.
Wong, C., Odom, S. L., Hume, K., Cox, A. W., Fettig, A., Kucharczyk, S., ... Schultz, T. R. (2013). Evidence-based practices for children, youth, and young adults with Autism Spectrum Disorder. Chapel Hill: The University of North Carolina, Frank Porter Graham Child Development Institute, Autism Evidence-Based Practice Review Group.
This report is available online at