The original funding for developing and piloting the Special Education Research Accelerator (SERA) was provided by a grant from the Institute of Education Sciences (IES). Though the pandemic presented many obstacles, the project resulted in, among other accomplishments, the SERA website, which serves as a hub for crowdsourced research studies conducted by SERA; a network of approximately 370 SERA partners (special education researchers interested in conducting crowdsourced research) throughout the US; and a pilot RCT replication examining the effects of elaborative interrogation on the retention of science facts among elementary students with high-incidence disabilities across eight SERA research partners (we are currently writing up the results of that study; check our blog for a forthcoming summary). We learned a lot working with our partners on this project. One thing that stood out to us was that although we had developed infrastructure and procedures for crowdsourcing research studies across many researchers and sites, a similar process for crowdsourcing the planning of research did not yet exist. Therefore, we proposed (and were fortunate to receive funding for) an IES grant, which we're referring to as SERA2, to expand SERA by developing and piloting procedures and supports for crowdsourcing the development of lines of inquiry that systematically investigate effect heterogeneity for the purpose of estimating generalizability boundaries.
Generalization requires understanding sources of variation that may amplify or dampen intervention effects (Stuart et al., 2011; Tipton, 2012; Tipton & Olsen, 2018). Cronbach identified four classes of contextual variables that can potentially affect the size of intervention effects: variations in units (or participants), treatments (or versions of the intervention), observations (or outcomes), and/or settings (UTOS). For example, the effects of an intervention may vary for students with learning disabilities compared to students with autism, when implemented in small groups versus individually, when delivered by a reading specialist versus a paraprofessional, or for some combination of these characteristics.
To fully inform policy and practice about the effectiveness of programs and interventions, researchers should examine treatment effect heterogeneity across key learner populations, treatment variations, outcomes, and settings. It seems to us that this is just the type of information policymakers and practitioners want to know: Does this intervention work for students with autism? Does it work when implemented in small groups? However, researchers seldom design series of conceptual replication studies that systematically examine effect heterogeneity across key moderator variables. And if a single researcher or research team were to design such a series of studies, it would take them many years, if not decades, to conduct studies examining all the possible combinations of key moderator variables and fully examine effect heterogeneity.
In the first stage of SERA2, we worked with a Consensus Panel to identify key moderator variables across which to examine effect heterogeneity for repeated reading, a commonly used intervention for improving the reading performance of students with learning disabilities. The Consensus Panel included six experts in repeated reading and/or reading instruction for culturally and linguistically diverse students with learning disabilities. Panel members attended a two-day meeting at the University of Virginia to develop an initial list of key moderator variables for repeated reading. We will then use this list of hypothesized moderators to design a series of conceptual replication studies that investigate systematic sources of effect heterogeneity for repeated reading for students with learning disabilities. Drs. Scott Ardoin (University of Georgia), Young-Suk Kim (University of California-Irvine), Endia Lindo (Texas Christian University), Michael Solis (University of California-Riverside), Elizabeth Stevens (University of Kansas), and Jade Wexler (University of Maryland) participated in a Nominal Group Technique to develop initial consensus on the most important moderator variables. The Nominal Group Technique involves four stages: idea generation, nomination, discussion, and ranking. Day 1 ended with the experts ranking the importance of the nominated moderator variables in each of the UTOS categories. After we shared the results of the rankings, the experts re-nominated, re-discussed, and re-ranked key moderator variables for repeated reading on Day 2. The highest-ranked variables in each UTOS category at the end of Day 2 were:
- Units: students with learning disabilities with low vs. high decoding skills
- Treatments: (a) difficulty of passages and (b) modeling skilled reading of passages (tie)
- Observations: type of oral reading fluency measure
- Settings: individual vs. group administration
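To give a rough sense of how quickly design cells multiply when moderators are fully crossed, here is a minimal Python sketch using the five highest-ranked moderators above. The two-level codings (e.g., passage difficulty levels, fluency measure types) are our illustrative assumptions for this sketch, not the project's actual study specifications:

```python
from itertools import product

# Hypothetical two-level codings of the panel's highest-ranked moderators;
# the labels below are illustrative, not the project's actual design values.
moderators = {
    "decoding_skill": ["low", "high"],                       # Units
    "passage_difficulty": ["instructional", "frustration"],  # Treatments (a)
    "modeling": ["with_model", "without_model"],             # Treatments (b)
    "orf_measure": ["wcpm", "maze"],                         # Observations
    "grouping": ["individual", "small_group"],               # Settings
}

# Each cell of the fully crossed design is one candidate replication study.
cells = [dict(zip(moderators, combo)) for combo in product(*moderators.values())]
print(len(cells))  # 2^5 = 32 candidate study conditions
```

Even with only five two-level moderators, a full crossing yields 32 conditions, which helps explain why a single research team could not feasibly run the whole series and why a crowdsourced, integrated design is attractive.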
Using these variables, co-PI Dr. Vivian Wong and Project Consultant Dr. Peter Steiner (University of Maryland) are developing a series of conceptual replication studies (an integrated replication design) to systematically investigate the effects of repeated reading across the many combinations of the levels of these variables. We will then conduct focus groups with practitioners who have experience implementing repeated reading and with a broader group of researchers with expertise in reading intervention to garner feedback on the selected moderator variables and the draft integrated replication design. In the second stage of the project, we will involve SERA research partners in crowdsourcing the piloting of selected studies from the integrated replication design.
We will post progress updates here as the project progresses. We are excited about the potential of identifying key moderator variables for other commonly used interventions in special education, with the ultimate goal of informing policy and practice by crowdsourcing studies across many research teams to systematically examine effect heterogeneity within a short time frame.