Student Outcomes
Enduring effects on student achievement are rare. But Leading Educators is proving that progress is possible.
We use rigorous evaluation methods to understand how students in LE schools are doing relative to similar students in other schools. That allows educators to track success over a longer horizon and work toward common, measurable outcomes.
During the school year, our learning model encourages teachers to regularly draw on other timely evidence, including quizzes, student work, and exit tickets, to make adjustments in real time.
Fewer than half of the math and science PL programs included in a recent research synthesis showed positive impacts on teacher knowledge and practice, and only one-third showed positive impacts on student outcomes.
The Every Student Succeeds Act (ESSA) emphasizes high standards for evidence so that schools and school systems can be sure they are investing in programs that actually make a difference.
We’re working to meet this bar with both innovative design and evaluation methods. Four rigorous studies supported by external research experts conclude that Leading Educators’ approach to PL has above-average positive effects on students’ ELA and math learning.
- A RAND study of Leading Educators’ fellowship model found significant effects on math learning in Louisiana.
- A RAND-supported study found significant effects after just one year of content-specific programming in Louisiana and Michigan.
- A study supported by Dr. Matthew Steinberg found significant effects on student learning that endure up to two years after content-specific programming.
- A randomized controlled trial led by RAND found significant effects on student math and ELA achievement in Chicago after one year.
2022: Long-Term Study
A new quasi-experimental study of our work offers fresh evidence of the power of teacher professional development, finding significant results for students.
This study used ten years of student data to measure the effects of teacher participation in Leading Educators programming, comparing 29 treatment schools with 529 comparison schools. Treatment schools had a higher proportion of students who identified as Hispanic and Black, students who were English language learners, and students who were identified as neurodiverse learners.
Students in schools where teachers participated in Leading Educators programming made statistically significant improvements in math and ELA proficiency that considerably exceeded the average effect size for elementary and middle school interventions.
28% increase in Math Achievement
The percentage of students proficient or advanced in math rose by 8.5 percentage points over the four-year period.
17% increase in ELA Achievement
The percentage of students proficient or advanced in ELA rose by 5.3 percentage points over the four-year period; this effect was significant at the 10% level.
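A note on units: the headline figures are relative increases, while the parenthetical figures are absolute percentage-point gains; the two are linked by simple division. A minimal Python sketch (the baseline proficiency rates below are back-calculated from the reported figures, not taken from the study itself):

```python
def relative_increase(gain_pp: float, baseline_pp: float) -> float:
    """Relative (%) increase implied by a percentage-point gain over a baseline rate."""
    return gain_pp / baseline_pp * 100

# Baselines are back-calculated (8.5 / 0.28 and 5.3 / 0.17), not study data.
print(round(relative_increase(8.5, 30.4)))  # ~28% relative increase (math)
print(round(relative_increase(5.3, 31.2)))  # ~17% relative increase (ELA)
```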
2022: Promising Evidence from Chicago
Chicago Collaborative Impact on Student Achievement
A new randomized controlled trial by the RAND Corporation shows that educators significantly increased student achievement after participating in Leading Educators’ Chicago-based PD program. These findings challenge the misconception that teacher professional development is costly and ineffective: the content and components matter.
Why We Use Effect Sizes
What is an effect size?
Often, student achievement data are reported as changes in proficiency. But because states have different cut-off scores for “proficiency,” it’s difficult to compare outcomes across geographic areas. Instead, all of our evaluations report results in effect sizes.
An effect size is a way to quantify the difference between two groups and gauge the efficacy of an intervention. Because the outcomes of an intervention come in many different forms and scales, the effect is usually estimated in standard deviation units, which allows for comparison across different outcomes and studies.
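As a rough illustration only (the studies above rely on more sophisticated statistical models), a simple standardized mean difference, often called Cohen's d, can be computed in a few lines of Python; all scores here are made up:

```python
import statistics

def effect_size(treatment: list[float], comparison: list[float]) -> float:
    """Standardized mean difference between two groups (Cohen's d)."""
    mean_diff = statistics.mean(treatment) - statistics.mean(comparison)
    # The pooled standard deviation puts the raw difference on a common
    # scale, which is what makes effects comparable across tests and studies.
    n_t, n_c = len(treatment), len(comparison)
    pooled_var = ((n_t - 1) * statistics.variance(treatment)
                  + (n_c - 1) * statistics.variance(comparison)) / (n_t + n_c - 2)
    return mean_diff / pooled_var ** 0.5

# Hypothetical scale scores for a handful of students in each group.
treatment_scores = [705, 710, 695, 700, 703, 702]
comparison_scores = [700, 705, 695, 702, 698, 706]
print(round(effect_size(treatment_scores, comparison_scores), 2))  # ~ +0.32
```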
Using an improvement index
While effect sizes make sense to researchers, they are often less familiar to practitioners. As researcher Robert Slavin writes, “Let’s say a given program had an effect size of +0.30 (or 30% of a standard deviation). Is that large? Small? Is the program worth doing or worth forgetting? There is no simple answer, because it depends on the quality of the study.”
We use an improvement index to address this challenge. The improvement index is the expected change in percentile rank for an average comparison-group student if that student had received the intervention. It is a measure used by the What Works Clearinghouse to help readers understand the practical importance of an intervention’s effect. Baird & Pane (2019) compared several options for translating effect sizes and found that translation to percentiles is the strongest method, ahead of years of learning, benchmarking (comparing against other estimated effects), and thresholds (the likelihood that a student will attain some level of achievement).
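Under the usual normality assumption, the improvement index is simply the standard normal CDF evaluated at the effect size, re-expressed as a change in percentile points. A minimal sketch:

```python
from statistics import NormalDist

def improvement_index(effect_size: float) -> float:
    """Expected percentile-rank change for an average comparison-group
    student had they received the intervention (the WWC improvement index)."""
    # That student starts at the 50th percentile; shift them up by the
    # effect size (in standard deviations) and read off the new percentile.
    return NormalDist().cdf(effect_size) * 100 - 50

print(round(improvement_index(0.30), 1))  # an effect of +0.30 SD ~ +11.8 percentile points
```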
Other organizations and the media often translate effects into “additional years of learning.” Be cautious about that! Here’s why:
- Conceptually misleading: The translation assumes that learning is linear and does not take into account that learning rates are highly dependent on student age and whether school is in session.
- Statistical uncertainty: Among the translations compared, years of learning carries the greatest statistical uncertainty.
- Could produce unreasonable values: Highly implausible results are possible, such as many multiples of a year or negative values.
- Results depend on the method: There are many ways to estimate a typical year of growth, and each can give substantially different results, as the sketch below illustrates.
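To see why the method matters, consider a toy sketch in which the same effect size maps to very different “years of learning” depending on the assumed size of one year’s typical growth (the annual-growth values are illustrative, not drawn from any study cited here):

```python
def years_of_learning(effect_size: float, annual_gain_sd: float) -> float:
    """Translate an effect size into 'years of learning' by dividing it by
    one year's typical growth, itself expressed as an effect size."""
    return effect_size / annual_gain_sd

# Illustrative annual gains: growth per year tends to be much larger in early
# grades than later ones, so the same +0.30 effect yields wildly different 'years'.
for grade, annual_gain in [("grade 2", 1.0), ("grade 5", 0.4), ("grade 8", 0.2)]:
    print(f"{grade}: {years_of_learning(0.30, annual_gain):.1f} years")
# grade 2: 0.3 years | grade 5: 0.8 years | grade 8: 1.5 years
```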
On the Horizon: Looking at Student Perspectives
This year, we’re working to provide a deeper look into student experience and outcomes beyond academics. We look forward to sharing insights from both our Teaching for Equity student survey and Panorama Student Success.