
Real Impact for Students
Enduring effects on student achievement are rare. But Leading Educators is proving that progress is possible with sophisticated and practical research.
When resources are tight and the stakes are so high, evidence-based work matters.
That’s why we are committed to using rigorous evaluation methods to understand how our interventions shift student outcomes. Within partnerships, we help partners use timely, practical sources of evidence, such as student work, observations, and exit tickets, to surface learning and inform instruction in real time.
September 2024: Evidence of Lasting Improvement
A peer-reviewed study published in the Journal of Research on Educational Effectiveness in September 2024 found statistically significant increases in student ELA and math proficiency across three multi-city regions as a result of Leading Educators’ fellowship models.
The researchers found that a school’s participation in Leading Educators’ content-specific fellowship program increased student proficiency rates on both math and ELA state achievement exams, during the program and after it ended. The gains persisted for two years after programming concluded.

Fewer than half of the math and science professional learning (PL) programs included in a recent research synthesis showed positive impacts on teacher knowledge and practice, and only one-third showed positive impacts on student outcomes.
The Every Student Succeeds Act (ESSA) emphasizes high standards for evidence so that schools and school systems can be sure they are investing in programs that actually make a difference.
We’re working to meet this bar through innovative design and rigorous evaluation. Four rigorous studies supported by external research experts conclude that Leading Educators’ approach to PL has above-average positive effects on students’ ELA and math learning.
- A RAND study of Leading Educators’ fellowship model found significant effects on math learning in Louisiana.
- A RAND-supported study found significant effects after just one year of content-specific programming in Louisiana and Michigan.
- A study supported by Dr. Matthew Steinberg found significant effects on student learning that endure up to two years after content-specific programming.
- A randomized controlled trial led by RAND found significant effects on student math and ELA achievement in Chicago after one year.
- Soon-to-be-released quasi-experimental studies of partnerships in Charleston County School District’s turnaround schools and Harlem Community District 5 show positive effects on literacy.
Student Outcome Highlights
Across our research portfolio and evaluations of our existing partnerships, we see promising momentum toward greater student readiness for college and careers.

- Students gain 6-11 months of additional learning. On average, 18-40% of students are meeting grade-level standards at the beginning of a partnership.
- 73% of classes were observed offering grade-level curricular materials, compared to just 26% in ELA and 42% in math nationally. Additionally, 70% of program participants increase expectations for students.
- Students improve their sense of belonging, relationships with teachers, and self-efficacy.
Highlight: Chicago Randomized Controlled Trial
Job-embedded, content-focused professional learning increased student learning in Chicago
A July 2022 study by RAND Corporation researchers Kata Mihaly, Isaac M. Opper, and Lucas Greer found that students attending schools randomly assigned to the Chicago Collaborative program had statistically significantly higher test scores than students attending schools randomly assigned to the control group.
These findings challenge the misconception that teacher professional development is costly and ineffective: the content and components matter.
Why We Use Effect Sizes
What is an effect size?
Often, student achievement data are reported as changes in proficiency. But because states have different cut-off scores for “proficiency,” it’s difficult to compare outcomes across geographic areas. Instead, all of our evaluations report results in effect sizes.
An effect size is a way to quantify the difference between two groups and gauge the efficacy of an intervention. Because the outcomes of an intervention come in many different forms and scales, the effect is usually expressed in standard deviation units, which allows comparison across different outcomes and studies.
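As a sketch of the most common formulation, the standardized mean difference (often called Cohen’s d; individual studies may use regression-adjusted variants), the effect size divides the difference in group means by the pooled standard deviation:

$$d = \frac{\bar{x}_T - \bar{x}_C}{s_p}, \qquad s_p = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}$$

where $\bar{x}_T$ and $\bar{x}_C$ are the mean outcomes of the treatment and comparison groups, $s_T$ and $s_C$ their standard deviations, and $n_T$ and $n_C$ their sample sizes.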
Using an improvement index
While effect sizes make sense to researchers, they are often less familiar to practitioners. As researcher Robert Slavin writes, “Let’s say a given program had an effect size of +0.30 (or 30% of a standard deviation). Is that large? Small? Is the program worth doing or worth forgetting? There is no simple answer, because it depends on the quality of the study.”
We use an improvement index to address this challenge. The improvement index is the expected change in percentile rank for an average comparison group student if the student had received the intervention. This is a measure used by the What Works Clearinghouse to help readers understand the practical importance of an intervention’s effect. Baird & Pane (2019) compared several options to translate effect sizes and found that the translation to percentiles is the strongest method compared to years of learning, benchmarking (comparing against other estimated effects), and thresholds (likelihood that a student will attain some level of achievement).
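To make the percentile translation concrete, here is a minimal sketch, assuming approximately normally distributed outcomes; the helper function is ours for illustration, not WWC code:

```python
# Sketch: translating an effect size into a WWC-style improvement index,
# assuming outcomes are approximately normally distributed.
from statistics import NormalDist

def improvement_index(effect_size: float) -> float:
    """Expected change in percentile rank for an average comparison-group
    student, in percentile points: 100 * Phi(effect_size) - 50."""
    return 100 * NormalDist().cdf(effect_size) - 50

# Slavin's example: an effect size of +0.30 moves the average comparison
# student from the 50th to roughly the 62nd percentile.
print(round(improvement_index(0.30), 1))  # 11.8
```

In other words, an effect size of +0.30 corresponds to an improvement index of about +12 percentile points for the average comparison-group student.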
Other organizations and the media often translate effects into “additional years of learning.” Be cautious about that! Here’s why:
- Conceptually misleading: The translation assumes learning is linear, ignoring that learning rates depend heavily on student age and on whether school is in session.
- Statistical uncertainty: Of the common translations, years of learning carries the most statistical uncertainty.
- Could produce unreasonable values: Highly implausible results are possible, such as many multiples of a year or negative values.
- Results depend on the method: There are many ways to estimate a “year of learning,” and each can give substantially different results (see the worked example below).
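A quick worked example of that last point, using hypothetical annual growth rates chosen purely for illustration: the same +0.30 effect yields very different headlines depending on the growth assumption.

$$\text{years of learning} = \frac{\text{effect size}}{\text{assumed annual growth}}, \qquad \frac{0.30\ \text{SD}}{0.40\ \text{SD/year}} = 0.75\ \text{years} \quad \text{vs.} \quad \frac{0.30\ \text{SD}}{0.15\ \text{SD/year}} = 2.0\ \text{years}$$

The improvement index avoids this sensitivity because it depends only on the effect size itself.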
On the Horizon: Looking at Student Perspectives
This year, we’re working to provide a deeper look into student experience and outcomes beyond academics. We look forward to sharing insights from both our Teaching for Equity student survey and Panorama Student Success.
