
Student Outcomes

Showing Real Results with Best-in-Class Evaluation

Enduring effects on student achievement are rare. But Leading Educators is proving that progress is possible.

We use rigorous evaluation methods to understand how students in LE schools are doing relative to similar students in other schools. That allows educators to track success from a longer vantage point and work toward common, measurable outcomes.

During the school year, our learning model encourages teachers to regularly use other timely evidence, including quizzes, student work, and exit tickets, to make adjustments in real time.

“Professional learning (PL) can meaningfully improve teacher practice and student outcomes. However, few PL programs have been shown to have positive impacts at scale…
Less than half of math and science PL programs included in a recent research synthesis showed positive impacts on teacher knowledge and practice, and only one-third showed positive impacts on student outcomes.”
The Research Partnership for Professional Learning
We Have a Growing Record of Strong Student Impact

The Every Student Succeeds Act (ESSA) emphasizes high standards for evidence so that schools and school systems can be sure they are investing in programs that actually make a difference.

We’re working to meet this bar with both innovative design and evaluation methods. Four rigorous studies supported by external research experts conclude that Leading Educators’ approach to PL has above-average positive effects on students’ ELA and math learning.

2015: Louisiana Fellowship

A RAND study of Leading Educators’ fellowship model found significant effects on math learning in Louisiana.

2019: Louisiana and Michigan

A RAND-supported study found significant effects after just one year of content-specific programming in Louisiana and Michigan.

2022: Teacher Leadership Fellowships

A study supported by Dr. Matthew Steinberg found significant effects on student learning that endure up to two years after content-specific programming.

2022: Chicago

A randomized controlled trial led by RAND found significant effects on student math and ELA achievement in Chicago after one year.

2022: Long-Term Study

A new quasi-experimental study of our work provides further evidence of the power of teacher professional development, finding significant results for students.

This study used ten years of student data to measure the effects of teacher participation in Leading Educators programming. The comparison group included 529 schools and the treatment group included 29 schools. Treatment schools had higher proportions of students who identified as Hispanic or Black, students who were English language learners, and students who were identified as neurodiverse learners.


Students in schools where teachers participated in Leading Educators programming made statistically significant improvements in math and ELA proficiency that considerably exceeded the average effect size for elementary and middle school interventions.


28% increase in Math Achievement

over the 4-year period in the percentage of students proficient or advanced (8.5 percentage points).


17% increase in ELA Achievement

over the 4-year period in the percentage of students proficient or advanced (5.3 percentage points). The effect was significant at the 10% level.

2022: Promising Evidence from Chicago

Chicago Collaborative Impact on Student Achievement

A new randomized controlled trial by the RAND Corporation shows that educators significantly increased student achievement after participating in Leading Educators’ Chicago-based PD program. These findings challenge the misconception that teacher professional development is ineffective and costly: the content and components matter.

Learn More

Why We Use Effect Sizes

What is an effect size?

Often, student achievement data are reported as changes in proficiency. But because states have different cut-off scores for “proficiency,” it’s difficult to compare outcomes across geographic areas. Instead, all of our evaluations report results in effect sizes.

An effect size is a way to quantify the difference between two groups and determine the efficacy of an intervention. Because the outcomes of an intervention come in many different forms and scales, the effect is usually estimated in standard deviation units, which allows for comparison across different outcomes and studies.

Using an improvement index

While effect sizes make sense to researchers, they are often less familiar to practitioners. As researcher Robert Slavin writes, “Let’s say a given program had an effect size of +0.30 (or 30% of a standard deviation). Is that large? Small? Is the program worth doing or worth forgetting? There is no simple answer, because it depends on the quality of the study.”

We use an improvement index to address this challenge. The improvement index is the expected change in percentile rank for an average comparison group student if the student had received the intervention. This is a measure used by the What Works Clearinghouse to help readers understand the practical importance of an intervention’s effect. Baird & Pane (2019) compared several options to translate effect sizes and found that the translation to percentiles is the strongest method compared to years of learning, benchmarking (comparing against other estimated effects), and thresholds (likelihood that a student will attain some level of achievement).
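The percentile translation described above can be sketched in a few lines: assuming normally distributed outcomes, an average comparison-group student sits at the 50th percentile, and an effect size of d would move them to the percentile given by the normal CDF evaluated at d. The function below is an illustrative sketch of that conversion, not code from the What Works Clearinghouse.

```python
import math

def improvement_index(effect_size):
    """Expected change in percentile rank for an average comparison-group
    student had they received the intervention, assuming normal outcomes."""
    # Normal CDF via the error function: Phi(d) = 0.5 * (1 + erf(d / sqrt(2)))
    percentile = 0.5 * (1 + math.erf(effect_size / math.sqrt(2)))
    # Shift relative to the 50th percentile, in percentile points
    return percentile * 100 - 50
```

Under these assumptions, an effect size of +0.30 corresponds to a gain of roughly 12 percentile points, which is easier for practitioners to picture than "30% of a standard deviation."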

Other organizations and the media often translate effects into “additional years of learning.” Be cautious about that! Here’s why:

  • Conceptually misleading: The translation assumes that learning is linear and does not take into account that learning rates are highly dependent on student age and whether school is in session.
  • Statistical uncertainty: Years of learning perform the worst in terms of statistical uncertainty when compared with other translations of effect sizes.
  • Could produce unreasonable values: Highly implausible results are possible, such as many multiples of a year or negative values.
  • Results depend on the method: There are many methods to estimate and each method could give substantially different results.

On the Horizon: Looking at Student Perspectives

This year, we’re working to provide a deeper look into student experience and outcomes beyond academics. We look forward to sharing insights from both our Teaching for Equity student survey and Panorama Student Success.
