Bringing a Sharper Lens to Data Reviews: Part 2
Data reviews can serve multiple purposes and may be structured in a multitude of ways. To match the structure and format to the purpose, get clear about what you want to accomplish, how you want participants to show up, and how you plan to use the time.
How to use your time
When planning for your time, continue anticipating the risks that implicit expectations pose to data reviews. For better or for worse, our brains are constantly scanning and assessing our environment, including the people in it. This automatic process produces judgments that are often useful as they enable us to make predictions about what will happen and to have expectations. They can also be harmful, however, when they rely on negative stereotypes that exist within our society.
Such is the case with implicit bias. Implicit bias is “a form of bias that occurs automatically and unintentionally, that nevertheless affects judgments, decisions, and behaviors”. When left unchecked, implicit biases can create harm for anyone, whether Black, brown, or white. To ensure a fair process and outcomes, consider the following steps.
Speak your intentions
Before diving into the data, reserve a few minutes to explicitly state your intentions. Be sure to give voice to what you hope to learn from the data, the goals you have for the time, and the commitments you are making in analyzing the data. Affirm your intention to engage in a fair data review process. Invite others to help you by recognizing and challenging assumptions, sharing alternative views or perspectives, engaging in productive disagreement, and identifying questions that will require additional follow-up.
Discuss expectations upfront
Implicit bias is insidious, in part because it can be so difficult to recognize and acknowledge. One approach to recognizing and minimizing the impact of fixed ideas during data reviews is to list the hypotheses you have about what you expect to find in a dataset before you begin reviewing it. Then, after reviewing the data, ask yourself: Where were my expectations met, and where were they not? What role did my assumptions play in making me “see” particular trends in the data? What were my assumptions preventing me from seeing in these data?
Another similar strategy is to remove or hide the group labels in charts or tables and then ask reviewers to guess which outcomes belong to which groups. Reflecting afterward on where your assumptions were correct or incorrect can be another great way to recognize where expectations may have shown up. Using approaches like these to make explicit the expectations we have allows us to better consider what role our expectations play in what we perceive and believe about the data.
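The label-hiding exercise above can be done by hand, but if your outcome data live in a spreadsheet or dataframe, it is easy to script. Here is a minimal sketch in pandas; the group names and scores are entirely hypothetical, and the column names are illustrative:

```python
import pandas as pd

# Hypothetical outcome data for three student groups.
scores = pd.DataFrame({
    "group": ["East Campus", "North Campus", "West Campus"],
    "mean_score": [78.2, 84.5, 71.9],
})

# Replace real group names with neutral placeholders before the review,
# keeping a private key so labels can be restored afterward.
key = {name: f"Group {chr(65 + i)}" for i, name in enumerate(scores["group"])}
masked = scores.assign(group=scores["group"].map(key))
print(masked)
```

Reviewers then guess which placeholder belongs to which group, and the facilitator uses `key` to reveal the answers and prompt reflection on where expectations were right or wrong.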
Disaggregate by groups
Knowing the average score on a specific measure or the modal response to a given survey item does tell us something about an overall sample or population, but it says virtually nothing about the different groups or categories of respondents within it. In order to understand whether there are meaningful differences between the various groups in a given sample, we must disaggregate our data by those same groups and consider the patterns across them.
Develop a habit of consistently disaggregating your data by: race/ethnicity, gender identity, socioeconomic status, learner status, language status, and role. Also consider other dimensions of identity where inequities could be lurking, such as around sexual orientation or immigration status, even when you may not have access to those identifiers.
Just as focusing on overall averages can mask important group or category differences, so too can examining only a single aspect of identity. Although the combined impact of multiple demographics can be difficult to disentangle, it is still important to consider and attempt to uncover such intersectional impacts. One approach is to conduct a crosstabs comparison to explore whether a pattern of differences on one identifier holds true across the various levels or groups that live within another identifier.
For instance, where there are meaningful differences among students of varying socioeconomic statuses, consider whether that pattern is consistent – or if it differs – across various racial/ethnic groups or geographic communities. The goal is to understand the combined impact that two or more identities or experiences may be having on students’ outcomes.
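A crosstabs comparison like the one described above can be sketched with a pandas pivot table. The records, identifiers, and scores below are hypothetical; the point is only the shape of the comparison, not the numbers:

```python
import pandas as pd

# Hypothetical student-level records; identifiers and values are illustrative.
students = pd.DataFrame({
    "ses":   ["low", "low", "high", "high", "low", "low", "high", "high"],
    "race":  ["A",   "B",   "A",    "B",    "A",   "B",   "A",    "B"],
    "score": [70,    60,    85,     80,     72,    64,    87,     82],
})

# Two-way disaggregation: mean score for each SES level within each group.
crosstab = students.pivot_table(values="score", index="ses",
                                columns="race", aggfunc="mean")
print(crosstab)

# Does the SES gap hold, shrink, or grow within each racial/ethnic group?
gap = crosstab.loc["high"] - crosstab.loc["low"]
print(gap)
```

Reading across the `gap` row shows whether the socioeconomic pattern is uniform or differs by group, which is exactly the intersectional question the crosstab is meant to surface.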
Disrupt deficit thinking
Deficit thinking centers culturally dominant groups or students as being “normal” or “standard” and consequently positions other groups or students as being deficient or divergent from that norm. And yet, all students can achieve their fullest potential with the right set of supports. Recognize language like “these students just can’t do it” as deficit thinking and name it for others. One of the most powerful ways to counteract thoughts like these is to identify and reflect on counterexamples.
See the systems and supports
Outcome data alone rarely make clear the specific factors or circumstances that caused them. As a result, we often make logical leaps or assumptions when tracing an outcome back to its underlying root cause(s). Too often in education that involves viewing our students or their circumstances as the main cause.
Try shifting your focus and re-centering your interpretation on the system or conditions under which the data were generated. Consider how the system (e.g., school) or set of supports (e.g., interventions) behind the data contributed to the pattern of results you observe. For example, rather than asking, “Why did this group of students not do as well in my class as this other group?” a teacher could ask, “How effective was my instruction at closing the opportunity gap?”
Celebrate the wins
As humans, our brains recognize and attend to negative information or challenges more readily than positive stimuli or successes. As a result, we can easily become fixated on challenges or failures and overlook important wins. Even so, there is always a bright spot within any dataset.
Consistently identify and celebrate these wins or positive trends. Commit to not ending a data review until you have found a win. Celebrating success, no matter how small, can not only inspire us, but also serve as a foundation for overcoming new challenges that might otherwise feel overwhelming.
After data reviews
You have just finished your review. Mission accomplished, right? Not just yet! Equitable processes require effective communication, transparency, and accountability. Close out a strong data process by following these steps after your reviews.
Share with others
Share the findings of your data review with other stakeholders. Include the people from or about whom the data were gathered (e.g., students). Invite them to offer their perspective on your findings. What do they think about your interpretations and the action steps you identified for moving forward? Establish a timeline for gathering new data, conducting future analyses, and reporting back on those results.
Provide history and context
When sharing findings, be sure to highlight where the inequities live as well as where they were not found. When discussing meaningful demographic differences, contextualize the findings within the relevant history and background. Highlight the important social, structural, and historical factors that contribute to the pattern of results you observed.
Working toward opportunity for all students is an ongoing effort, so continue to develop your team. You will need to foster a culture where peers are willing to ask tough questions, have uncomfortable conversations, and consider divergent perspectives. Doing these things can be hard! In groups, our ability to engage in these kinds of challenging activities rests on the strength of relationships. Therefore, between data reviews, find opportunities to strengthen the trust, care, and communication among your team. Stronger teams generate deeper insights, which lead to more effective actions.
Tim Tasker is a proud data nerd with a passion for making evidence accessible to all stakeholders. As Director of Data & Evaluation, he supports program teams to develop customized evaluation strategies for their local contexts. He also designs tools for evaluating the school- and system-level conditions that enable effective teacher professional learning.
Tim holds a B.A. in Psychology from Northwestern University. He later earned his Ph.D. in Community and Prevention Research from the University of Illinois at Chicago. He is a current Fellow with the Strategic Data Project at the Center for Education Policy Research at Harvard University.