Using Data to Tell a More Complete Story
- Josephine Akinwumiju
- Aug 4, 2025
- 3 min read
Updated: Aug 7, 2025

Over the last two months I have explored the many ways assessments can be used in education and have questioned whether assessments are still necessary and what they should look like if we as a society continue to use them. Assessments are multi-faceted tools and can take many forms and involve a combination of strategies. One of my recent assignments was to design assessments using randomly assigned criteria to test our understanding of the content, and also to push us to think creatively and approach assessment design in new ways.
For some learners and situations, assessments provide a summative snapshot of their experiences. For others, assessments serve as a checkpoint to revise or redirect their learning in formative ways. In many institutions, assessments are used as quantifiable metrics to move learners through the system.
Regardless of the format, the value of an assessment lies in how its results are evaluated. Assessment data helps determine whether learning outcomes were met, whether the curriculum was designed to support those outcomes, and whether learners were able to demonstrate mastery in the way they were asked. It often informs high-stakes decisions: Did the student score high enough to pass, to move on, or to be granted admission?
Assessment data can be classified into two main types: qualitative and quantitative. Qualitative data includes information such as findings from focus groups, observational notes, or open-ended survey responses. Quantitative data includes numerical measures such as rubric scores and standardized test results. If we focus only on qualitative data, we risk missing critical insights from the quantitative side, such as patterns in scoring across demographic groups. If we focus only on quantitative data, we risk missing insights from the qualitative side, such as how variations in scoring might be influenced by factors like the political climate or specific events on the day of the assessment. Both types of data are therefore important and should be used together to gain a deeper understanding of what is truly happening in the learning process.
When designing curriculum, we do so with intention. We consider diverse learners, apply culturally responsive pedagogy, and leverage design frameworks such as Universal Design for Learning (CAST, n.d.). Similarly, when building assessments, we account for confirmation bias, utilize appropriate technologies, and balance formative and summative formats.
Therefore, when interpreting results, it makes sense to move beyond surface-level analysis. If both curriculum and assessments are designed to be inclusive, then our evaluation of those efforts must reflect the same level of care. We should examine not just whether students understood the material, but who did, why they did, and what that tells us about how to better support every learner moving forward. This is where disaggregated data becomes essential. Disaggregated data examines not just the results, but where those results come from, breaking down outcomes by race, gender, socioeconomic status, and more (Montenegro & Jankowski, 2017). Analyzing assessment data through these lenses adds an additional layer of insight and can help inform decisions about what to adjust or improve moving forward.
For a recent assessment I created, I needed to account for gender when analyzing the data. The assessment was reading-based, and students were tasked with evaluating whether Newsela's Lexile-level adjustment feature truly supported students of varying reading abilities in comprehending the material and accurately answering questions. While the structure and content of the assessment were important, it was equally valuable to examine whether there was a correlation between gender identity and reading-level performance. For example, did students who identify as female demonstrate stronger comprehension at higher Lexile levels than students who identify as male, or vice versa? This is a particularly important question given long-standing societal biases suggesting that women are inherently better at reading than men. If we do not analyze the data with this context in mind, we risk reinforcing or overlooking trends that may either support or challenge these assumptions. Disaggregating the results allows us to move beyond generalized conclusions and instead make data-informed decisions that promote equity and accuracy in both assessment and instruction.
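As a concrete sketch of what disaggregation can look like in practice, the snippet below groups assessment scores by gender and reading level using pandas. The data, column names, and values here are entirely hypothetical and illustrative; they are not drawn from the actual Newsela assessment.

```python
import pandas as pd

# Hypothetical assessment results; all names and scores are illustrative.
results = pd.DataFrame({
    "student": ["A", "B", "C", "D", "E", "F"],
    "gender": ["female", "male", "female", "male", "female", "male"],
    "reading_level": ["grade 5", "grade 5", "grade 8", "grade 8", "grade 8", "grade 5"],
    "score": [82, 74, 90, 71, 88, 79],
})

# Aggregate view: a single number that can hide subgroup differences.
overall_mean = results["score"].mean()

# Disaggregated view: break scores down by gender and reading level.
by_group = results.groupby(["gender", "reading_level"])["score"].agg(["mean", "count"])

print(f"Overall mean: {overall_mean:.1f}")
print(by_group)
```

Comparing the disaggregated means against the overall mean makes subgroup gaps visible that a single aggregate score would conceal, which is exactly the kind of pattern this analysis is meant to surface.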
Author’s Note:
This blog post was edited with support from ChatGPT.
References:
Center for Applied Special Technology (CAST). (n.d.). UDL guidelines. CAST. https://udlguidelines.cast.org/
Montenegro, E., & Jankowski, N. A. (2017). Equity and assessment: Moving towards culturally responsive assessment. (Occasional Paper No. 29). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Someecards. (2012). Let's play school! I'll pretend to be a teacher analyzing disaggregated student data... [E-card]. https://www.someecards.com/usercards/viewcard/MjAxMi02ZDYwMmZjODk1MDRiZTk0/