
Iterative Improvement—A Commitment through Research

Research papers often aim to be groundbreaking or to report significant positive findings. Studies like these push forward what is known in the learning sciences, but they are not the only research that matters. It is also valuable to share what did not work and to learn from those examples.

In July 2021, we presented a paper on three courses in which the adaptive activities in courseware did not show the benefits to students found in earlier studies. In these three non-STEM courses, the adaptive activities had a net negative impact on learning estimates, and the goal of the paper was to investigate why. We found that these courses all had far fewer total adaptive questions than successful courses, had very low ratios of scaffolding questions to hard questions, and often had scaffolding questions written at the same difficulty level as the hard questions. From this investigation, we derived a set of best practices for creating adaptive activities designed for student success.

Analysis of naturally occurring student data is key because it can show when a feature is not optimized to benefit students. In this paper, we found evidence that even courseware written by subject matter experts and designed on learning science principles can be imperfect. Data illuminates those imperfections and points toward solutions. In this case, the data revealed new insights into how to better design adaptive scaffolding. It also revealed human fallibility: questions written at low and medium difficulty levels were often just as difficult for students as the hard questions. This was not intentional; question difficulty is hard to gauge, and subject matter experts may overestimate the abilities of novices, especially those who are struggling. Data analysis can reveal when the intent behind a question does not align with how students actually perform.
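
As a loose illustration of the kind of check involved, the sketch below compares each question's intended difficulty label with its observed difficulty, estimated as the error rate across student attempts. The records and field names are hypothetical, not the data or analysis from the paper.

    from collections import defaultdict

    # Hypothetical attempt records: one row per student response to an
    # adaptive question, with the author-assigned difficulty label.
    attempts = [
        {"question": "q1", "intended_difficulty": "low",    "correct": True},
        {"question": "q1", "intended_difficulty": "low",    "correct": False},
        {"question": "q2", "intended_difficulty": "medium", "correct": False},
        {"question": "q3", "intended_difficulty": "hard",   "correct": False},
    ]

    # Aggregate observed error rates by intended difficulty label.
    totals = defaultdict(lambda: {"attempts": 0, "errors": 0})
    for row in attempts:
        bucket = totals[row["intended_difficulty"]]
        bucket["attempts"] += 1
        bucket["errors"] += 0 if row["correct"] else 1

    for label, counts in sorted(totals.items()):
        rate = counts["errors"] / counts["attempts"]
        print(f"{label}: {rate:.0%} incorrect over {counts['attempts']} attempts")

    # If "low" and "medium" questions show error rates close to the "hard"
    # ones, the intended difficulty labels do not match student performance.

In practice this kind of check would run over full course datasets with more careful difficulty estimates, but the idea is the same: compare intended difficulty against observed student performance.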

This process of analyzing data, discovering subpar results, investigating the circumstances behind them, and identifying solutions is part of the learning engineering process we use every day in Research and Development. Carrying forward our origins at Carnegie Mellon University’s Open Learning Initiative, that process gives us the structure to practice iterative improvement in a meaningful way. Investigating subpar results and learning how to improve on them is an act of learning engineering aimed at improving the learning experience for students.

Sharing research findings—even when they aren’t optimal—is part of an ethical commitment to transparency and accountability to the broader research and educational community. Positive findings provide validity and support for learning methods, but negative findings are also worth investigating so that we can learn from what did not work. While we strive for effectiveness, we are also committed to learning from the data and making improvements because, at the end of the day, we must always do what is best for the learner.
