Does More Learning Happen When Students are in the Driver’s Seat? – Education Rickshaw

Most teachers will be familiar with Khan Academy and similar learning programs, which offer a mixture of 1) problems to solve and 2) instructional supports that students can use to learn how to solve them. Common instructional supports in online learning environments include partial hints (e.g., click here for a hint to get you started), instructional videos (such as those recorded by Sal Khan of Khan Academy), and fully worked examples (step-by-step demonstrations of how to solve a class of problems).

Many online programs allow students to control their use of, and access to, instructional supports. Students may choose to watch a video, access the first step of a hint, or advance through a series of progressive hints until they “bottom out” at the last hint, which reveals a fully worked example. They may also choose to skip all available hints, videos, and examples in order to focus on solving problems. An implicit assumption in designing online programs that allow students to control, or self-regulate, their use of instructional supports is that the learner is the agent best positioned to determine the level of support they require (Merrill, 1975).

There are many issues with this assumption. We know that the least knowledgeable people in a domain tend to be overconfident in their abilities, for one cannot know what one doesn’t know. This is the infamous Dunning-Kruger effect (Kruger & Dunning, 1999), and it is one explanation for why students tend to be poor managers of their own learning (Dunlosky & Rawson, 2012; Kirschner & van Merriënboer, 2013). Overconfidence often leads students to make suboptimal choices during self-guided lessons, such as simply ignoring available instructional supports and help channels (Aleven et al., 2016; Foster et al., 2018). This would be okay if instruction that involves only solving problems were just as effective as, or superior to, instruction that begins by teaching students how to solve problems. Decades of research on discovery learning tell us this isn’t the case (Clark, Kirschner, & Sweller, 2012; Mayer, 2004; Sweller, 2021).

If we look towards the research on teaching problem solving, we find that instruction must adhere to specific principles for learning to be successful. The table below from van Harsel et al. (2021) outlines the main ones:

Read the full open-access article here.

If the literature points in a clear direction on the best ways to sequence and interleave instructional supports, such as examples, with problem solving, we have to ask ourselves whether it is wise to shift the instructional locus of control onto learners, most of whom are novices in the domain we’re introducing and are prone to overconfidence about their abilities during the initial stages of skill acquisition. Moreover, students aren’t experts in the science of instruction and thus aren’t privy to the information in the above table unless we explicitly teach it to them before any instruction involving learner control of examples and problems.

Many of these questions will be addressed in my upcoming computer-based research with approximately 200 school-aged participants. Students will be given the option to solve problems or study worked examples while learning math. If novices (those who score quite low on the pre-test) overwhelmingly select examples to study from the outset, leading to lower levels of cognitive load and higher performance on the post-test compared to the externally (i.e., teacher-) controlled condition, we will have evidence that learners can be trusted to manage their own learning when given agency over bypassing or accessing instructional guidance. We will also see whether learners who are informed during instruction about some of the learning principles in the above table end up outperforming learners who were not.

I’m excited to see what comes of this.

– Zach Groshell


Aleven, V., Roll, I., McLaren, B. M., & Koedinger, K. R. (2016). Help helps, but only so much: Research on help seeking with intelligent tutoring systems. International Journal of Artificial Intelligence in Education, 26(1), 205–223.

Clark, R. E., Kirschner, P. A., & Sweller, J. (2012). Putting students on the path to learning: The case for fully guided instruction. American Educator, 36(1), 6–11.

Dunlosky, J., & Rawson, K. A. (2012). Overconfidence produces underachievement: Inaccurate self evaluations undermine students’ learning and retention. Learning and Instruction, 22(4), 271–280.

Foster, N. L., Rawson, K. A., & Dunlosky, J. (2018). Self-regulated learning of principle-based concepts: Do students prefer worked examples, faded examples, or problem solving? Learning and Instruction, 55, 124–138.

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169–183.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134.

Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? American Psychologist, 59(1), 14–19.

Merrill, M. D. (1975). Learner control: Beyond aptitude-treatment interactions. AV Communication Review, 23(2), 217–226.

Sweller, J. (2021). Why Inquiry-based Approaches Harm Students’ Learning. Analysis Paper (Centre for Independent Studies), 24(August), 15.

van Harsel, M., Hoogerheide, V., Verkoeijen, P., & van Gog, T. (2021). Instructing students on effective sequences of examples and problems: Does self-regulated learning improve from knowing what works and why? Journal of Computer Assisted Learning, 1–21.
