Cognitive Science as Pedagogy: Why teachers need to better understand the research

Fifteen years ago I attended a training day with an educational charity dedicated to improving the life chances of pupils in our schools. The charity, which was affiliated with a Russell Group university, ran various sessions for schools aimed at improving teaching practice and delivering better results at GCSE and A Level. The session I participated in focused on child-centred learning strategies and – as was popular at the time – championed the idea that we should see ourselves as ‘facilitators’ as opposed to ‘teachers’. Obligatory reference was also made to ‘personal learning and thinking skills’ and ‘learning styles’. The day was an excellent example of the pedagogical paradigm we were working in at the time, one which some now see as the zenith of ‘progressive education’ and which has since been attacked as being built upon myths and unfounded assumptions.

Cognitive science: a pedagogical paradigm shift

However, over the last 10 or so years we have witnessed a paradigm shift towards ‘evidence-informed practice’, which largely hinges on insights from cognitive science. As the Education Endowment Foundation (EEF) states in its recent report, “Cognitive science is being used increasingly to inform interventions, practice, and policy in education. Of particular interest to education has been research into motivation and reward, working memory and long-term memory, and cognitive load” (2021, p. 5). Although this has introduced a whole host of relatively new concepts, such as ‘retrieval practice’, ‘spaced-distribution’ and ‘interleaving’, cognitive science also supports more traditional methods of teaching and learning like ‘direct instruction’ (Evers, 1998). These concepts have also been adopted by expert advisory groups and policy makers and play a starring role in recent government initiatives, such as the Initial Teacher Training (ITT) Core Content Framework and the Early Career Framework. In other words, cognitive science as pedagogy is not only becoming our dominant pedagogical paradigm, but is now quasi-official.

The EEF report identifies two areas of cognitive science that have been especially influential: ‘cognitive psychology’, which is largely based on behavioural and observational research methods, and ‘cognitive neuroscience’, which relies on brain imaging technologies. Both sub-disciplines depend on complex research methodologies to justify their findings, and these methodologies are often poorly understood by those who advocate the conclusions of highly nuanced studies as pedagogical silver bullets. For instance, the EEF’s report notes, “Some approaches—like combining verbal explanations with graphical representations, also known as ‘dual coding’—are possible to implement poorly. While some studies show positive impacts on pupil outcomes, there are also multiple studies showing null or negative findings” (ibid., p. 7). This is very different from a blogger or INSET presenter telling teachers that dual coding is the next big thing.

Importantly, the EEF itself cautions against the immediate wholesale implementation of cognitive science: “The evidence for the application of cognitive science principles in everyday classroom conditions (applied cognitive science) is limited, with uncertainties and gaps about the applicability of specific principles across subjects and age ranges” (ibid., p. 7). The report therefore distinguishes between ‘basic cognitive science’ and ‘applied cognitive science’: studies tell us a great deal about the former but much less – conclusively at least – about the latter. Essentially, we know a lot about how we learn, but crafting activities, strategies and interventions that build on this knowledge is more difficult.

Methodological limitations

Indeed, many methodological limitations complicate the generalised conclusions drawn by cognitive scientists (although, it should be noted, most acknowledge this), especially if we are to see their findings as an antidote to all of our pedagogical problems. Several key issues permeate many (but not all) research studies. These issues are, I feel, important, yet they are often ignored or not factored into the more popular championing of cognitive science as pedagogy, even if academic papers do reference them. They include:

  • very small experimental and control groups, which suggest the findings are neither representative of, nor readily generalisable to, larger groups of learners in different contexts
  • the differing demographics of experimental and control groups, which significantly affect the transferability of findings from, for example, adult learners to children and young people
  • complex control group comparisons, especially controlling for variables such as teaching quality, school ethos and pupils’ prior attainment (essentially, can we be sure that it was the activity, strategy or intervention studied that had the most impact on learning, attainment and progress, rather than any of these factors?)
  • measuring how an activity, strategy or intervention centred on cognitive science affects attainment in relation to the amount of work set, the completion of that work and the duration of learning activities, all of which often depend on teacher, pupil or parent estimates (basically, can we be confident that these measurements are accurate and trustworthy?)
  • how the impact of a learning activity, strategy or intervention developed from cognitive science is assessed, including through teacher-devised tests, standardised tests and grade averages, especially as some of these are externally assessed whereas others are marked internally (can we trust these test scores?)
  • the fact that pupils, in applicable research and experiments, may distort their responses to seem conscientious or to give the ‘right’ answers (perhaps because they worry that wrong answers will get them into trouble)
  • in a similar vein, teachers may distort their responses to avoid exposing perceived weaknesses or flaws in their teaching
  • and whether any records pertaining to the above points are accurate (how reliable are the researchers’ data and methodology?).

Specific examples: retrieval practice, spaced-distribution and interleaving

In my own practice, I have saturated my lessons with strategies focused on retrieval practice, spaced-distribution and interleaving. I have also collaborated with colleagues from other local schools on how we can implement these strategies in our practice. Furthermore, I have included chapters on them in a forthcoming book on the benefits of homework, due to be published in the autumn. Nevertheless, even here there are research limitations we should consider. For example:

  • Many studies on retrieval practice focus on ‘relatively simple verbal materials, including word lists and paired associates’ (Dunlosky et al., 2013, p. 32), and some cognitive scientists have questioned whether retrieval improves performance in complex tasks.
    • Van Gog and Sweller (2015) argue that ‘… the testing effect [often linked to retrieval practice] decreases as the complexity of learning materials increases … the effect may even disappear when the complexity of learning material is very high’ (p. 247).
    • Rohrer et al. (2019) note that ‘benefits of retrieval practice have yet to be demonstrated for mathematics tasks other than fact learning’.
  • One of the biggest issues in research on spacing is whether intervals should be equally distributed or expanding (that is, growing over time); see the short sketch after this list. Although evidence exists for the latter, it is contested (see, for example, Balota, Duchek and Logan, 2007).
    • Karpicke and Bauernschmidt (2011) have also suggested that more investigation is needed into how intervals are calculated, the differences in the procedures used and the longer-term effects, as many studies are short term.
  • Jonathan Firth (2018) suggests teachers need to be wary of research on interleaving, as studies may include limited sample sizes and overlook the diversity of learners.
    • He also notes the difficulty of establishing the extent to which the negative short-term classroom effects of interleaving and spacing, such as increased cognitive load, are counteracted by an improved long-term ability to remember and transfer learning.
    • Similarly, researchers should investigate whether benefits found in short-term studies apply over longer timescales.
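
To make the equal-versus-expanding distinction concrete, here is a minimal sketch in Python (purely illustrative: the function names, gaps and numbers are my own inventions, not drawn from any of the studies cited) contrasting the two kinds of review schedule:

    # Illustrative only: neither schedule is prescribed by the research above.

    def equal_spacing(gap: int, reviews: int) -> list[int]:
        """Days on which to review a topic, with a fixed gap between sessions."""
        return [gap * (i + 1) for i in range(reviews)]

    def expanding_spacing(first_gap: int, factor: int, reviews: int) -> list[int]:
        """Days on which to review a topic, with each gap growing by a factor."""
        days, day, gap = [], 0, first_gap
        for _ in range(reviews):
            day += gap
            days.append(day)
            gap *= factor
        return days

    print(equal_spacing(gap=5, reviews=4))                      # [5, 10, 15, 20]
    print(expanding_spacing(first_gap=2, factor=2, reviews=4))  # [2, 6, 14, 30]

The contested question in the research is not how to generate such schedules but which pattern, if either, reliably benefits long-term retention.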

EEF report – to be welcomed (and hopefully read)

Although I feel some of the above issues have been overlooked by advocates of cognitive science as our modus operandi for the classroom, I believe the EEF report adds some welcome caution to an otherwise promising pedagogical development. It should be seen as common sense that teachers applying ideas from cognitive science as pedagogy need to consider how, and in what contextual conditions, these ideas might improve the impact of teaching and learning (see Robert Coe’s blog on applying retrieval practice in the classroom, for instance).

Care should be taken to ensure that activities, strategies or interventions based on cognitive science are successfully trialled before being implemented by teachers or schools on a wider scale. This is to avoid “‘…lethal mutations’ when a practice becomes disconnected from the theory” (EEF, 2021, p. 8). This would mean a more cautious approach to cognitive science as pedagogy than many are currently taking. We also need to be less reliant on heavily controlled small-scale studies and to seek (or even request) larger-scale studies in settings similar to our own.

Of course, some ideas, such as retrieval practice, are well evidenced and widely accepted, and I am convinced by others, such as interleaving (this last strategy works for me, but has not worked as well for some of my colleagues). However, it would be worthwhile investing a bit more time in researching ideas such as ‘dual coding’, ‘the generation effect’ and ‘desirable difficulties’ on a larger scale, and particularly in secondary schools, before they are adopted as the dominant pedagogical paradigm. Essentially, if we do not understand the limitations of the research we are adapting and applying, we might not get the impact we expect in the classroom.

References

Balota, D. A., Duchek, J. M., & Logan, J. M. (2007). Is Expanded Retrieval Practice a Superior Form of Spaced Retrieval? A Critical Review of the Extant Literature. In J. S. Nairne (Ed.), The foundations of remembering: Essays in honor of Henry L. Roediger, III (pp. 83–105). Psychology Press.

Coe, R. (2019). Does research on ‘retrieval practice’ translate into classroom practice? EEF Blog (Online). Available at: https://educationendowmentfoundation.org.uk/news/does-research-on-retrieval-practice-translate-into-classroom-practice/ [Retrieved 23.04.20].

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58.

EEF (2021). Cognitive Science Approaches In The Classroom: A Review Of The Evidence. Available at: https://educationendowmentfoundation.org.uk/public/files/Publications/Cognitive_science_approaches_in_the_classroom_-_A_review_of_the_evidence.pdf

Evers, W. M. (1998). How Progressive Education Gets it Wrong. Hoover Digest, (4). Available at: http://www.hoover.org/publications/hoover-digest/article/6408.

Firth, J. (2018). Spacing and Interleaving. Impact: The Journal of the Chartered College of Teaching, (2), 23-26.

Karpicke, J. D., & Bauernschmidt, A. (2011). Spaced retrieval: Absolute spacing enhances learning regardless of relative spacing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(5), 1250–1257.

Rohrer, D., Dedrick, R. F., Hartwig, M. K., & Cheung, C. N. (2019). A randomized controlled trial of interleaved mathematics practice. Journal of Educational Psychology. Advance online publication. DOI: 10.1037/edu0000367.

Van Gog, T., & Sweller, J. (2015). Not new, but nearly forgotten: The testing effect decreases or even disappears as the complexity of learning materials increases. Educational Psychology Review, 27(2), 247–264.

