The following reflections are intended as exploratory questions rather than definitive assertions. I am not implying any impropriety on the part of awarding bodies (‘exam boards’) or individual examiners. Nonetheless, my recent experiences of teaching GCSE Religious Studies and A-level Sociology have highlighted several potential dilemmas that I wish to articulate.
Teachers frequently confront the tension between delivering intellectually rich lessons and preparing pupils effectively for examinations. Whilst examinations are intended to reward knowledge and skills, many syllabuses are written in deliberately broad and vague terms, with ‘indicative content’ rather than precise coverage. This creates a structural danger: teachers may over-teach—covering more detail, breadth or recent material than examiners anticipate—only for pupils’ knowledge to be misunderstood, under-credited or ignored. The problem lies less with teachers’ ambition and more with the limitations of exam boards and examiners.
The first issue is the lack of clarity in syllabuses. Many exam boards publish skeletal specifications with vague topic outlines, leaving teachers to interpret what constitutes examinable material. This ambiguity forces teachers to ‘hedge their bets’, often by teaching more content than is strictly necessary. Ofqual (2015) has acknowledged that specifications can lack precision, leading to inconsistencies between what teachers cover and what examiners reward. In this sense, over-teaching is a rational response to systemic uncertainty, rather than a pedagogical failing.
This is especially visible in A-level Sociology, where theoretical debates can be taught in immense depth. For example, discussion of the structure–agency debate might be enriched by introducing the critiques of Anthony Giddens offered by realist thinkers such as Roy Bhaskar ([1979] 2014) and Margaret Archer (1995). Yet examiners may not recognise these contributions, instead expecting only the most familiar textbook references. Moreover, many sociological sources sit behind paywalls, and the points linked to particular topics or authors are often embedded in prose that a simple keyword search will not surface. Examiners also work through large allocations of scripts to tight deadlines: can they really research every word, concept, study or name they are unsure about?
Similarly, insisting on scholarly nuance does not necessarily help students: Michel Foucault never identified as a postmodernist (he is more usually described as a poststructuralist), and Pierre Bourdieu resisted labels such as ‘Marxist’, preferring to position himself beyond such categories. Textbooks often simplify, presenting Foucault as postmodern and Bourdieu as Marxist. Students who reflect the more nuanced position may find their responses marked down simply for diverging from the conventional teaching materials most examiners rely upon.
A second danger stems from examiner unfamiliarity with specialist or minority perspectives. This is not confined to sociology. In GCSE Religious Studies, for example, Shia pupils often feel disadvantaged because textbooks overwhelmingly reflect Sunni theology and practice. When pupils draw upon Shia traditions or personal expressions of spirituality, these may not align neatly with the examples examiners expect, and risk being undervalued. In both sociology and religious studies, I wonder whether depth, authenticity and diversity of perspective can become liabilities in a system that prizes predictability.
Thirdly, the training and consistency of examiners compound the issue. Many examiners are casual or seasonal appointments, often with limited time for professional development. Although their study is now dated, Shorrocks-Taylor and Jenkins (2000) note that examiner marking can lack consistency, particularly when responses involve atypical material (see also Meadows & Billington, 2005). If exam boards do not ensure robust examiner training to recognise legitimate but less common approaches, pupils are disadvantaged precisely for engaging with their subjects at a deeper or more authentic level. I would not, therefore, want to be seen as overly critical of examiners, but I do question whether the system allows examiners to be experts in everything they assess, particularly in very broad subjects.
The consequences for pupils are significant. Over-teaching driven by exam board vagueness risks cognitive overload (Sweller, 1994), leaving pupils uncertain about what is ‘safe’ to write. At the same time, reliance on examiners’ familiarity creates a hidden curriculum: success depends less on mastery of knowledge than on second-guessing what examiners know or expect. This dynamic devalues both teaching and learning, as teachers must strategically withhold nuance or alternative traditions that may not ‘fit’ the exam.
Critically, this situation raises questions about the function of public examinations. If examinations punish pupils for demonstrating knowledge beyond the narrow band of examiner familiarity, they fail in their stated purpose of measuring achievement fairly. Instead, they incentivise superficial learning aligned with examiners’ expectations rather than with intellectual development or disciplinary authenticity.
In conclusion, the dangers of over-teaching lie not in teachers’ ambition but in the lack of clarity and consistency within exam systems. Vague syllabuses, limited examiner training and the reliance on predictable content create a culture where teaching ‘too much’ becomes a liability. Until exam boards provide clearer specifications and equip examiners to credit a wider range of legitimate responses—including nuanced theoretical positions in sociology and diverse theological perspectives in religious studies—pupils will remain at risk of being disadvantaged by the very qualities—depth, curiosity, originality—that education ought to cultivate.
References
- Archer, M. S. (1995). Realist social theory: The morphogenetic approach. Cambridge University Press.
- Bhaskar, R. (2014). The possibility of naturalism: A philosophical critique of the contemporary human sciences (4th ed.). Routledge.
- Meadows, M., & Billington, L. (2005). A review of the literature on marking reliability. National Assessment Agency.
- Ofqual. (2015). Marking consistency metrics: An update. Office of Qualifications and Examinations Regulation.
- Shorrocks-Taylor, D., & Jenkins, T. (2000). Learning from examinations. Kluwer Academic.
- Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295–312. https://doi.org/10.1016/0959-4752(94)90003-5
