Normal service is resumed? Assessment in FE contexts - Mary Richardson
14 July 2022
Next month, there will be a collective holding of breath as we wait to see what the exam results reveal for this unique cohort of students who did not take their GCSEs in 2020 and so had to navigate their Further Education (FE) learning and assessment with less preparation than any previous cohort. This is going to be a tense time for policymakers, Ofqual, the exam boards, college and school leaders, teachers and, of course, students and their families. All eyes will be on the A-level results, but there are of course a range of other post-16 educational qualifications which are equally important. A brief glance over news media articles about national exams since 2020 reveals uncertainty about how exam boards, schools and teachers can resume so-called “normal” service, particularly given that we are adapting to a “new normal” in every aspect of our lives.
My research focuses on the evolution of beliefs about assessment, specifically testing, and I believe that how we discuss these topics in public settings is critical to understanding their purpose and their value in describing an individual’s experience of schooling and what it means to be educated. I had such concerns well before the pandemic began, and it was in 2013 that I experienced something of an epiphany whilst on a research trip in Finland. On the final day, we took a trip to Santa’s village. At the ‘Post Office’ I read some of the files of letters from children and found this shocking note from England [paraphrased]:
Dear Santa,
For Christmas I’d like 10 A stars in my GCSEs. If I fail, I will let everyone down. I try hard at school but don’t always get the grades I want. Please help. Love … xxx
The letter raises many issues, but importantly it underlines the immense pressure resulting from the “all or nothing” emphasis that underpins views of high stakes assessments. The letter suggests an individual at breaking point, a child with unrealistic expectations and a national testing system that causes extreme anxiety which leads children to write to fictional figures. How did we get here?
All FE students complete some form of assessment that leads to a qualification for work or further study, but there are doubts about the extent to which these are both useful and relevant in their present forms. Such concerns are practical and philosophical in nature and raise important questions, three of which will be discussed here:
a. How are the assessments used in FE settings (colleges and schools) perceived?
b. What influence has the pandemic had on national assessment policies in England?
c. Are there any alternatives to the current system?
It’s important to foreground any discussion of assessment by considering what it means to those who experience it. This is because it is too easy to assume that I (as a teacher/researcher) know what I mean, without considering that a 16, 17, or 18-year-old will have a very different view of the same thing. Therefore, taking the time to understand how a variety of assessment stakeholders perceive their experience provides opportunities to review policy and practice based on real evidence. The term ‘stakeholders’ is perhaps corporate in nature, but it fits with the education system we have in England: it reflects the way in which education and its results are managed and perceived. Figure 1 outlines the stakeholders and their relationships, and whilst the students appear to be at the heart of the ecosystem, it is the influence of policymakers that drives the structure, content and focus. Decisions are made centrally and are guided not only by educational values or aims, but by political ones too.
Figure 1: Stakeholders in educational assessment
The way in which assessment is used, viewed and managed across all phases of education is complex, often misunderstood and has led to a range of ideas that could encapsulate what is now “normal” practice.
Covid-19
In England, the cancellation of exams was followed by the recognition that an algorithm used to moderate the data provided by schools had reduced final grades for many thousands of students. The use of algorithms to model national data sets is a normal part of the awarding process, so that a national standard can be maintained. However, with the key piece of data (the exam results themselves) missing from the decision-making process, the reduction of grades resulted in a genuine crisis in public trust in national tests. Not only did many students miss out on university places due to the reduction in their expected grades, but the spectre of disadvantage also loomed large, with students from the poorest backgrounds most likely to lose out.
However, the crisis caused by the algorithm also presented an opportunity: to take an honest look at national assessment systems and admit that exams are not always the best way to summarise and appraise what it is that students know and can do. Placing confidence in a single measure of attainment led to the distressing events of 2020, and the fallout continues to impact testing outcomes: 2022 will be the most difficult year yet, as a new standard will need to be set. Given the immense challenge this represents, perhaps we could, or should, expect a reconsideration of how we assess educational achievement in England.
To unpack the complexity of assessment, and in an attempt to make some sense of its core components, what follows includes some history, a consideration of some of the issues the pandemic has revealed and concluding thoughts on how we might, as an education community, revisit what a normal assessment landscape (in FE contexts) can be from 2022 onwards.
Understanding assessment
The global hunger for comparative achievement data in educational assessment has grown rapidly since the 1990s, with countries embracing tests such as PISA, TIMSS and PIRLS: the International Large-Scale Assessments (International Association for the Evaluation of Educational Achievement, 2017). This enthusiasm for measuring whole systems against one another has transformed our opinions about the aims and purpose of education, and an important consequence of this is the way it has influenced general perceptions of educational assessment.
Assessment in education is now dominated by global, national and local cultures driven by a need to compete, and this has reduced the view of assessment to getting the right grades. This notion has led to an acceptance that GCSE and post-16 qualifications are termed ‘high-stakes’ tests: an apt definition because their results carve the very shape of the students who take them. But it doesn’t stop with the students; the results of such assessments also frame the value, kudos and quality of teaching, of teachers and of the institutions in which they work. In FE contexts, test results of this kind guide the career paths of students; they act as gatekeepers for access to higher education, employment opportunities and certain institutions and, most importantly, we know they influence individual socio-economic prospects. The research in all of these areas does not offer positive findings; those who do less well in high-stakes tests are generally students from deprived backgrounds, and the cycle of deprivation is hard to break if you are unable to make the necessary grades. It is therefore curious that we do little to change this cycle. In fact, we seem addicted to high-stakes testing to the point that, with no substantive evidence, we accept claims that exams are generally “fairer” or “more rigorous” than any other type of assessment. Our commitment to testing, particularly examinations, as educational assessment is totally ingrained within our educational culture. It seems almost impossible to believe that exams are not the only way to demonstrate what a student knows and can do following a course of FE study.
Given that the outcomes of educational assessment are so important, how we discuss them should be of utmost importance too. However, a simple scan of news sites around the time results are published, or a read of educational forums and social media, reveals examples of how the structure, design, application and outcomes of assessments are misconstrued and distorted. Such writing is not (usually) published with spiteful intent, but it demonstrates that simplistic explanations of the reality of testing ignore the complex nature of educational assessment. A good example here is validity: valid assessment and validity relate to the inferences claimed from test scores; validity is never a characteristic of a test itself (see Daniel Koretz for the best explanations of this). Tests can be fair, or reliable, or even trustworthy, but any validity associated with them relates solely to the consequences of the results. In England, we are faced with a significant problem in assessment policy and practice: the revelation in August 2020 that our post-16 testing system could not be adapted to take account of the disruption wrought by the pandemic has left us with fear and broken trust in this important facet of education.
Assessment might not seem like an exciting topic, but it garners public interest, often revealing what I term a love–hate relationship. People love the certification and selection that the results of standardised testing provide, but also despise the way that test judgements influence personal opportunities and even label students, their teachers and their education institutions. There is plentiful evidence (see the classic work of Black and Wiliam) that formative assessment methods, particularly those based in classroom practice, improve motivation in students and are the most accurate indicators of learning, and are therefore the best way to decide who is most suited to particular employment, training or higher education. These are fundamentally life-shaping decisions that need to be guided by the very best evidence; however, the default position is to create judgements based on testing to summarise ability, skills, knowledge and so on. We are happy to continue making important life decisions based on a practice that is stressful at best and a form of abuse at its worst. I don’t believe that we should end testing, but it needs to be put in its place so we can focus on a range of ways to help students see what they can do, and stop labelling themselves as a C, an A*, or a failure.
Assessing learning in FE contexts
In March 2020, the then education secretary, Gavin Williamson, announced the cancellation of the A-level examinations, and the prime minister said “We will make sure that pupils get the qualifications they need and deserve for their academic career”. Our UCL IOE blog at the time argued that creating credible grades for students who would not face their examinations was a challenging task, and it shone a light on the risks inherent in dependence on end-of-course exams, especially given the reduction in teacher assessment. The reform of A levels from January 2013 meant that the qualifications reverted to end-of-course assessment, with coursework removed or strictly curtailed in all subjects.
Ironically, if A levels were still modular there would be lots of information on which to base decisions about what grades to give students, and whilst modularity has drawbacks, it allowed students to accumulate formal evidence over a two-year period. It would not be easy to create a coherent grade to award from limited evidence of this kind, but it would provide something more accurate than the Teacher Assessed Grades (TAGs) that were relied upon in 2020 and 2021. It is most ironic that those students who are least valued in terms of their qualifications, those taking vocational courses, are the ones who have had the most reliable evidence available to grade their work.
Reverting to exam-led qualifications has challenged the importance of teacher judgement and has had a damaging, and unjustified, impact on the perception of teacher professionalism. Whilst there are studies which demonstrate bias in teacher assessment, this is not a simple matter, and we should perhaps explore why teachers might feel partial before considering how such behaviours can be changed. We rarely face up to the reasons that encourage teachers in the post-16 sector to: (a) over-predict their students’ grades for national exams; (b) be biased in marking coursework (when it existed for GCSEs and A levels); and (c) provide “additional” support for coursework to ensure good results. The reason for all of these is accountability: the pressures placed upon teachers and educational establishments lead to less malfeasance than we might expect, given how exam results shape the public view of schools as ‘good’ or ‘bad’ (another binary view of education). It is in the interests of teachers to predict and aim for high grades at any cost, because to do otherwise is reputational suicide.
The other important factor at play in terms of how good teachers are at assessment (i.e. unbiased and skilled) relates to their knowledge of assessment in such high-stakes contexts. Teachers are simply not skilled enough to undertake the level of assessment that has been expected of them in the past two years. This is not their fault; teacher education includes only a tiny focus on assessment, and the funding for CPD in England limits the extent to which schools can decide what can and should be funded for their staff. Decisions made by the various education secretaries since 2020 have seen numerous U-turns as the complex and imperfect nature of determining a national system of grade awarding slowly dawns on policymakers. The reliance on teacher-assessed grades has been misguided in the extreme and reinforces the way that education, and teachers, are used as a political football. The development, grading, moderation and awarding of examinations such as GCSEs and A levels is a difficult and highly skilled task. Employees in awarding bodies work for months and years with expert examiners to manage national awarding processes, and the idea that teachers should be responsible for this was a grave error. Attempting to create nationally standardised outcomes (grades for GCSEs and A levels) is very difficult when you rely on a narrow means of testing like an exam series. There simply was, and is, no room for error and certainly no way to manage the fallout from a global pandemic.
It is also easy to forget that while educational establishments were attempting to address the issue of evidence gathering for high-stakes exams, they were also expected to continue teaching and operating as close to normal as possible. The disruption of moving online and of staff shortages due to COVID-19 added a further layer of complexity to the day-to-day challenges faced by students and teachers alike. In addition, it’s important to remember that teachers are rarely dealing with a level playing field when putting together a picture of the individual achievements and potential of students at 16 and beyond. Many factors impact potential and actual achievement in high-stakes tests, and poverty remains the most significant influence on student achievement, both in school (classroom assessments) and in results in high-stakes tests in England (externally set and awarded assessments). The disparity, in educational terms, between those who have and those who have not continues to widen. These are testing times indeed.
Politicians regularly discuss high-stakes national assessments: those standardised tests and qualifications designed as certification for employment and/or further study. The focus on this relatively small number of assessments occurs because education is intertwined with economics, with assessment (via high-stakes testing) a recognised means of signifying national and international success. Globally, the value of high grades is evident: they are more likely to equate to higher pay in employment or entry to highly rated universities. Such views are situated in a theory of social mobility too, a political carrot and stick: do well and the rewards are great, but fail and your future is bleak. Such is the power of educational outcomes.
Contemporary educational aims rest on the notion that “competition is a good thing” and that its promotion is an obvious good. Presenting this ideal in relation to assessment reveals a narrative that champions education for enterprise, employment, financial success and national economic good. This narrative normalises the concept of education as a place of competition where only the best win prizes, and where the need to compete is an expected human trait that is advantageous and inescapable. Any substantial change to the assessment system will lead to discussions about economics, because national testing is big business that generates millions of pounds throughout the assessment sector: for example, a GCSE in England costs about £35-80 per subject and A levels range from £85-160, depending on the subject (see the exam board websites for fees). In England, all national assessment business is of social and public concern, because the cost of test-taking in state schools is funded by the taxpayer. It’s perhaps simplistic to bring the argument down to finances, but the reality of the situation needs stating, because the economic impact of test-taking is an important part of wider concerns in debates about educational expectations.
The current modus operandi across education at all phases reflects the pervasive neoliberal discourses in society at large: it is the language of economic markets that defines such ideals, and the enthusiasm for competition connects ideas about learning, teaching and success in education. These connections are not easy to establish because they often include personal opinions, for example, whether we believe one subject to be “more important” than another in a curriculum, or whether it is better to have externally set and managed examinations of knowledge and to reject teacher-led assessments that result in formative or diagnostic feedback. Given the complexity of these issues and their importance in society, it’s time to have informed debates that engage everyone: not just key stakeholders in education, but the general public too. A way forward is a commitment to a programme of assessment literacy. This is not a pipedream; it could really change policy and practice and, perhaps most urgently, it would help to improve public confidence in educational assessments linked to high-stakes qualifications.
The leverage of literacy
Assessment literacy is not a new idea; it emerged in the 1970s through the work of US academics and researchers in education. Just as we know that literacies relating to reading, writing, number, media and so on impact human health, wealth and mobility, being literate about educational assessment improves understanding of educational achievement. Assessment literacy also improves our shared understanding of, and goals for, education, and it helps us on a global level to appreciate the different cultural and contextual conditions that influence educational systems around the world. Assessment literacy matters because there are ethical priorities tied to how all stakeholders use and interpret the results of educational assessment. This might seem like an ambitious goal, but thinking back to the letter to Santa, isn’t it time we faced the enormity of what testing does to young people at a critical time in their lives? Using the right kinds of assessment, those that “fit” the student, the subject, the context, the teacher and the college, serves to improve the overall educational experience. Such integrity to support ambitious improvements can only be achieved by pledging an allegiance to assessment literacy.
At a basic level, assessment literacy is “the ability to understand assessment and then use it appropriately within the educational context that you are working” (Richardson, 2022: 101). Whilst this appears straightforward, of course it is not: as any assessment researcher will tell you, there is no perfect assessment; all of it is riddled with error and uncertainty. Teachers know this but are often unable to make changes because they lack the agency to enact such decisions; it comes back to accountability and the context of the assessment. Power and agency to make changes are not always within their grasp, as Looney et al. (2018) argue:
When teachers assess more is in play than simply knowledge and skills. They may have knowledge of what is deemed effective practice, but not be confident in their enactment of such practice. They may have knowledge, and have confidence, but not believe that assessment processes are effective. Most importantly, based on their prior experiences and their context, they may consider that some assessment processes should not be a part of their role as teachers and in interactions with students. Teachers can, quite literally, have mixed feelings about assessment.
(Looney et al., 2018: 455)
Any serious consideration of promoting assessment literacy would have to include how teachers and students understand their summative and formative experiences. Two things provide a good evidence base for supporting assessment literacy: first, use the right assessment for ‘the job’, which means the results are recognisable as a characterisation of student performance; second, and perhaps most difficult of all, accept that good, influential assessment happens in classroom settings and that teachers are to be trusted in this matter.
It’s time for a new way of talking about assessment and affording it some kudos via a narrative of what I call assessment esteem: that is, assurance that different types of assessment all have their uses when applied appropriately, and that all have educational value, even if they don’t lead to a grade. Perhaps the biggest challenge for stakeholders in education and beyond is learning to live with uncertainty and accepting that there is not one true way to assess. Such a dramatic shift in thinking might improve the confidence we have in education and even reduce the stress and anxiety that sit uncomfortably alongside our current system of post-16 education.
Indeed, if we can reorientate the view of assessment, from a discrete measure of ‘stuff’ to a means of developing better reflection on and understanding of learning, this might actually prove to be more useful for us when facing and managing change per se.
Looking ahead, what might a solution be? A pragmatic resolution was suggested in 2021 by former teacher Tom Richmond. At 16, offer a reduced range of exams at GCSE: for example, keeping English and Maths at the heart of the curriculum and selecting, on a rotating basis, one or two other subjects year on year to assess at a national level. Such data sets would provide ample evidence to direct students’ post-16 plans and allow the redistribution of funding to resources and more support for those following vocational qualifications and A-level programmes of study. Sampling the subject focus at 16 in this way each year could reduce the competition relating to subject value and could open a new focus of assessment literacy: what matters to individuals in relation to how they learn and engage with different subjects across a curriculum.
The AoC is developing principles relating to assessment reform. These are underpinned by the need for clear aims and values that can be understood and used by stakeholders, in particular the need to align post-16 assessment with life outside the college. Central themes will run through the principles to ensure that they are based on high-quality evidence and on contemporary approaches to learning (including new technologies), and that they are sustainable. As such, the AoC proposes the following:
- Assessment which promotes inclusion and equality
- Assessment which serves the needs of learners and the curriculum
- Assessment which values achievement and supports progression
- Reducing the workload and cost of assessment
- Applying new technologies to support assessment
- A policy and regulatory framework which is fit for purpose
- A clear process for change with all stakeholders on board.
What these principles reveal is a commitment to change and an acknowledgement of what assessment could be. They reflect the fact that there is a need in England to revisit the persistent but old-fashioned form and structure of A levels and the entirety of the vocational sector at post-16.
A central tenet of assessment literacy across all phases of education must include an informed, equitable and balanced discussion of how different assessment formats are not in competition with one another, but instead are appropriate to context, subject and individual. We know that a student’s competency and skill in football cannot be judged by asking her to write a timed essay about it, yet we privilege subjects that require that kind of assessment. Given the extent to which our knowledge and understanding about learning, teaching and assessment has evolved over the past two decades alone, surely change must be achievable? It would not be easy; human beings don’t like change, but if the pandemic has taught us anything, it is that we can adapt fast and that there are often hidden benefits in those changes that we feared.
Final thoughts
Public understanding of assessment is something that is resolutely ignored in educational settings and it remains seldom discussed globally, yet assessment outcomes are continuously scrutinised and analysed across public domains via news websites, social media and other online forums. This is a glaring problem and yet those managing educational assessment (e.g. policymakers, exam boards, schools) do little or nothing to improve the situation.
Good information about educational assessment is hard to find. Even if you are able to ferret it out, what is provided is mystifying, puzzling, confused and full of misinformation, so how are we supposed to make sense of what ‘good’ looks like? These issues appear to lack solutions. Instead, the default position is to support the status quo: a continuously narrowing curriculum checked with summative tests that are deemed to meet a gold standard.
An emphasis on competition cultivates a culture that is based on unforgiving responses to failure or underperformance, for both teachers and their students. The system suggests that failure is challenging and something to be avoided; but conversely, students are advised to be resilient, to adapt and to accept failure: it will make you stronger, a better learner. Such incompatible messages foster unrealistic expectations and can result in drastic reactions when things don’t go as expected. It should be genuinely disturbing to us all that a small number of assessments shape our lives and those of children and young people.
We need a significant change across our approaches to educational assessment in this country. We all deserve to be assessment literate - simply put, we need to understand both the benefits and limitations of educational assessments in their broadest terms. Good assessment helps learners to engage with their learning, to be motivated to learn more and to be enthusiastic lifelong learners. In order to do this, we have to move beyond this narrow view that ‘good’ learners are those who always get the best grades; this is such an old-fashioned way to view education and is insulting to both teachers and students. To paraphrase Lewis Carroll, all those who take part don’t have to have prizes. Instead, we should reorientate our view of what constitutes an education, how we best support learners and learning in a range of contexts and at a variety of levels.
No one could have predicted that Covid-19 would happen, that educational establishments would be forced to close and that two national exam series would be severely disrupted. But now that this has come to pass, it has shone a light on the numerous risks in strongly privileging assessment by end-of-course exams.
I am confident and hopeful of change, because it can happen. Just to repeat a message to readers who might be skimming this text: I am not anti-testing or opposed to examinations as a form of assessment. Rather, terminal exams need to be put in their place, so teachers can be confident practitioners who are encouraged to use the best of assessment in ways that are creative and inspire students to be confident learners. I hope that these extraordinary circumstances provide new opportunities to look at how we assess and award the educational achievements of our young people. It’s 2022, not 1902. It’s time to invest in assessment for teaching, for learning and to create a better public understanding of this messy, fallible facet of education. Perhaps if we take some chances, we might decrease the pressure in the system and then teenagers won’t feel the need to write to Santa asking for help instead of presents.
References:
Looney, A., Cumming, J., van Der Kleij, F., and Harris, K. (2018). Reconceptualising the role of teachers as assessors: teacher assessment identity. Assessment in Education: Principles, Policy and Practice 25, 442–467.
Richardson, M. (2022). Rebuilding public confidence in assessment. London: UCL Press.
The views expressed in Think Further publications do not necessarily reflect those of AoC or NCFE.