Saturday, 22 July 2023

Formative assessment does not improve learning

Education in the UK is susceptible to passing fads and fashions, which often results in a knee-jerk reaction from those headteachers who fall into the trap of believing it is important to have in place every new shiny bell and whistle before an inspector rocks up at the gates. The truth of managing a school well is quite different; it is not the job of a headteacher to impose a never-ending cycle of initiatives. Rather, being a headteacher is about being the intelligent, humble custodian of an educational institution for a period of time. Doing the job well means being well informed, careful, and strategic. Our knowledge of how learning occurs continues to increase and mature – headteachers should, of course, be cognisant of this new knowledge and should plan to bring those advantages to their pupils in a well-thought-through approach with the support of their well-trained staff. This is a far cry from forcing intelligent teachers to implement unproven fads and fashions at the drop of a hat.

A major problem arising from the UK’s tendency to take the scattergun, shiny-new-toy approach is that the meaning of highly effective approaches – ones with firm findings from research to support them – is lost or misconstrued. Instead of careful integration of new educational knowledge, all too often we see well-defined and well-designed pedagogical improvements reduced to meaningless, mind-numbing initiatives.

Take, for example, the knowledge discovered and refined over many decades by educators, cognitive scientists and psychologists: there is clear benefit to pupils when they are asked to recall previous ideas, and this benefit is present even when pupils are forced to retrieve memories simply by encountering questions, without the need to answer them correctly – a ‘testing effect’ arises, where learning is improved as pupils necessarily have to re-member, which is to say they must create a new memory. It is an interesting finding and is reliably replicable in the laboratory and in the classroom. Headteachers and teachers should be aware of these benefits and plan for occasions when pupils will be required to retrieve prior knowledge. There is much more to it, of course – the literature on the ‘testing effect’, ‘retrieval practice’, ‘spaced learning’, ‘method selection’ and myriad other intriguing consequences of recalling earlier learning is extensive and goes back many decades. Teachers have known about these effects for a very long time indeed and, for many expert teachers, it is an embedded stratagem in their repertoire of pedagogic choices. All well and good. But then…

Well, then along comes a fresh packaging of these old ideas and a PR campaign to promote them. However, contained within the fresh, shiny new packaging is not a detailed, nuanced, complex exploration of a profound idea, but a significantly diminished version – one designed to fit with today’s need for pithy explanations and easy implementation. In our example, the result is a diktat handed down by the headteacher, who does not wish to engage with the literature and is content with a pithy summary, demanding that all teachers must now bolt a 10-minute ‘retrieval practice starter activity’ on to the beginning of every lesson. And so, once again, the profound becomes the mundane, the impactful becomes the time sink, and the years of research and development become the easy-to-roll-out initiative. Box ticked, inspector happy. Or so the unthinking headteacher smugly assumes.

There is, of course, nothing new about this problem. The dumbing down of educational discourse has mirrored the very same dumbing down in all public debate since the mid 1990s. Though, given that education should be in the very business of defending knowledge and truth, it is particularly saddening that schools have given in to the modern pressure to do away with nuance and conviction, replacing them with anodyne, simplistic consensus.

I recall attending a training day in the late 90s at which a colleague I had hitherto regarded as smart and diligent presented a session on Paul Black and Dylan Wiliam’s rather excellent assessment summary, ‘Inside the Black Box: Raising Standards through Classroom Assessment’, which I had already read with interest. My colleague rushed through a few slides and ended with an instruction to all teachers that their lessons must now contain ‘formative assessment’ because ‘formative assessment improves learning’. This was curious for two reasons. Firstly, how odd it was to assume that any teacher sitting in the hall that day did not already frequently use formative assessment approaches as a normal part of their teaching. Secondly, it is incorrect.

After about the fifth or sixth time my colleague said the line, ‘formative assessment improves learning’, I raised my hand and asked to speak. A groan went up from those teachers who just wanted all of this to end. And then I said, ‘it simply isn’t true to say formative assessment improves learning. It does not. It is the actions one takes based upon this formative assessment which might possibly improve learning’.

I didn’t think this a controversial thing to point out, but his presentation contained absolutely no discussion of what pedagogic actions teachers could take based on different outcomes from the formative assessment activities he was promoting (which, by the way, appeared to be colouring in pupils’ names in either red, amber or green on a spreadsheet for no discernible reason).

So, there are two outcomes from the recent trend to dumb everything down: really bad ideas make it into the classroom and cause teacher burnout from initiative overload, and really good ideas are so diminished that they become easy to dismiss or to demonise by those who do not act in good faith.

By way of example, consider the debate around Deliberate Practice. Deliberate Practice is a long-established approach which, although variation exists in its implementation depending on circumstances, has a set of core elements defining its use. It has its proponents and opponents. And, because so many people refuse to engage properly with the literature, it has been easy for its opponents to claim Deliberate Practice is ineffective and, therefore, to steer teachers away from its use. Perhaps the most egregious example of this was the 2014 meta-analysis by Macnamara et al. This study set out to diminish the importance of Deliberate Practice and concluded it is ‘not as important as has been argued’. Many people quickly pulled together pithy reports (or even Tweets) to announce the nail in the coffin for Deliberate Practice. Yet few (perhaps none) bothered to read and stress-test the report. In fact, Macnamara et al made this claim even though their own report found an effect size of 0.38 for the influence Deliberate Practice has on performance. You may well ask: is an effect size of 0.38 worth changing policy over? Well, to put that figure in a more everyday context, here are the effect sizes of certain behaviours on mortality: obesity (0.08), excessive alcohol consumption (0.13) and smoking (0.21). These are much lower, yet we all agree they are not to be ignored (indeed, consider how much public money is spent trying to change these behaviours and you’ll have a good sense of how important society thinks these effect sizes are).

So Macnamara et al find a significant effect size and yet the wording of the report is used by many to claim Deliberate Practice is not important. That’s odd. But nowhere near as odd as the truth.

Of the 88 studies Macnamara et al used in their meta-analysis, 18 were not even about Deliberate Practice. This enabled the authors to manufacture a lower effect size. When those 18 studies (which contribute 45 effect sizes) are removed from the calculation, the effect size for Deliberate Practice increases still further. Why include 45 effect sizes in a report about Deliberate Practice when those effect sizes are completely unrelated to it? Could it be because those who wish to oppose (or support) a specific approach (perhaps because of their own ideological beliefs) know the teaching profession has succumbed to the soundbite? Thankfully, we have the likes of SD Miller et al, who were not willing simply to accept a pithy line – this group re-examined Macnamara et al (2014) in their 2020 paper ‘To be or not to be (an expert)’ and highlighted the flaws in its methodology.
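To see how padding a meta-analysis with off-topic studies drags the pooled estimate down, here is a minimal sketch of equal-weight effect-size pooling. All of the numbers are invented for illustration only; they are not the actual figures from Macnamara et al (2014), and real meta-analyses weight studies by precision rather than equally.

```python
# Hypothetical illustration: including unrelated studies dilutes a pooled
# effect size. Every number here is invented for demonstration purposes.

def pooled_effect(effects, weights):
    """Weighted mean effect size (a simplified fixed-effect pooling)."""
    return sum(e * w for e, w in zip(effects, weights)) / sum(weights)

# Suppose the genuine Deliberate Practice studies average an effect of 0.45,
# and 45 off-topic effect sizes averaging 0.10 are mixed in.
deliberate = [0.45] * 70          # hypothetical on-topic effect sizes
unrelated  = [0.10] * 45          # hypothetical off-topic effect sizes

with_unrelated = pooled_effect(deliberate + unrelated, [1.0] * 115)
without        = pooled_effect(deliberate, [1.0] * 70)

print(round(with_unrelated, 2))   # → 0.31  (diluted by off-topic studies)
print(round(without, 2))          # → 0.45  (after removing them)
```

The arithmetic is trivial, but that is the point: any low-effect material folded into the average pulls the headline number down, which is exactly the pattern described above.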

So, should teachers take time to plan for Deliberate Practice and formative assessment and a whole host of other approaches painstakingly evaluated for efficacy over decades? Yes. But…

As I said to my colleague all those years ago, formative assessment does not improve learning, it is the actions one takes based upon the formative assessment that count.

Learning is complex. There is no single method that can simply be deployed so that all will be well. Rather, it is about combining tactics and choices, in real time, with real teachers and real pupils in a responsive dialogue with each other. Deliberate Practice does have a good impact on performance, but it is just one of many ways of increasing the chances of successfully bringing about long-term, durable learning.

We should, therefore, deploy as many of these proven approaches as we can as pupils ascend from novice to expert with any new idea. There is a cumulative effect. In much the same way that taking steps to avoid obesity (ES = 0.08) does not, on its own, really shift the needle in terms of how long one is likely to live, combining it with avoiding excessive alcohol consumption (0.13), smoking (0.21) and other known impactful behaviours means the cumulative effect really starts to make all the difference. As a pupil moves on from a novice appreciation of a new idea, teachers can draw on proven methods of checking and securing prerequisite knowledge, and of bringing about awareness of new knowledge by making connections. They can move that knowledge from an initially inflexible state to a flexible one, so that the motivation to push ahead can be achieved and teacher and pupil can continue the dialogue in a responsive cycle of teaching and learning. Eventually the pupil gains an automaticity with the new idea and can use this success to move through naive, purposeful and, finally, deliberate practice. This, in turn, gives the pupil such secure foundations and fluency of understanding that they might make wider connections in knowledge and behave as a domain expert would, achieving a mature appreciation of the new idea. Moving through these phases of learning, the teacher deploys numerous proven tactics, each with its own effect. Standing alone, these individual effect sizes may well be small, but combined the overall impact is great.
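The cumulative argument can be made concrete with a deliberately naive additive model, reusing the mortality effect sizes quoted earlier. Real effects interact and do not simply add, so treat this purely as an illustration of why several individually small effects together matter.

```python
# Naive additive illustration of cumulative effects. Effect sizes do not
# really combine this simply; the point is only that small individual
# effects accumulate into something that clearly matters.
behaviours = {
    "avoid obesity": 0.08,
    "avoid excessive alcohol": 0.13,
    "avoid smoking": 0.21,
}

# Each effect alone looks modest...
assert all(es < 0.25 for es in behaviours.values())

# ...but summed (under this naive model) the combination rivals the 0.38
# found for Deliberate Practice on performance.
combined = sum(behaviours.values())
print(round(combined, 2))  # → 0.42
```

The same logic applies to classroom tactics: no single proven method carries the lesson, but a repertoire of them, deployed across the phases of learning, accumulates into a large overall impact.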

In my 2019 book, Teaching for Mastery, I described the journey through these phases of learning in a single diagram of the mastery cycle and emphasised the importance of not breaking the cycle. The elements are of little utility when used in isolation – the power comes from combining them in an expert teacher’s repertoire. Do not allow a consultant to tell you things like ‘formative assessment improves learning’ without insisting the conversation goes further, to an acknowledgement that all educational interventions, formative assessment included, are but one part of a whole.

2 comments:

  1. This point shouldn’t need making.
    But it absolutely does.

  2. What a refreshing read! The first two paragraphs capture the rot in our education – the pulse. We beat around the bush about pay and workload and fail to see that the elephant in the room is an overstuffed bag of initiatives emptied out in classrooms by leaders who are not at the chalk face. During my PGCE we had a session by Harrison – she brushed aside my question about how her assessment model was going to work in Britain when her research was on education systems in other countries. I guess I was questioning whether the model was for “real teachers & pupils”. New practices must be debated fully by those who will be implementing them, and they need to be trusted to do what is fit for them and their pupils.
