How do we make education evidence-led?

by Stephen Tall on March 10, 2014

That was the topic up for debate on Saturday at the Lib Dem conference fringe meeting organised by CentreForum and the Social Liberal Forum.

I was one of the four panellists – speaking with my day-job hat on – alongside James Kempton from CentreForum, former Lib Dem education spokesman Lord (Phil) Willis, and current Lib Dem schools minister David Laws MP.

Usually when I speak at meetings, I work from scribbled notes. As I was representing the Education Endowment Foundation (EEF), I actually wrote a text, pasted below, and worked from that – which inevitably made my “5 minutes” (ahem) way too long. Apologies to the chair, Helen Flynn! (I ditched about half of what appears below.)

James Kempton highlighted the excellent work that he and CentreForum have been doing looking at the crunchy, crucial issue of high-quality CPD for teachers. That might sound very nuts-and-bolts – well, that’s because it is nuts-and-bolts. If only recent governments had devoted more time to it rather than the more contentious and less useful issue of school structures.

Phil Willis delivered a passionate denunciation of the Gove agenda as evidence-free which was enthusiastically received. And he added an edge to the debate by saying it was a shame the government had supported the EEF, arguing the grant we received from the Department for Education was a waste of money because there’s already too much research. However, as he then went on to call for more teacher-driven research – which is exactly what the EEF does – I decided to assume he must think we’re a good thing even if he doesn’t realise it.

David Laws was, as ever, thoughtful, precise and measured – honest about some of the disagreements between the two parties in the Coalition’s education policy (eg, the importance of the mediating tier between Whitehall and schools); honest, too, that there were aspects around initial teacher training and CPD where more focus was needed.

Anyway, here’s what I had to say…

How do we make education evidence-led?

The problem: the attainment gap

I want to focus on just one aspect of the evidence in education debate. It’s the one that the charity I work for – the Education Endowment Foundation – was created to help tackle: the attainment gap which divides rich and poor pupils. It is, I think, the most important issue facing British education today – especially so for the Lib Dems who want “to enable everyone to get on in life”.

Many of you will know the statistics already, but let’s remind ourselves:
• 1-in-4 children grows up in poverty;
• The attainment gap between children from rich and poor backgrounds is detectable as early as 22 months of age;
• 100,000 children fail to reach the expected level in English at age 11 – one-third of these are children eligible for Free School Meals (FSM);
• FSM-eligible pupils are half as likely as their better-off classmates to achieve 5 good GCSEs; and
• Young people with poor educational attainment are much more likely to end up not in education, employment or training (NEET).

Put simply: there’s an attainment gap between rich and poor kids when they start school. And that gap gets bigger every year they’re at school.

So educational inequality is a big problem. And it’s one where the UK fares worse than our closest equivalents in the OECD.

Are schools the answer? In part, yes

Given a lot of what I’m going to talk about is school-related I want to make something clear at the outset. Schools cannot solve inequality. We should not expect them to solve inequality.

However, I do believe schools can make a difference in raising the attainment of disadvantaged pupils and closing the gap. In fact, lots of them are doing it already. Nearly one in seven secondary schools in 2012 saw pupils eligible for free school meals perform above the national average for all pupils at GCSE.

So we know it can be done. But it’s harder to know how it can be done.

What do schools need?

Is it simply about spending more money?
Well, the relationship between expenditure on schools and pupil attainment is fuzzy, both internationally and domestically. For instance, over the period of the last Labour government, spending per child went up in real terms by 56% in primary schools and by 75% in secondary schools. Did attainment increase by the same amount in that time? The evidence from international league tables – PISA, PIRLS, TIMSS – suggests not.

Is it about increased accountability and compliance?
When we talk to teachers and head-teachers there are few who think the problem is that Ofsted needs more powers.

Is it about (somehow) creating more “good / outstanding” schools (to use Ofsted’s terminology)?
That, after all, was the driving motivation behind Labour’s sponsored academies and the Coalition’s ‘academisation’ and free schools drive. I’ve no problems with school improvement, but don’t expect that in itself to solve the attainment gap. Children from poorer backgrounds perform, on average, worse than their wealthier classmates whichever type of school they are in, whether ‘poor’ or ‘outstanding’.

The EEF is an independent charity. We don’t take a view on the politics. We look at the data. As Andreas Schleicher would say, “Without data, you are just another person with an opinion”. And when it comes to schools no-one’s short of an opinion…

But the data is clear enough: despite the permanent revolution in schools, attainment hasn’t increased, and educational inequality persists.

What will make a difference?

At the EEF we think evidence can help, supporting teachers and schools to help them do an even better job with their pupils without spending more time or more money than at present. Our approach is quite simple. We start with what we know already.

Four years ago The Sutton Trust, the EEF’s founding partner, asked Durham University to produce a practitioner-focused synthesis of the international research on what’s known about improving teaching and learning. The reason they did this – well, it’s all Nick Clegg’s fault, really.

The Pupil Premium – money targeted at pupils from low-income households – is a fine idea. But the worry was (and to some extent still is) that schools wouldn’t spend it in ways that would make a real difference to the attainment of the children it was designed to help. So The Sutton Trust decided to put together a Teaching and Learning Toolkit designed to guide schools towards the ideas most likely to make a difference to poorer pupils.

This Teaching and Learning Toolkit summarises over 10,000 high-quality research reports from around the world. Its findings were well known in the field of educational research, but far less well known in schools.

For example:
• reducing class sizes is a poor way to spend money if you want to boost attainment;
• setting or streaming classes by ability has a negative impact on children’s attainment; and
• teaching assistants, on average, make little difference to the attainment of pupils, and can sometimes even have a negative impact on pupils’ attainment.

These are, I stress, average impacts and mask much variance. Teaching assistants, if deployed well, can be beneficial. But too often they’re not given enough training or feedback. There are some 230,000 teaching assistants employed at an estimated cost of £4bn across the schools system. That’s a lot of money currently not being spent as effectively as it could be.

How does the EEF aim to help?

The EEF’s role is to build the evidence-base. We are fortunate enough to have a £125m grant from the Department for Education. And what we do is ask anyone with a good idea they think will help raise the attainment of low-income children in particular, and with some evidence of promise, to apply to us for a grant. We then test these ideas, rigorously, appointing independent evaluators to find out what impact they actually have. Essentially we’re trying to find answers to questions.

Here are some examples:

• Does learning music help children academically? There’s plenty of correlational evidence that it does, but very little causal evidence. So we’re testing in 15 primary schools whether singing in a choir or playing a musical instrument has a knock-on impact on children’s attainment.
• Can a behavioural programme focusing on pupils with prior records of truancy and exclusion not only help solve their behavioural problems but also improve their academic performance? We’re testing that in 40 secondary schools in London.
• Does teaching children to play chess boost their attainment in Maths? We’re testing that in 100 primary schools across five cities.
• Can peer observation by teachers, using a programme called Lesson Study, improve practice? We’re testing that in 80 primary schools across England.
• We’re testing controversial topics, such as incentives. Do pupils respond to financial or other rewards? Will parents engage more with their children’s education if they’re paid to take time off work to attend classes which equip them with the skills to support their children?
• Do Saturday schools actually improve attainment?
• Do volunteering programmes like the Duke of Edinburgh Award boost attainment?
• Can a school improvement programme modelled on London Challenge work outside London in narrowing the attainment gap?
• What impact, if any, does giving children a nutritional breakfast have?

In total, some 2,300 schools – 1-in-10 across England – are engaged in EEF-funded projects reaching more than 500,000 pupils.

How is impact best measured in education?

Of the 72 projects the EEF has grant-funded, 62 are being evaluated as randomised controlled trials (the others use quasi-experimental designs or are developmental pilots). Randomised controlled trials (RCTs) are often the controversial bit of evaluating what works in education, because they involve a control group – a group of pupils who don’t receive the intervention being tested, so you can measure the difference it makes compared with business-as-usual. They have been common in medicine, though only since the 1960s.

They shouldn’t be controversial among this audience, because their logic goes back to John Stuart Mill. They’re based on Mill’s ‘method of difference’, through which the observed difference between two groups can be measured to identify the ‘active ingredient’ which works. Quite simply, they are the best way we know of finding out whether good intentions are backed up by demonstrable impact.

Two quick examples of why RCTs are crucial… The first is from Test, Learn, Adapt, co-authored by Ben Goldacre, though the example comes from beyond education:

‘Scared Straight’ was a programme developed in the US to deter juvenile delinquents and at-risk children from criminal behaviour by exposing them to the harsh realities of leading a life of crime, through interactions with serious criminals in custody. The theory seemed sound, and early studies, including in the UK, showed astonishingly high success rates. However, none of these evaluations had a control group showing what would have happened to the participants if they had not taken part in the programme. When ‘Scared Straight’ was tested through RCTs, it was discovered that it in fact led to higher rates of offending. Put simply, it cost the taxpayer a significant amount of money and actively increased crime.

But RCTs are not enough. Put too much uncritical faith in evidence and it can lead you astray. Here’s an example from the work of Nancy Cartwright and Jeremy Hardie on the (mis-)use of educational research:

Small class sizes worked to raise reading scores in Tennessee, but the same policy failed in California. As so often, with hindsight the explanation is simple. If, as California did, you introduce smaller classes on a large scale very quickly, you of course need lots more teachers, so you end up with a lower average quality of teacher, because you have to recruit less experienced or less well-trained people. And of course you need many more classrooms, so you take space for reading classes from other activities which themselves contribute to the flourishing of pupils – and hence to their reading ability.

All the EEF’s evaluations will be reported publicly, no matter what they show – good, bad or in between – and their findings incorporated in the Teaching and Learning Toolkit now used by more than one-third of school-leaders up and down the country.

Next comes the difficult part which, with help from David Laws and the Department for Education, we’ve just begun – how to ensure evidence doesn’t just sit there, but gets actively used in schools. We’ll soon be announcing a new set of projects that look at how you can mobilise knowledge of what works in reducing educational inequality.

Three closing points

First, evidence-based policy is not about experts telling professionals, whether in teaching or anything else, what they must do. It is about arming them with the knowledge they need – and increasingly want – to be evidence-literate. And that evidence-literacy is crucial in working out what will work best in the context of your school. Because evidence can only tell us that something has worked there, not that it will work here.

Secondly, I think evidence is about professional empowerment. I don’t want evidence-led education; I want evidence-driven education. Whatever you may think about Jeremy Hunt as health secretary, you won’t find him barging into an A&E department telling the surgeons how to perform an appendectomy. I hope we can get to the point where teachers are able to turn to those in power – whether the education secretary, Ofsted or civil servants – who tell them how to do their job and require them to “Show me the evidence.”

Thirdly and finally, evidence is not a replacement for democratic accountability – unless your concept of democracy is implementing policy and ignoring its impact. It is about giving decision-makers, whether they’re professionals or politicians, the information they need to make informed choices about how best to put their values – your values – into action.