Data Review – Leading a Department 6 (Termly)


This is the sixth part of the #LAD blogs, looking at some key ideas, thoughts and/or systems that might be of use to a department lead. This week looks, for the first time, at a termly consideration – how best to analyse termly data, such as end of unit tests or possibly mock exams. I share some of the thoughts and ideas that go into planning this termly event.

While this system has been listed in the ‘every term’ section, it is important to note that not all ‘terms’ are created equal. For some schools the traditional three term, six half term model holds true, while other school settings may have subdivided their year into larger or smaller chunks, terms, learning cycles or whatever they might be called. Either way, this system should be run after the pupils have completed a substantial assessment, such as external or mock exams, an end of unit test and so on.

It helps with this section to explain what was wrong with a previous system, one that I am sure some schools may still be using. It’s not their fault; this system is common in most areas where assessments of any kind are administered and acted upon. The original system we had was:

– Pupils sit an exam paper.
– The exam paper is marked by the teacher.
– The data is uploaded onto a shared document/system such as SIMS or Bromcom.
– Pupils are ranked by their outcome, which could be a score or an assigned grade.
– The pupils who failed the paper are identified – in this case, let’s say it is a GCSE mock and these are the pupils who achieved a grade three or four.
– Those pupils are targeted for intervention by the class teacher or the department. This could be to try and increase the school’s progress score, as well as its key performance indicator of the percentage of pupils who passed the course.

This system is, or was, the standard in many schools, and at the outset it does look like a good one to follow: find the pupils performing worst on the assessment and focus on closing their perceived gaps. However, when you look more closely, and in particular when you consider some of the key measurables schools are held to account on, this has a few drawbacks:

– Pupils who achieve well in the view of the teacher – let’s say a grade seven – might actually be capable of more, if for example they previously had high achievement coming out of key stage two. However, as they got a strong pass, they might be overlooked by staff for in-class intervention.
– Pupils who achieve poorly in the view of the teacher – let’s say a grade three – might actually be over-achieving, if for example they previously had lower than expected achievement coming out of key stage two. However, as they got a GCSE fail, they might be included in the intervention list.
– From a moral point of view, the lower achieving (in the view of the teacher) pupils might always find themselves on the receiving end of these extra-curricular sessions due to their comparatively low grades or scores.
– From an individual pupil point of view, you will always struggle to let pupils reach their full potential if you always measure them against the standard grades, when in fact they would be better served by looking at individual progress from their key stage two starting point.

So what is one solution? First off, I am not claiming to have invented a solution for this problem. I have just used some simple sums to best find the pupils we want to support. The aim of this strategy is to find a way that truly targets the pupils who are most in need of help: who, from a simple analysis of the data, requires more support in lessons when compared to the outcomes achieved by their peers? The steps outlined below are how we think we can do this:

Before the assessment – how to group the pupils

1) Pupils within the year are placed into groups based on any measure you wish. This is just on a spreadsheet; it doesn’t have to match the classes they are in for your subject. It doesn’t even have to be shared with pupils or other staff. It simply has to exist to help us compare ‘like-for-like’, whatever that might be for you. For example, this could be based on reading age, key stage two average point scores, GCSE minimum target grades in particular subjects and so on. Ideally you want a relatively small number of groups, so you can have a relatively large number of pupils in each group. Four groups of twenty five would work well with a year group of one hundred pupils. In our example we will call them groups A, B, C and D.
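Step 1 can be sketched in a few lines of Python. This is a minimal illustration only – the pupil names, the prior-attainment scores and the choice of key stage two points as the measure are all invented for the example, not part of any real data set.

```python
def assign_groups(pupils, n_groups=4):
    """Split pupils into equal-sized groups by a prior-attainment measure.

    pupils: list of (name, prior_score) tuples, e.g. key stage two points.
    Returns a dict mapping group labels ("A", "B", ...) to lists of names.
    """
    # Sort highest prior attainment first, then cut into consecutive slices.
    ranked = sorted(pupils, key=lambda p: p[1], reverse=True)
    size = -(-len(ranked) // n_groups)  # ceiling division
    labels = [chr(ord("A") + i) for i in range(n_groups)]
    return {label: [name for name, _ in ranked[i * size:(i + 1) * size]]
            for i, label in enumerate(labels)}

# Eight pupils and four groups of two, just to keep the example small.
pupils = [("Amy", 110), ("Ben", 104), ("Cara", 99), ("Dev", 95),
          ("Eli", 92), ("Fay", 88), ("Gus", 85), ("Hana", 80)]
groups = assign_groups(pupils)
```

In practice the grouping could come straight from a sorted spreadsheet column; the code simply makes the ‘like-for-like’ cut explicit.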

After the assessment – how to analyse comparative performance

2) Input the raw score/percentage for each pupil – it shouldn’t be a grade. This is for two reasons: it removes the uncertainty of what grade the work actually is at, and it allows us to better compare the pupils. This is easy to see when you consider that a mark increase of 10% could still result in the same grade. Grades reduce the resolution of the data.
3) For each of the groups, A-D, work out the average score for the pupils. So take the twenty five pupils from group A, and calculate an average of their results.
4) For each pupil, work out how far away they were from the average calculated for their group. Those who did well, scoring higher than the average, will have a positive difference. Those who did poorly, scoring lower than the average, will have a negative difference.
5) Rank the pupils within their group based on this ‘distance from the mean’. In this example each group A-D will have a rank of pupils from one (highest score above the mean) to twenty five (lowest score below the mean).
6) Divide this ranking into four sections, so rank 1-6 will be in the first quarter, rank 7-12 will be in the second quarter, rank 13-18 will be in the third quarter and rank 19-25 will be in the fourth quarter. 
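Steps 2 to 6 can also be sketched in Python. Again this is an illustration, not the actual spreadsheet: a group of eight pupils with invented marks is used rather than twenty five, so the arithmetic is easy to check by hand.

```python
def analyse_group(scores):
    """Steps 2-6 for one group: distance from the group mean, rank, quarter.

    scores: dict of pupil name -> raw score (a mark or percentage, not a grade).
    Returns a dict of name -> (distance_from_mean, rank, quarter).
    """
    mean = sum(scores.values()) / len(scores)
    # Step 5: rank 1 is the highest score above the mean.
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    results = {}
    for rank, name in enumerate(ranked, start=1):
        # Step 6: ceil(4 * rank / n) splits the ranking into four quarters.
        quarter = -(-4 * rank // n)
        results[name] = (round(scores[name] - mean, 1), rank, quarter)
    return results

# A group of eight with invented marks, so the mean (58.0) is easy to check.
group_a = {"Amy": 82, "Ben": 74, "Cara": 68, "Dev": 60,
           "Eli": 55, "Fay": 49, "Gus": 41, "Hana": 35}
results = analyse_group(group_a)
```

With a group of twenty five, the same ceiling split produces exactly the 1-6, 7-12, 13-18 and 19-25 sections described in step 6.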

After the assessment – what to do with the data

What to think about when looking at the four quarters:
7) The pupils who are in the first quarter (from all four groups, A-D) are doing well compared to their grouped peers. Remember the peers are determined by how you grouped them in step one. These pupils could be targeted for stretch and challenge work above and beyond what is normally expected. It is worth remembering that the pupils in the first quarter may not all have the highest grades, just higher comparative outcomes when compared to their peers within the same group.
8) The pupils who are in the second quarter (from all four groups, A-D) are doing well-to-average when compared to peers. These might form your early warning group: pupils who are approaching the average mark for their group.
9) The pupils who are in the third quarter (from all four groups, A-D) are doing less well-to-below average when compared to peers. These might be the pupils where you target your introductory interventions, for example in-class priority for questioning and support.
10) The pupils who are in the fourth quarter (from all four groups, A-D) are doing poorly when compared to peers. These might be the pupils that you ask to attend booster sessions, provide extra resources to, or make contact with home about, as they are struggling the most with the assessment when compared to the pupils they were grouped with initially. There might be some top scorers in this group but remember: they are there because, comparatively, they have underperformed against their peers in the same group.

Everything seems ok, but what are the caveats? There are some instances where, depending on pupil group and outcome, some undesirable results can present themselves. These are:

– Imagine a pupil in our Group A (assuming that we have grouped on key stage two average point score) who achieves a mock GCSE exam score of 87%. Had she sat the actual past paper she would have got a grade 9. All of her peers in the same group achieved 90%+. Even though she topped out on the exam paper in terms of grade achieved, she would still be in the lowest quarter. If she were targeted with intervention this might cause confusion (for parents, the pupil and staff who didn’t understand the system) as to why a pupil is being asked to stay behind for booster sessions when they got the top grade. This is of course an extreme example, but I have seen similar with top-end pupils where there is little variance in the marks achieved on a paper.
– The opposite is also true. Let’s assume a whole year group sat a maths GCSE mock paper and all scores were below 15% – a truly poor set of outcomes that a school might dread. With this system, 50% of the pupils, those in the first and second quarters, would receive no support if the quarter-groups alone were used to gauge its distribution. The focus would go on the bottom half of the year even though all pupils could be failing the exam.
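One possible safeguard against both caveats – my own sketch, not part of the system described above – is to combine the comparative quarter with an absolute threshold. The 40% pass mark and 85% ‘secure’ mark below are invented numbers purely for illustration; a department would set its own.

```python
def needs_support(score, quarter, pass_mark=40, secure_mark=85):
    """Combine the comparative quarter with absolute thresholds.

    Flags a pupil for support if they are below the pass mark (so a whole
    year group scoring under 15% is never ignored), and never flags a pupil
    whose score is already secure (so the 87% pupil is left alone).
    """
    if score < pass_mark:
        return True   # second caveat: low absolute scores always get help
    if score >= secure_mark:
        return False  # first caveat: near-top scores are never pulled in
    return quarter >= 3  # otherwise fall back on the comparative quarters
```

Under these assumed thresholds, a pupil on 14% is flagged even from the top quarter of their group, while the 87% pupil in the bottom quarter is not.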

It is for this reason that this quarter system is used in conjunction with others. It only paints one picture of pupils’ outcomes, and that picture is a comparative one between pupils: not how well the pupil did, but how well they did compared to others. Other items that may feed into your decision about who needs support could include:

– Prior attainment
– Special educational needs
– Flightpath
– Grades on past papers
– Attitude to learning
– Attendance
– And many more.

As is common here, the above list is not exhaustive. If anything it simply goes to prove the point that there is a vast number of variables that go into how young people learn and remember skills over time. At best, what we have tried to do here is develop a tool that might be better placed to help identify those who might have fallen between the cracks of the assessment process: high scorers who are underperforming compared to their peers (and who would otherwise miss key intervention and support from their teacher in or after lessons), and low scorers who are over-performing compared to their peers (and who might miss the praise associated with doing well, and the stretch and challenge opportunities to do even better).

This option also allows you the opportunity to compare classes to one another, to see if there are strengths in teacher practice that can be shared across the department, or possibly if a colleague requires more support in their day-to-day practice or subject or curriculum knowledge. For example, is there a particular mixed ability class that has a higher proportion of top quarter pupils (anything above a quarter of the class), or is there another class where over half of the pupils are in the bottom quarter of their groups?
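The class comparison can be sketched too. This assumes each pupil already has a quarter (1 to 4) from the analysis described above; the class names and quarter lists are invented for the example.

```python
from collections import Counter

def quarter_profile(class_quarters):
    """class_quarters: dict of class name -> list of pupil quarters (1-4).
    Returns a dict of class name -> {quarter: proportion of the class}."""
    profiles = {}
    for cls, quarters in class_quarters.items():
        counts = Counter(quarters)
        profiles[cls] = {q: counts.get(q, 0) / len(quarters)
                         for q in (1, 2, 3, 4)}
    return profiles

classes = {"10X": [1, 1, 1, 2, 3, 4, 1, 2],   # half the class in the top quarter
           "10Y": [3, 4, 4, 4, 2, 4, 3, 4]}   # over half in the bottom quarter
profiles = quarter_profile(classes)
```

Here the hypothetical 10X would stand out for practice worth sharing (well above a quarter of the class in the top quarter), while 10Y would prompt a supportive conversation.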

I’d be interested to hear your thoughts on how you use data to help pupil progress. What do you do that you feel is novel and effective? Feel free to drop me an email via my Contact Page.