6 big benefits of effective data tracking


6 big benefits of effective data tracking is a great place to start if you are a beginning teacher, or if you are thinking about how to improve your outcomes. Let’s get stuck in …

What is data tracking?

Data tracking in post-primary schools is the measuring of pupil progress against baseline data. Teachers use it to support students in target setting, and then to measure whether students are working towards their targets. It is a very helpful tool for teachers and schools, when used correctly.

To track data effectively, you need two types of data: the first is a baseline measure and the second is data collected from pupil results, scores, assessments etc throughout the year. The more data collected, the more meaning you can draw.

In this article, we will explore:

  1. How to get started with data tracking in your school
  2. What is baseline data?
  3. Interpreting baseline data
  4. What types of data can be tracked?
    1. Accuracy of data
    2. Reliability of data
  5. 6 big benefits of effective data tracking

How do I get started with data tracking?

The best approach to data tracking is a whole school approach. We know this is true of all school improvement strategies: a whole school approach works best as we are all singing from the same hymn sheet, so to speak. Students and parents benefit from consistency across the school, with teachers all using the same language when discussing assessment and progress. And with all teachers pulling together, genuine progress can be made.

I have seen this in practice in a school I spent many years in. With the data led by a very enthusiastic and knowledgeable senior teacher, the whole school approach was embedded. All students were tested at the start of each key stage, and INSET days were spent on training and analysing data. In addition, parent information evenings helped to disseminate the information, and students were given a clear understanding of the process too. Several years of intense focus led to a significant improvement in results, with value-added results soaring. I know that not all improvement comes from one source, but I saw first hand how beneficial it was to focus so much on the data.

With that whole school approach in mind, why not raise this with your line manager, SLT or head teacher? You may be on the hunt for a whole school improvement focus, perhaps for a middle or senior leadership training programme. Or perhaps you are just ready to sink your teeth into a new project. 

If your school already has a good system in place for data tracking, the best thing for you to do is to get familiar with the system. Ask your data co-ordinator or SLT link for the data for each of your classes or students. This information may already be uploaded onto your school’s management information system (MIS), so you will be able to see the information when you look at your tracking sheets for each class.

Look at the ‘Interpreting baseline data’ section below for more help if you are looking at this data for the first time.

What is baseline data?

Baseline data is the starting point against which other scores are measured. Baseline data is used to give an indication of student ability before their course begins. It gives an indication of their ability across different areas such as their vocabulary, their ability to use numbers, to manipulate shapes, to identify sequences and patterns, and their ability to proofread. These tests are usually computerised and timed to ensure the baseline data is reliable and accurate.

There are highly effective testing companies and programmes which schools can use to measure the baseline ability of their students, including the CEM Centre, which offers MidYIS for KS3, YELLIS for KS4 and ALIS for KS5. CAT4 is another widely used testing programme, created and run by GL Assessment. These computerised tests provide a more accurate and meaningful baseline to measure against than prior attainment scores. This is because prior attainment is linked to socio-economic factors, attendance, effort, quality of teaching and learning, and many other factors. The goal of baseline data is to assess ability rather than learning.

For ease, the examples I use below are from MidYIS assessment data, but the quartiles, bands and distribution around the average of 100 are the same. This means that if your school uses CAT4, the interpretations below should still be valid and useful.

Examples of baseline assessment test questions

The assessment tests are often designed to be adaptive, which means the questions adapt to suit the respondent, with more complex questions added to allow the student to show the full range of their ability.

This is a sample vocabulary question with multiple choice options.
This is a sample maths question.
This is a sample non-verbal question.

Results of baseline assessments

Baseline assessments will generate scores for students for each category. Let’s take the MidYIS scores: results will be generated for the four areas of vocabulary, maths, non-verbal and skills, as well as an overall score. All of these scores will be standardised against the whole cohort, i.e. everyone in the country who took part in the assessment. This means that scores can be compared across different schools and regions, creating a national standard.

The scores for each category, as well as the overall score, are shown on a graph for each student. An example of this is below. You will see the score presented as a dot, and the confidence/tolerance band presented as a line on either side of the dot. This allows you to see the range within which that score is accurate. In this case, 95% confidence means the test is 95% confident that the student’s true score lies within the range shown by the lines.

MidYIS results graph

What do these numbers mean?

MidYIS (and other tests used to collect baseline data) scores are standardised around the mean score of 100. This means that a score of 100 is exactly in the middle of the cohort. In fact, the scores are divided into four bands, or quartiles: band A represents the top 25%, B the next 25%, C the 25% leading up to the average score of 100, and D the lowest-scoring 25% of the cohort.

 

Band   MidYIS Score
A      >110
B      100-109
C      91-99
D      <90
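If you keep your tracking in a spreadsheet or a script, the banding can be automated. Here is a minimal Python sketch of the table above; note that the published boundaries leave 90 and 110 unstated, so where those two scores fall is my own assumption:

```python
def midyis_band(score: int) -> str:
    """Band a standardised MidYIS score into the A-D quartile bands.

    Boundaries follow the table above; 110 is assumed to fall into
    band B and 90 into band C, as the table leaves both unstated.
    """
    if score > 110:
        return "A"
    if score >= 100:
        return "B"
    if score >= 90:
        return "C"
    return "D"

print(midyis_band(103))  # B
print(midyis_band(140))  # A
```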

The majority of pupils score between 85 and 115. Scores over 130 are in the top 2.5% of all students nationally. Over 126 places the student in the top 5%, over 120 places the student in the top 10% and a score of 100 puts the student exactly in the middle of all students nationally.
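Those percentile figures follow directly from the standardisation. Assuming the conventional standardisation to a mean of 100 and a standard deviation of 15 (check your provider’s documentation for the exact values they use), the national percentile for any score can be sketched with Python’s statistics module:

```python
from statistics import NormalDist

# Standardised score distribution: mean 100, SD 15 (assumed convention).
national = NormalDist(mu=100, sigma=15)

def percentile(score: float) -> float:
    """Percentage of the national cohort scoring below `score`."""
    return 100 * national.cdf(score)

print(round(percentile(100)))  # 50 -- exactly the middle of the cohort
print(round(percentile(130)))  # 98 -- the top ~2% of students nationally
```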

Interpreting baseline data

Using the bands as predictors for end of course outcomes

Baseline tests will give predictions for GCSE outcomes. This is where many teachers get frustrated, but if you can wrap your head around the theory, then you can use your own knowledge and experience of your students alongside the data, to help you to set targets. 

Let’s take fictional student Larry as an example. 

Larry achieves average scores across all four categories, giving him an overall score of 103, and MidYIS tells the teacher that Larry is predicted a B grade in English as a result. What does this really mean in practice? Well, it means that nationally, students who score in similar ways to Larry most frequently go on to achieve a B grade in English. But Larry never does his homework, dislikes English and rarely tries; Larry consistently achieves C or D grades. What does this mean? It means that Larry is underachieving. He has more ability than he shows. The teacher’s job is to encourage and support Larry, through effective teaching and learning strategies, to achieve his potential, which is a B grade.

Another teacher might say that Larry is highly engaged, works well and is consistently achieving A grades. How can this be? Well, Larry is beating the odds. He is working hard and adding value to his ability. Teachers know this. It happens all the time. Kids like Larry bring joy as they are committed and make great progress. We know that ability alone doesn’t get grades and we love to see our kids work hard. We also know that sometimes, we can unlock their potential. A well planned lesson, kindness and encouragement, interesting teaching approaches etc all combine to create great learning environments for students, and Larry is benefiting. This is the very definition of value-added.

Using the bands to measure progress

With the baseline data saved onto SIMS or your spreadsheet, it’s time to get teaching and get assessing progress. Most schools will use continuous assessment to monitor the progress of students. Working alongside the assessment strategy of your school, you can measure your students’ progress against their assessment outcomes.

Students who are predicted an A grade in your subject really should be able to achieve an A grade, once the skills have been taught and practised. If they are working below their target, your strategies can kick in. How will you support this student to achieve their target next time? What needs to change? 

Equally, students who are meeting or exceeding their targets can be rewarded to increase their confidence. Perhaps they can be further challenged or their target could be adjusted upwards as they are making great progress.

In my spreadsheet, I use a conditional formatting formula to automatically turn a student’s score green if their assessment is above target, yellow if on target and orange if below target. This helps me to see quickly who in the class needs a little encouragement or intervention.
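For anyone building their own sheet or script, the traffic-light rule is simple enough to write out. This is only a sketch of my colour logic, assuming the assessment and the target are already on the same numeric scale:

```python
def traffic_light(assessment: float, target: float) -> str:
    """Colour a cell: green above target, yellow on target, orange below."""
    if assessment > target:
        return "green"
    if assessment == target:
        return "yellow"
    return "orange"

print(traffic_light(6, 5))  # green
print(traffic_light(4, 5))  # orange
```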

Even vs uneven profiles - using the graph to indicate significant differences/biases

Students with even profiles are most likely to reflect their ability accurately. In other words, a mostly even score in vocabulary (reading and understanding the question) and skills (proof-reading and working under time pressure) will result in a student who is able to write well. An uneven profile, for example a high vocabulary score but a low skills score, might result in a student who struggles to write accurately under time pressure, despite having a wide vocabulary and plenty of understanding of words. Students with low vocabulary scores compared to the other three areas might have problems reading the question, resulting in lower exam performance in maths or science, despite their good understanding of the processes involved in the topic.

Understanding the profile and how to support students in their lower scoring areas can radically improve the outcomes for that student, and therefore for your whole class.

As a general rule of thumb, a profile is even when the confidence bars overlap. If there is clear space between two or more of the confidence bands, this indicates a significant difference which is worth investigating. 
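That rule of thumb can be written out explicitly. A sketch, where each band is given as a ± half-width around its score (the example scores and band widths are invented):

```python
def significant_difference(score_a: float, band_a: float,
                           score_b: float, band_b: float) -> bool:
    """True when there is clear space between the two confidence bands."""
    high_a, low_a = score_a + band_a, score_a - band_a
    high_b, low_b = score_b + band_b, score_b - band_b
    return high_a < low_b or high_b < low_a

# Skills 118 +/- 7 (111-125) vs overall 140 +/- 6 (134-146): no overlap,
# so the difference is significant and worth investigating.
print(significant_difference(118, 7, 140, 6))  # True
```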

Interestingly, many students know themselves well and have already made adjustments. For example, I know of students who sit at home with a dictionary and thesaurus when writing as they struggle to find the right word. Their profile may well indicate a lower vocabulary score, but their work might not reflect this as they are already compensating. They don’t need a graph to show them this. 

Measuring value-added

Value-added is a way of referring to the positive (or negative) influence of the teaching and learning strategies used by the student, the teacher, the pastoral system, the whole school, and any other factor which influences student outcomes. To measure it, outcomes are equated to numbers to make them comparable. For example, if a student is predicted a B grade and achieves a B grade, their value-added score is 0. If that student achieves a C grade, their value-added score is -1. In other words, they have under-achieved by one grade. Likewise, if they achieve an A grade, their value-added score is 1. 
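The arithmetic is simple enough to sketch in a few lines of Python; the grade-to-number ladder below is hypothetical, so swap in whichever grade set your school uses:

```python
# Hypothetical grade-to-number ladder; adapt to your own grade set.
GRADE_VALUES = {"A*": 6, "A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def value_added(predicted: str, achieved: str) -> int:
    """Positive = above prediction, zero = on it, negative = below."""
    return GRADE_VALUES[achieved] - GRADE_VALUES[predicted]

print(value_added("B", "B"))  # 0
print(value_added("B", "C"))  # -1
print(value_added("B", "A"))  # 1
```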

How is this a useful activity?

Well, for individual students, you can compare their outcomes across all subjects. You can identify areas where the student needs to focus. 

As an English teacher, I like to look at the mock results for Year 11 and Year 12 to identify the value-added across my class. Counting the A, B and C grades is a relatively meaningless activity. For example, if it is a top class within a grammar school and all students started the year on A grades, there is not much for me to celebrate if they all get B grades in their mock. However, if they came to me on C grades, I have added significant value. Assigning a score to the whole class makes this task quick. For example, an average value-added score of 0.5 means that the class are working (on average) half a grade above their ability … a hard-working class that is doing very well. If the value-added score is -1.5, that is an area of concern … what is the reason for widespread under-achievement?
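Extending the same idea, the class figure is just the mean of the individual value-added scores. Another sketch, with an invented grade ladder and invented mock results:

```python
from statistics import mean

GRADE_VALUES = {"A": 5, "B": 4, "C": 3, "D": 2}  # hypothetical ladder

def class_value_added(results) -> float:
    """Mean value-added for a list of (predicted, achieved) grade pairs."""
    return mean(GRADE_VALUES[achieved] - GRADE_VALUES[predicted]
                for predicted, achieved in results)

mocks = [("C", "B"), ("C", "B"), ("B", "B"), ("C", "C")]
print(class_value_added(mocks))  # 0.5 -- half a grade above ability
```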

A case study

High ability profile with uneven scores across the sections.

The graph above belongs to a student I taught a long time ago – a real example from the field!

Observations:

  • This student’s graph shows very high ability – the overall MidYIS score is 140, well above the national average of 100. 
  • This student’s strengths are particularly weighted in favour of maths and non-verbal i.e. transforming shapes, understanding sequences, identifying patterns etc. 
  • This student’s scores, although high across the board, are not as high in vocabulary.
  • This student’s profile is uneven – there is what I would call a significant difference between the skills score of 118 and the overall MidYIS score of 140. 

Actions:

  • Scores above 130 are in the top 2.5% of the cohort (nationally) and should be considered gifted and talented – this student should be actioned as G&T and have access to suitable learning resources.
  • Perhaps this student could be paired with a less able student in maths lessons and act as a mentor. Perhaps maths challenges and ICT clubs might continue to push and challenge them.
  • The lower skills score suggests this student may struggle to show the same ability in extended writing as they do in maths or science. As an English teacher, I would flag this student up as a potential underachiever, and watch their work carefully to see if the data leads anywhere.
  • If proof reading and accuracy are a target area, use spelling strategies, syntax and punctuation support etc to support this student to get their ideas down on paper with accuracy.
  • Does this student need to be assessed for an educational need such as dyslexia? It might be worth investigating.

What types of monitoring data can be tracked?

In short, assessment scores. 

Class tests, mock examinations, summer or winter internal examinations, vocabulary tests, end of unit tests, etc. All forms of recorded assessment can be tracked for a student, class or cohort. 

While the above is true, there are some criteria that should be met before the data collected can be considered meaningful: the assessment should be both accurate and reliable.

For example, standardised scores can be used to compare like for like, if the scores are standardised around the same value. This is particularly easy and illuminating if the baseline data and the data collected from the school year is all standardised to a mean value of 100. 

Teachers and schools can use a range of different types of data to monitor progress, however the best data to collect should meet some key criteria. In order for data to be trackable and comparable, it should be both accurate and reliable.

Monitoring data should be accurate

Accuracy is an obvious starting point, but how specifically can data be accurately collected? Well, using a past paper is a good starting point, as this will measure pupil progress against an accurate standard. Mock examinations provide a great standard of accuracy as these assessments are timed, fair across all students in the cohort, and marked against genuine mark schemes. While it is difficult to replicate the exact marking standard and scaling of marks done by exam boards, it is nonetheless a fair form of data collection.

At Key Stage 3, your students may not be sitting an external examination, but the same idea of accuracy applies. Christmas or summer assessments which are timed, consistent across the year group, and consistently marked or standardised are a good equivalent.

Monitoring data should be reliable

Reliability across tests and departments

Reliable assessment data should be comparable in difficulty from one test to another. If you want to track a student’s progress at several fixed points through the year, the assessment data collected should be similar in complexity, time and rigour of marking. 

Reliable assessment data should be comparable from one department to another. Moreover, it should reflect the rigour of the subject. For example, if colouring in is assessed at GCSE, by all means assess it at KS3, but if not, you are simply padding out an exam with a low order and largely irrelevant skill.

I have worked in a number of schools and I have observed that English departments tend to be very thorough; assessments often ask students to write extended responses, perhaps in both reading and writing. Mark schemes are applied, sometimes with agonising levels of accuracy, and written assessment scores are often combined with speaking and listening marks collected through the term to reflect the percentages used at GCSE. Good grief! On the other hand, I have invigilated exams from other departments and have noticed some of the following: matching activities at KS3 (match the word to its definition), cloze exercises (fill in the missing word), colouring in activities (colour in the storyboard of a historical or religious story) or a task which is given one hour but all students have finished within 15 minutes. Again, good grief. It seems unreliable to base a student’s grade in history on their ability to colour in. 

In order to combat this, I have seen a few measures put in place. One school that I have worked in took half term assessment scores and standardised them within the school (I don’t know how this was done … some wizardry in Excel!). Reports were sent home as a series of numbers standardised around the average of 100 and parents were given support to interpret the scores. This particular school felt it was a more accurate measure than percentages, and made subjects directly comparable to each other and across the different assessments in the school year.
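That ‘wizardry’ is probably nothing more exotic than a z-score rescaled around 100. A sketch, assuming the school used the common mean-100 / SD-15 convention (the raw marks are invented):

```python
from statistics import mean, stdev

def standardise(raw_scores, target_mean=100, target_sd=15):
    """Rescale raw marks so the cohort has the given mean and SD."""
    m, s = mean(raw_scores), stdev(raw_scores)  # sample standard deviation
    return [target_mean + target_sd * (x - m) / s for x in raw_scores]

marks = [42, 55, 61, 70, 48]
print([round(x) for x in standardise(marks)])  # [82, 100, 108, 120, 90]
```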

In another school that I have worked in, each department had to have their assessment task approved in advance by the SLT to ensure it was rigorous enough. They also insisted on a minimum of two but ideally three different assessment scores used to create an average, rather than relying on one test alone. This was never an issue for the English department, as we assess reading, writing and speaking and listening fairly consistently throughout each term. Another approach is to use a homework task as well as a timed task set in school to create a clearer picture of a student’s progress.

6 benefits of effective data tracking

1. Pitching learning at the right level

Examine your class and ask yourself some of the following questions:

  • What is the average ability of the class? Is it pretty much 100, or above or below? This gives you a starting point to pitch the level of complexity. If your class ability is roughly average, look at a B/C grade in a mark scheme for the year group to see what skills are needed to achieve those grades. There is your starting point for pitching the right level of challenge. If the class average is 110, look at the A grade in the mark scheme … what does that look like, and what skills do you need to embed?  Of course this is just a starting point, but it is useful.
  • What is the highest ability in the class? How can you stretch and challenge that student or group of students? What is an appropriate level of challenge for them? This helps you to differentiate upwards.
  • Likewise, what is the ‘tail’ like in the class? There may not be much of a tail, or there may be one student who is well outside of the pack – what is their ability? What supports can you put in place? Is there a large group who fall outside of the average?

2. Early identification and intervention to support struggling students

You have established your lower ability learners (see number 1, directly above). But are there other students with individual learning needs? Look over the student profile graphs again to identify uneven profiles and significant differences. Are there students to monitor closely, based on the graphs? Looking out for these students from the start of the year and intervening early can have a huge positive impact on their learning.

3. Targeted groupings to support differentiation

The data can help you to differentiate your learners (again, see number 1 above). Consider grouping by category i.e. groupings based on vocabulary scores for reading groups, or groupings based on verbal scores for speaking and listening activities.

Consider seating students in pairs of higher and lower ability to encourage peer teaching. This is, at the very least, a more interesting strategy than alphabetical order, if you don’t know the students well.
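One simple way to generate those pairings is to rank the class by baseline score and pair the top half with the bottom half. A sketch with invented names and scores:

```python
def ability_pairs(students):
    """Pair the top half of the class with the bottom half by score.

    `students` is a list of (name, baseline_score) tuples.
    """
    ranked = sorted(students, key=lambda s: s[1], reverse=True)
    half = len(ranked) // 2
    return list(zip(ranked[:half], ranked[half:]))

cohort = [("Amy", 112), ("Ben", 95), ("Cal", 124), ("Dee", 101)]
for high, low in ability_pairs(cohort):
    print(high[0], "<->", low[0])  # Cal <-> Dee, then Amy <-> Ben
```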


4. Monitoring of progress towards a target

As I have outlined in detail above, monitoring progress towards an agreed target is of huge importance in effective teaching and learning. The data helps you to track this progress, and identify any lack of progress. No surprises.

If you are an English teacher in Northern Ireland, check out this spreadsheet which is pre-formatted to allow you to track and predict grades with a high level of accuracy.

CCEA GCSE English Language results tracker for predicted grades spreadsheet - free teacher tools - ThinkLit
Spreadsheet for teachers to track results in CCEA's GCSE English Language course

5. Self-evaluation of your own teaching methods

Tracking data and measuring value-added gives you a clear and measurable sign of the success of your own teaching and learning. There is obviously a balance to be struck between the students’ effort and the teacher’s skill when attributing the success of the outcomes.

A note of caution: using value-added averages to compare teacher to teacher can be a dangerous game. Yes, there may be good practice to glean from a teacher with a high positive residual, but negative residuals should not be used as a stick to beat teachers with. The data is fascinating for a data geek like me, but it has its limitations and will only tell you part of the story. A well-known phrase springs to mind:

‘lies, damned lies, and statistics’

6. Removing the stress from report writing

Writing end-of-year reports can be an epic effort of commitment, perseverance and creativity. In my experience as a post-primary English teacher, report writing can involve writing hundreds of student reports. This year, I teach 7 different English classes, each with approximately 30 students. My maths tells me that’s 210 individual reports for a core subject that, generally, parents care a huge amount about.

Having a detailed spreadsheet with colour coding already embedded allows me to quickly, easily and exactly sum up the student’s progress. Data driven reports give credibility to the more personal statements I like to also include, if character-count allows. But the sting is taken out of deciding what to say when the data is all there in front of you. 

How to put data tracking into practice in your classroom and across your school

  • Get access to the baseline data. If it doesn’t exist, ask someone high up to consider it.
  • Get familiar with your classes: print the graphs and highlight significant differences and uneven profiles.
  • Identify your ability range, high and low scores, SEN needs and anything else you can glean before you get stuck into teaching.
  • Choose your end point learning goals based on the progress you want to see. Use mark schemes and the data to help you do this.
  • Collate assessment scores in your own spreadsheet throughout the year. Include everything and measure against the baseline.
  • Talk to other teachers – compare students and classes in terms of progress against targets.
  • Involve everyone – teachers, parents, classroom assistants, students … all invested parties should know the target, the starting point and the action steps needed to get there.
  • Evaluate yourself – measure your value-added. What have you done well and where do you need to focus next term/year? Be reflective, and encourage your students to be reflective too. 

This was a mammoth post – well done for getting to the end!

As always, I’d love to hear from you. Does your school use MidYIS, CAT4 or something similar? What best practice could you share? Is data tracking working for your department? Get in touch in the comments below, or drop me a message using the contact form. I’d love to hear from you.
