Before anything else, a caveat. This blog reflects our experience of inspection in early January 2026. Different inspection teams will inevitably bring different emphases and styles, and – crucially – the new framework will ‘bed down’ over time as inspectors and school leaders become more fluent with the toolkit and the practical routines of the process. So this will be a changing picture: it will vary between schools, and it will evolve across the year.
With all that said, there may still be something worthwhile here – particularly for leaders who want to reduce uncertainty, strengthen readiness and avoid a few preventable traps.
1) “Secure fit” is not a slogan—it’s the operating system
For me, the most immediate shift is the move from ‘best fit’ to ‘secure fit’. I had been to all the training on the new framework – but experiencing an inspection under secure-fit judgements was something else. Under the old model, a strong narrative could sometimes carry the day: you could tell a coherent story that joined up the messy reality of improvement into a persuasive whole. In short, I was still trying to lead the inspection with a ‘best fit’ mindset – and this is where school leaders can feel blindsided by the new framework.
When inspection judgements are being made using a secure fit approach, school leaders need a different mindset. A line from the movie A Few Good Men kept coming to mind: “It’s not about what you know, it’s about what you can prove.” It’s a slightly dramatic way of saying something practical: the grade descriptors matter, and the evidence has to land against each bullet point.
That doesn’t mean your narrative is irrelevant. It means narrative is no longer the main vehicle. The main vehicle is evidence – especially evidence of initiatives that are embedded and can be seen in pupils’ experience and outcomes.
2) The pilot didn’t tell the whole story
Pilot inspections took place in a very particular context: schools volunteered, had high certainty, and could prepare with unusual focus. Now, inspections are returning through the Monday morning call, and inspectors are meeting schools in their natural habitat: busy, imperfect, mid-journey.
That has a big consequence. Schools that were previously ‘good’ under best fit may find secure fit uncomfortable when any element of a descriptor is not securely evidenced. Even where leaders feel the lived experience of the school is strong, the inspection conversation can tighten around ‘show me’ rather than ‘tell me’.
An example for us came when we were working with the inspection team on the judgement for ‘Behaviour and Attendance’. Behaviour across the school is very good – both learning behaviours in the classroom and behaviour during unstructured time in the playground. In addition, our attendance this year is much improved and in line with national figures. Yet all of this strength can be lost because of the bullet point in the grade descriptors on the three-year attendance trend, which for us sits below the national average. No matter how strong the evidence is against every other bullet point, not being able to evidence this single one can drag down the overall judgement for the whole of ‘Behaviour and Attendance’.
It is this element of the ‘secure fit’ approach to making judgements that can make the process feel incredibly unfair to school leaders.
3) The SEF felt less central; the Toolkit felt essential
In our experience, the SEF was not the engine of inspection. It wasn’t requested in advance, and even though I sent it anyway, it didn’t feel like the reference point throughout the two days.
The School Inspection Toolkit – and the evidence-gathering guidance that sits with it – was the more dominant force. Leaders who know the Toolkit well have a genuine advantage, because they can anticipate the lines of enquiry and ‘place’ evidence where it matters most.
4) A practical pivot: SEF → Inspection Preparation Document
If I were redesigning my preparation for inspection knowing what I know now, I’d move from a traditional SEF to something closer to an Inspection Preparation Document (IPD), structured explicitly around the grade criteria.
For each judgement area, the IPD would set out:
- each bullet point for the grade you are aiming for (‘expected’ or ‘strong’, for example)
- the evidence that demonstrates it
- the impact you expect to see as a result
- where that evidence sits (document / system / location)
- who inspectors should speak to (and why)
- any known gaps and what you’re doing about them
This isn’t ‘paperwork for Ofsted’. It’s a way of making your evaluation testable under secure fit – and, frankly, a way to lower cognitive load when you’re juggling multiple inspectors and multiple threads at once.
5) The SEF ‘story’ hasn’t gone away—it’s moved
If leaders want to influence how the school’s journey is understood, the key moments are:
- the 90-minute phone call, and
- the first learning walk (especially with the lead inspector).
That’s where context, improvement journey, current barriers, and what you’re building towards can be framed clearly. If those moments are under-used, later evidence can be interpreted in a harsher light because the inspection team hasn’t properly absorbed the ‘why’ behind what they are seeing.
6) The evidence is in the children
One of the biggest practical changes is where inspectors gather evidence about the quality of education. Direct observation of teaching now seems to be minimal. Instead, inspectors often take small groups of pupils out of class to speak with them in a quieter space, away from the performative nature of the classroom. In these small groups, they question pupils in depth about what they know, remember and can do as a result of what they have been taught over time. They may probe fluency and recall, ask pupils to explain methods, retrieve prior learning and apply skills to unfamiliar questions, then triangulate this with pupils’ books and leaders’ explanation of curriculum sequencing.
This is, in many ways, a welcome recalibration. A one-off “great lesson” is not a reliable proxy for sustained teaching quality. What pupils can remember, explain and apply over time is a far better indicator of learning.
The implication is clear: inspection readiness is less about being watched and more about whether pupils – especially the most vulnerable – can demonstrate secure, cumulative learning and talk about it with confidence.
Inspectors also probed how the curriculum is adapted for pupils with SEND, and how pupils with high levels of need are meaningfully included in classroom learning (for example through well-planned workstations, tailored resources, and routines that enable participation rather than separation). Alongside this, inspectors may remove larger pupil groups for structured discussions about safeguarding, culture, and whether pupils feel safe and respected in school.
7) Capacity and cohesion are not ‘nice to have’
For a two-form entry primary, having four inspectors on day one creates intense pressure. The practical expectation is that a senior leader can work alongside each inspector for much of the day.
In schools that are part of a trust, this can be shared – perhaps with members of the central team stepping in to pick up the slack. In stand-alone schools, it can stretch leaders to the limit – logistically and strategically. This is where SLT cohesion becomes protective. A calm, aligned team that can consistently ‘place’ evidence prevents inspection from drifting toward what is easiest to question rather than what is most secure.
8) External eyes can keep you steady
Staying razor-sharp on your narrative and evidence base over the two days is a challenge for even the most experienced leader. If you can bring in support from a trust leader, improvement partner or trusted colleague during inspection, it helps – not for ‘spin’, but for clarity. A fresh pair of eyes can spot gaps early, sharpen your narrative, and help you re-balance discussions when they tilt away from what you know is true of the school.
Practical things that mattered more than you’d think
A few specifics carried disproportionate weight in our experience:
- A demographic class list on day one (FSM/Ever6/SEN/EHCP/social care/attendance patterns) so case sampling can happen quickly and accurately.
- Phonics (Year 1) appearing to hold heavy influence within achievement, and by extension shaping perceptions of teaching and curriculum.
- Evidence of evaluation, not just activity: attendance by group, behaviour by group, enrichment participation by group, and a Pupil Premium approach that shows what worked, what didn’t, what was stopped, and what has changed as a result.
A final warning: recency bias
Under pressure, leaders tend to talk about what they’ve done most recently. That’s human. It’s also risky. If recent improvement is emphasised without the longer improvement timeline, inspectors may reasonably ask why action wasn’t taken earlier.
The fix is simple: tell improvement as a timeline, not a snapshot. Revisit previous plans, actions and adaptations so you can show the foundations that made current gains possible. Often, what looks “rapid” is only possible because of quiet, persistent work over years.
None of this is a definitive guide, and the framework will continue to settle. But if there’s one overarching lesson, it’s this: secure fit rewards schools whose evaluation is testable, whose evidence is retrievable under pressure, and whose pupils can demonstrate the learning that teaching has built over time.