The questions training and skills CTOs aren't asking — but should be

On proximity, platform risk, and why “we have a team” is sometimes the least useful thing you can say.

Let me start with something that probably sounds counterintuitive.

Having an in-house development team can sometimes make it harder to move your platform forward. Not because the team isn’t capable. Not because they’re not working hard. Usually, it’s because of what happens when people spend long enough building and maintaining the same thing.

They get close to it. And close changes how you see problems.

I’ve spent the last six years building software for training and skills organisations — workforce development platforms, compliance systems, CPD portals, awarding body integrations. The organisations I work with aren’t really “EdTech” in the classroom software sense. More often, they’re organisations like ECITB, supporting tens of thousands of workers across the energy industry, or Team Teach, delivering behavioural management training across education and care settings. Complex, regulated, operationally messy environments where the technology matters more than most people realise.

What I've noticed, consistently, is this: the organisations that are most confident about their technology tend to be the ones who've thought least critically about it. And the ones with the most to gain from external perspective are often the ones least likely to look for it.

This piece is for the CTOs and technical leads in that space. It’s just a set of questions I think leaders in the sector should be asking themselves.

The proximity problem

There’s a concept in product development called the “curse of knowledge.” Once you understand something deeply — how it evolved, why decisions were made, where the compromises sit — it becomes very difficult to see it the way someone new would.

The workarounds stop feeling temporary. The friction becomes normal. Architectural decisions made under pressure three years ago quietly turn into permanent constraints because everyone has adapted around them.

Your team built the platform. They know where the complexity lives. They remember which integrations became more important than originally intended, which modules are carrying more operational weight than they should, and which parts of the codebase nobody really wants to touch unless they absolutely have to.

That knowledge is incredibly valuable. But it also creates blind spots.

The people closest to the system are not always the best placed to identify where it’s becoming difficult to evolve, or where risk is quietly accumulating underneath the surface.

Technical debt rarely announces itself dramatically. It builds slowly in the space between “good enough for now” and “we’ll deal with that later.”

Forrester research from 2025 found that more than half of UK technology decision-makers expected their technical debt to reach moderate or high levels, rising to three-quarters by 2026. That probably doesn’t surprise most CTOs. Many teams already feel this instinctively. What’s more interesting is what sits underneath it: teams that are too close to the problem to explain it clearly to the wider business, and business leaders who don’t quite have the language to challenge it properly.

So the platform keeps running. The team keeps delivering. But over time, the gap between what the business needs and what the platform can realistically support gets wider, usually without anyone noticing immediately.

The three questions

I mentioned three questions in the newsletter. I want to go deeper on each one here, because the surface question is rarely the interesting one.

1. When did your team last ship something genuinely new?

The easy version of this question is about output. The harder version is about what type of output.

Most mature product teams in the training and skills sector spend the majority of their time maintaining existing systems. More integrations, more edge cases, more client-specific requirements, more operational complexity.

It’s not unusual for teams to be spending 70–80% of their time simply keeping the platform stable and moving.

That isn’t bad management. It’s what naturally happens when the same team is responsible for both running a platform and evolving it.

Eventually, the operational work starts crowding out the strategic work.

The better question isn’t just “when did we last release something new?” It’s this: if we wanted to build something genuinely meaningful in the next twelve months — a new learner experience, AI-assisted recommendations, employer-facing analytics, real-time competency tracking — what would actually need to change internally to make that possible?

What gets deprioritised? What stops happening? Where does the capacity come from?

If those questions are difficult to answer, that’s useful information in itself.

And the AI shift makes this more urgent than it used to be. Learner expectations are changing quickly. Employers are starting to expect more personalised pathways and better visibility into skills development.

Awarding bodies are beginning to explore AI-assisted assessment and verification models. Historically, the training sector hasn’t moved especially quickly on platform innovation, which means the organisations willing to move now have an opportunity to create real separation.

But innovation requires capacity, and maintenance-heavy teams rarely have much spare.

2. If your three most senior developers left tomorrow, what would actually happen?

This is really a question about dependency.

Most technical leaders will say their systems are documented, their repositories are organised, and their processes are mature.

Often, that’s true. But documentation usually explains what the code does. It rarely captures why it ended up that way.

The workaround created because of a strange third-party API behaviour. The client requirement that never formally made it into the specification but shaped the architecture anyway. The historical decisions everyone still works around because changing them now would be risky or expensive.

That context lives in people.

And when people leave — through burnout, redundancy, better opportunities, or simply wanting something different — that knowledge leaves with them.

I’ve seen platforms become heavily dependent on one or two individuals without anybody intending for that to happen. Not because they’re gatekeeping, but because complexity naturally centralises over time. The “bus factor” — admittedly a fairly grim term — becomes lower than leadership realises.

The follow-on question matters just as much: if a capable developer joined your team next Monday, how long would it take before they could confidently contribute to the core platform?

Not carefully making supervised changes. Actually contributing.

If the answer is measured in months rather than weeks, the complexity is probably starting to work against the business rather than for it.

This matters beyond resilience planning. Platforms that are difficult to onboard into are also difficult to scale around. Every external contractor, agency partner, or permanent hire carries an onboarding cost attached to the platform itself.

3. Is your roadmap being driven by business priorities, or by available capacity?

This is the one that tends to hit home most.

There's a version of roadmap planning that looks like strategy but is actually triage.

You look at what the business wants to achieve. Then you look at the team size, the delivery pressure, the operational workload, and the technical constraints. And slowly, the ambitious work starts slipping further into the future.

The employer-facing functionality that sales has been talking about for over a year. The accessibility improvements everyone agrees are important but nobody formally prioritises. The integrations that would unlock meaningful client value but require dedicated focus time that never materialises. The AI initiatives that permanently sit “in discovery.”

The issue isn’t that these things exist on the roadmap. It’s that the roadmap is increasingly being shaped by current delivery capacity rather than long-term business direction.

That’s not a criticism of the team. Most teams are already operating at capacity. The problem is structural.

A roadmap driven entirely by delivery capacity eventually stops functioning as a roadmap. It becomes a queue of compromises.

The more useful question is: what does the platform actually need to become over the next three to five years, and what would need to change internally for that to happen?

That’s a different conversation entirely. And usually, it requires some level of outside perspective to answer honestly.

What “good” actually looks like

I want to be careful here not to make this into something it isn’t. The answer to all of the above isn’t “hire an agency.” Sometimes the answer is to restructure how the existing team works. Sometimes it’s to bring in a specific technical specialism permanently. Sometimes it’s to reduce scope.

But the organisations I've seen handle this well share a few things in common.

First, they’re honest about what their teams are optimised for. A team responsible for running a large compliance-heavy platform is probably excellent at operational stability, deep domain knowledge, and navigating regulated environments.

That same team may be less suited to rapid experimentation or building entirely new product directions alongside day-to-day operational responsibility.

That isn’t criticism. It’s clarity.

Second, they treat external support as a multiplier rather than a replacement. The strongest partnerships I’ve seen happen when internal and external teams are approaching the same problem from different vantage points. One side brings organisational context and domain expertise. The other brings distance, fresh patterns, and focused delivery capacity.

And importantly, they don’t wait for a crisis before acting.

The organisations that struggle most are usually the ones that bring in outside support after something has already broken. At that point, timelines are tighter, pressure is higher, and good decision-making becomes much harder.

A final thought

The training and skills sector is at an interesting inflection point.

AI is changing learner expectations faster than most platforms are evolving. Compliance and accessibility requirements are getting more demanding. The employers funding the training are getting more sophisticated about what they want to see from the technology.

The organisations that will be well-positioned three years from now probably won’t be the ones with the biggest teams or the longest feature lists.

They’ll be the ones with the clearest understanding of what their platform is for, what their internal team is genuinely best at, and where outside perspective could help them move faster or think differently.

That kind of clarity is difficult to achieve entirely from inside the system itself.

Sometimes you need enough distance to see the shape of the thing properly.

Let's chat

We'd love to learn about how we can help your organisation get results.
