AI, security and the next abstraction layer in software development

Engineering

Read in 5 minutes

There was a time when deploying software meant racking servers, configuring networks and understanding IT infrastructure. Then came cloud computing and DevOps. Suddenly, developers who didn’t deeply understand infrastructure could deploy production systems safely.

This shift was unsettling for some parts of the enterprise. Giving more people the power to deploy software also meant giving them the power to misconfigure infrastructure, expose secrets and accidentally create new attack surfaces.

The industry’s response, however, wasn’t to slow down innovation. Instead, we built opinionated platforms, guardrails and golden paths. Infrastructure-as-code, secure defaults, automated pipelines and platform engineering made it possible to scale safely. In effect, we solved the problem by making the safe way the easiest way.

The next generation of builders

With AI, tools are emerging that allow people to build software without deeply understanding how code works – or, in some cases, without understanding it at all. Entire applications can be scaffolded, APIs connected and infrastructure provisioned through AI assistance.

This feels like the next evolution of abstraction in software development. And it raises an obvious question: what happens when everyone becomes a builder?

We’ve seen this pattern before. When cloud computing emerged, many worried that giving developers infrastructure access would create chaos. In practice, the opposite happened. With the right platforms and guardrails, organisations became more secure, not less.

The same dynamic is likely to play out again. The answer won’t be restriction, but better platforms – ones that embed security by design and guide users toward safe defaults. In other words, golden paths for AI-native development.

Old security problems wearing a new hat

Despite the hype surrounding AI security, much of the underlying challenge is familiar. At its core, the game hasn’t changed. The principles that have always defined good security still apply – access control, secret management, reducing the attack surface, vulnerability management and the principle of least privilege.

Same problems, new hat.

What AI changes is the scale and complexity of the system. More people are building software. More APIs are being created. More integrations are being stitched together. More systems are being generated automatically. The players and the volume are new; the underlying problem is not.

This has implications beyond engineering teams. Historically, security awareness training has focused on passwords, phishing and endpoint guidance. In the future, a much broader group of people may need to understand basic security engineering principles – how to manage an API key safely, what least privilege means in practice and how integrations expand the threat surface. If more people are building, more people need a baseline understanding of how to build safely.
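To make that baseline concrete, here is a minimal sketch of one of the habits mentioned above – handling an API key safely. The variable name `SERVICE_API_KEY` and the helper are hypothetical, not part of any particular platform; the point is simply that a secret should come from the environment at runtime, never be hardcoded into source that AI tooling might scaffold, commit or log.

```python
import os

def load_api_key(var_name: str = "SERVICE_API_KEY") -> str:
    """Fetch an API key from the environment rather than hardcoding it.

    Failing fast when the key is absent avoids running a half-configured
    service, and keeping the key out of source code keeps it out of
    commit history, generated scaffolding and logs.
    """
    key = os.environ.get(var_name)
    if not key:
        # Deliberately do not print or log the surrounding environment here.
        raise RuntimeError(f"{var_name} is not set; refusing to start")
    return key
```

The same habit pairs naturally with least privilege: the key that is loaded should be scoped to the narrowest set of permissions the integration actually needs.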

Security operations in the age of AI

If AI is empowering builders, it is also lowering the barrier for attackers. Even relatively unsophisticated actors now have access to tools that can accelerate parts of the attack lifecycle. For more advanced adversaries, AI reduces cognitive load by handling repetitive tasks such as enumeration, analysis and scripting, allowing them to focus on higher-impact techniques.

We are starting to see signals of this shift. Vulnerability discovery appears to be accelerating, with increased disclosures across widely used libraries and packages. AI-assisted security research is likely contributing to this trend, and as the tooling improves, new methods for identifying weaknesses will continue to emerge.

For defenders, this suggests the security arms race is entering another phase.

Security operations centres may also evolve in response. Much of today’s detection engineering relies on rule-based systems and queries written by analysts. AI could enable a move toward more flexible approaches, capable of identifying unusual behaviours, correlations and anomalies across large volumes of signals. As signal volume continues to grow, tools that can help teams prioritise what matters will become increasingly important.
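As a toy illustration of that shift – and nothing more than a sketch – the difference between a hand-written rule ("alert if count > 100") and a behaviour-based approach can be shown with a simple statistical baseline. The function below is illustrative only and assumes nothing about any real detection stack: it learns a baseline from the data itself and flags values that deviate strongly from it.

```python
from statistics import mean, stdev

def flag_anomalies(counts: list[int], threshold: float = 2.5) -> list[int]:
    """Return the indexes of event counts that deviate strongly from the baseline.

    Unlike a fixed rule, the baseline (mean and standard deviation) is
    derived from the observed data, so the same code adapts to different
    users or systems without an analyst rewriting the threshold.
    """
    if len(counts) < 2:
        return []  # not enough data to establish a baseline
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly uniform signal, nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]
```

Real anomaly detection is far richer than a z-score, but the design choice it illustrates is the one described above: detection logic that adapts to the signal rather than encoding an analyst's assumption in advance.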

Building again

On a more personal note, the rise of AI-assisted development has had an unexpected effect. As a senior manager, I spend far less time building things myself than I once did. My role has shifted toward creating environments where teams can succeed rather than writing code directly.

AI-assisted tooling has started to change that. In the gaps between meetings, I can prototype ideas, experiment and build small tools that solve real problems.

For someone who started their career building products, that is a meaningful shift – a return to making things that are immediately useful.

We’ve tackled this kind of problem before

The arrival of AI in software development feels disruptive – and in many ways it is. But it also follows a familiar pattern.

Each new abstraction layer expands the pool of people who can build technology. That expansion introduces new risks. And each time, the industry responds in broadly the same way: by building better platforms, stronger guardrails and safer defaults.

AI development is unlikely to be any different.

For regulated firms, the challenge isn’t adopting AI – it’s doing so without losing control over how systems are built, connected and exposed.

That question of control doesn’t stop at infrastructure or tooling. It extends to data itself – who owns it, who can use it and how it moves through increasingly complex systems. It’s something we’ve been exploring more broadly in our recent paper, ‘Whose data is it anyway?’

As AI lowers the barrier to building, the organisations that succeed won’t be those that move fastest – but those that retain control over what gets built, how it behaves and where their data ultimately ends up.

About the author

Seb Coles is a director of engineering at Seccl, where he is responsible for IT operations, information security and cloud infrastructure.
