The AI Tsunami Is Here. Your Data Is the Seawall.
Everyone can now build. Not everyone can be trusted with what you hold. A provocation for leaders thinking about what comes after deployment.
Deployment used to be the hard part. Months of infrastructure work, environment configuration, release management ceremonies. Today, a designer exports a zip file from Figma and a junior engineer pushes it live before lunch. The tools have caught up. The constraint has shifted.
So what is the new hard part?
It is not the code. Code is increasingly cheap. AI can write it, refactor it, test it, document it. The new hard part is knowing — precisely and continuously — who is touching your data, what they are taking, and whether they should be.
The asymmetry nobody is talking about
Here is something worth sitting with: any team in the world can stand up an LLM agent today. The capability is shared. Open. Cheap. But your operational data — the cases your officers work, the disbursements your systems track, the assessments your staff make — that data belongs only to you. It is your moat. And right now, the wave of new tooling is eroding the fences around it.
We have seen this play out quietly in supply chain incidents across both public and private sectors. The breach is rarely dramatic. It is a misconfigured API. An overly broad service credential. A script written to "just pull a quick report" that ran longer than intended and extracted more than it should have. The data leaves slowly, then all at once.
Differentiated quality is not laziness — it is strategy
The instinct is to apply uniform rigour across everything. But that is not how engineering resources work in practice, and AI-accelerated delivery makes the tension more acute.
A smarter posture: treat quality as a spectrum, not a binary. The frontend is a rendering layer. It will be rewritten, probably more than once. Go easy. The backend is the contract between your systems and the world. Treat it like infrastructure. The data layer is the record of everything your organisation has ever done. Treat it like critical national infrastructure.
Frontend — accept churn. Deploy fast, iterate fast. Figma to production in a day is a feature, not a risk.
Backend & APIs — apply rigour. Code review. Scope controls. Change management. Every endpoint is a door.
Data layer — treat as sovereign. Schema changes require governance. Access changes require audit trails. Volume anomalies require investigation.
The questions your architecture should be able to answer
Here is a simple test. Can your systems, right now, answer the following?
Who accessed what data in the last 30 days — not which system, but which person, via which credential, touching which records? What was the volume? Was that normal for that person's role and workload? If an officer's behaviour changed — they started pulling significantly more records, or records outside their usual scope — would anyone know?
If the answer is "we would need to dig through logs manually," that is the gap. And as AI tools make it easier for officers to query and extract data with natural language, that gap becomes load-bearing.
Three capabilities worth building toward
None of these require buying something new. They require making a deliberate architectural choice to build them in.
First: scoped access tokens with an audit trail. If you want officers and systems to automate their own workflows — and you should, because the productivity gains are real — you need a way to issue credentials that are scoped to specific data, specific volumes, and specific time windows. A personal access token with no expiry and no scope is not automation infrastructure; it is an open door.
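What "scoped with an expiry" means in practice can be sketched in a few lines. This is an illustration, not a recommendation of a specific token format: a signed claims payload carrying the datasets the holder may touch, a volume ceiling, and a hard expiry, checked before any request is honoured. All names and claim fields here are assumptions.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-a-real-secret"  # assumption: symmetric key held by the issuer

def issue_token(subject, datasets, max_records, ttl_seconds):
    """Issue a credential scoped to specific data, a volume cap, and a time window."""
    claims = {
        "sub": subject,
        "datasets": sorted(datasets),           # which data the holder may touch
        "max_records": max_records,             # volume ceiling per use
        "exp": int(time.time()) + ttl_seconds,  # hard expiry, never optional
    }
    payload = json.dumps(claims, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload, sig

def verify_token(payload, sig, dataset_requested):
    """Check signature, expiry, and scope before honouring a request."""
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False, "bad signature"
    claims = json.loads(payload)
    if time.time() > claims["exp"]:
        return False, "expired"
    if dataset_requested not in claims["datasets"]:
        return False, "out of scope"
    return True, claims

payload, sig = issue_token("officer_a", ["case_records"], max_records=500, ttl_seconds=3600)
ok, claims = verify_token(payload, sig, "case_records")
```

Every issue and verify call is also a natural place to write an audit record, which is what separates this from the open-door token it replaces.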
Second: behavioural observability at the data layer. You already have API logs. The question is whether you are reading them. Mirroring egress traffic — what data is leaving your services and to whom — gives you the raw material to build a baseline of normal access. Deviations from that baseline are your early warning system. This does not require a graph database or a machine learning model to start. It requires someone to decide it matters.
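To show how little machinery a first baseline needs, here is a sketch using nothing but the standard library: a mean and spread of daily egress volumes per credential, and a flag for days that sit far above it. The history values and the three-deviation threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def egress_baseline(daily_volumes):
    """Baseline of normal egress for one credential: mean and spread of daily counts."""
    return mean(daily_volumes), stdev(daily_volumes)

def is_anomalous(today, daily_volumes, threshold=3.0):
    """Flag a day whose volume sits more than `threshold` deviations above baseline."""
    mu, sigma = egress_baseline(daily_volumes)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

# Assumed history: records pulled per day by one credential over two weeks.
history = [40, 55, 48, 52, 45, 60, 50, 47, 53, 49, 58, 44, 51, 46]
print(is_anomalous(300, history))  # a sudden bulk pull stands out
print(is_anomalous(55, history))   # an ordinary day does not
```

A real deployment would baseline per role and per workload rather than per credential alone, but the shape of the check is the same: deviation from an observed normal, not a hand-written rule.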
Third: a triage model for your systems. Not every system needs the same level of access scrutiny. A system that handles aggregated, anonymised data is lower risk than one that handles individual case records. Map your estate. Know which systems carry the highest sensitivity. Apply proportionate controls. This is not new thinking — it is risk management — but AI-accelerated deployment means the map needs to be refreshed more frequently than before.
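The map can start as something this plain. A hypothetical estate, tiered by data sensitivity, with the control set each tier demands; the system names, tier names, and controls below are all illustrative, not a prescribed taxonomy.

```python
# Proportionate controls per sensitivity tier (illustrative, not prescriptive).
CONTROLS_BY_TIER = {
    "aggregated": ["standard code review"],
    "personal": ["scoped tokens", "access audit trail"],
    "case_records": ["scoped tokens", "access audit trail",
                     "egress monitoring", "governed schema changes"],
}

# The estate map itself: each system assigned to a tier. This is the
# artefact that AI-accelerated deployment forces you to refresh often.
ESTATE = {
    "public-dashboard": "aggregated",
    "staff-directory": "personal",
    "case-management": "case_records",
}

def required_controls(system):
    """Look up the control set a system's sensitivity tier demands."""
    return CONTROLS_BY_TIER[ESTATE[system]]

print(required_controls("case-management"))
```

Even a dictionary in a repository beats an out-of-date spreadsheet: when a new system ships, adding it to the map becomes part of the deployment, not a separate exercise.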
The sustainability question
Managed agent platforms — and tools like them — promise to take development from prototype to production in days. That is probably true. What they do not promise is that what you built will be governable, auditable, or operationally sustainable six months later.
Ten times faster development means ten times more things running in production. Each one is a surface. Each one needs to be monitored, updated, and eventually decommissioned. The wave does not crest and recede. It raises the waterline permanently.
The organisations that will navigate this well are not the ones that build the fastest. They are the ones that build a governance layer fast enough to keep pace with what they are deploying.
Four questions worth taking into your next architecture review:
If we issued personal access tokens to every officer today, how would we know what they accessed and whether they should have?
Do we have a behavioural baseline for data access — and would we know if something deviated from it?
Which of our systems carries the highest data sensitivity, and is its access model proportionate to that risk?
When we accelerate deployment with AI tooling, are we accelerating the governance layer at the same rate?
The AI wave is real. The productivity gains are real. But every wave that raises capability also raises exposure. The seawall is not the firewall. It is not the penetration test. It is knowing, with confidence, what your data is doing and who is doing it with.
That is an engineering problem. It is also a leadership decision.