Chua Jie Sheng

The Government's AI Opportunity: Hire Young, Skill Up Everyone

The private sector is pulling back on fresh graduate hiring. Budgets are tighter, AI is doing more, and companies are being cautious. For most organisations, this is a problem. For the government, it is an opening.

A public sector perspective  ·  April 2026

The public sector has a structural workforce challenge that isn't going away: the average age of public servants is creeping upward, and AI knowledge is not yet evenly distributed. Many experienced officers are excellent at what they do — but they were never trained to think in terms of models, prompts, embeddings, or agentic workflows. That gap will compound if left unaddressed.

Here is what the government can do differently, right now.

Bring in Fresh Graduates as AI Carriers

Universities today are producing graduates who grew up with generative AI as a daily tool. They are not intimidated by it. Many of them built things with it during their studies. The private sector's reluctance to hire them is the government's gain.

Bringing in fresh graduates — not just as junior executors, but deliberately as AI knowledge carriers — serves two purposes. It lowers the average workforce age, which has long-term sustainability benefits. And it seeds AI fluency into teams that currently rely on top-down training programmes alone to build it. Bottom-up learning, through daily peer interaction, is often more durable.

This is not about replacing experienced officers. It is about pairing complementary strengths: domain knowledge and institutional memory on one side, AI-native instincts and technical familiarity on the other.

Treat Existing Public Servants as the Asset They Are

The risk with any "new talent" push is that it inadvertently signals to existing staff that they are being leapfrogged. That would be a mistake. Experienced public servants understand policy intent, stakeholder complexity, and operational constraints in ways that no fresh hire can replicate in the short term.

The goal should be to invest in them — deliberately and structurally. Not generic AI awareness workshops, but applied, role-relevant upskilling. A case manager should learn how AI can summarise case notes and flag anomalies. A procurement officer should understand how AI can assist in vendor evaluation. The learning should be tied to the actual job.

Where fresh graduates come in, pair them with seniors. Create conditions where knowledge flows in both directions.

Inject AI Capability Where It Sits, Not Just Where It's Asked For

One of the patterns that holds organisations back is treating AI as an IT problem. AI capability gets pooled into a central team, which then fields requests from the rest of the organisation. This creates a bottleneck — and more importantly, it creates dependency.

The stronger model is to inject AI knowledge into existing functional teams — policy, operations, communications, HR — so that each team can begin to own their own AI use cases. Central AI teams then shift to an enabling and governance role, not a delivery role.

This is harder. It requires more investment in people. But it is the only way to scale AI capability across a large organisation sustainably.

What the Centre Should Do: Clear the Runway, Not Just Open the Door

Central teams have a critical role — but the way enablement typically works in government is itself part of the problem.

In most structures, there are at least two hops between a central directive and the person who needs to act on it. From headquarters to the ministry or statutory board CIO. Then from there to the division or operational team. At each hop, there is interpretation, reprioritisation, and a fresh set of gatekeeping conditions. By the time enablement reaches the front line, it has often been diluted, delayed, or quietly deprioritised.

But here is what often gets missed: the bottleneck starts even before the AI tool. It starts with the device.

If an officer needs to seek approval for the device before they can access the tool, and seek approval for the tool before they can attempt the use case, and then route that use case through the division before it reaches the team — the white lane, the designated space for experimentation, was never really open. It was just a promise, several approvals away.

This is why the white lane has to be a blanket grant from the outset. The device access, the tool access, the sandboxed environment — all of it granted upfront, as a standing permission, not triggered by a request. The centre issues the clearance once. Divisions inherit it automatically. No officer should have to justify their way into an experimentation space that was designed for them in the first place.

Instead of "we're still waiting for approval to start," the question becomes "what did you try this quarter, and what did you learn?"

That shift — from permission-seeking to outcome-reporting — is where the culture actually begins to move. The centre's job is not to approve each attempt. It is to clear the runway so that attempts can happen at all.

The Window Is Now

The private sector will eventually correct course and compete aggressively for AI-capable graduates again. The government has a window — perhaps 18 to 24 months — where it can move deliberately and build something the private sector cannot easily replicate: a workforce that is experienced, mission-driven, and genuinely AI-capable at every level.

That combination is rare. It is worth building.