
AI Isn’t Coming for Jobs. It’s Coming for Job Descriptions.


When a new technology shows up, the first reaction is usually fear dressed up as certainty.
“It’s going to replace people.”
“It’s going to break everything.”
“It’s not realistic for government.”

We’ve heard versions of that story before, and not just once.

Think about the ATM.
When ATMs first started showing up in banks, it wasn’t a small concern. People genuinely thought the bank teller was about to disappear. Why wouldn’t they? If a machine can handle withdrawals and deposits, what’s left for a person behind the counter?

But then the world adjusted.
The routine cash-handling work got faster. Lines got shorter. Banks could operate differently. Tellers didn’t vanish. They shifted toward more complex work: helping customers solve problems, opening accounts, dealing with exceptions, and handling the issues a machine can’t. In a lot of places, banks even opened more branches. The job didn’t go away. The job changed.

And it’s the same pattern we’ve seen over and over again.
People once worried automatic elevators would erase the role of elevator operators, whose whole job was manually moving people floor to floor. Instead, automation made buildings easier to navigate and shifted work toward maintenance, safety, and building operations.

People feared computers would eliminate the human “calculators” who performed thousands of manual computations by hand. Instead, computers freed people to focus on analysis, modeling, and strategy—work that simply wasn’t possible at scale before.

People thought automated switching would replace switchboard operators, who used to connect every single call by hand. Instead, automation made communication faster and more reliable, and the work evolved into customer support, network engineering, and telecom operations.

People assumed email would wipe out the need for mail clerks, whose job was to physically sort, process, and deliver messages. Instead, digital communication sped everything up and expanded roles in IT systems, cybersecurity, and digital operations.

And when rideshare platforms emerged, people expected traditional drivers to disappear. Instead, transportation got easier and more efficient for riders, and new roles emerged in safety, logistics, customer support, and platform management.

The work didn’t disappear.
Technology just reshaped it—and in almost every case, it led to better service, more efficiency, and more opportunity for people to focus on the parts of the job that actually require people.

That’s the right lens for AI in the public sector, too. The fear sounds familiar: that the work will disappear or the system won’t be ready.

But the real question isn’t whether AI replaces people. It’s which parts of the job it can relieve and what conditions must be in place for that relief to make the work better, safer, and more reliable.

The hard part isn’t the tool. It’s what the tool needs.

ATMs didn’t work because they were flashy. They worked because banking had the basics in place: standardized accounts, clear rules, reliable systems of record, and controls to prevent fraud.

AI works the same way. If the underlying information is inconsistent, duplicated, outdated, or scattered across systems that don’t agree, AI doesn’t quietly struggle. It confidently produces messy outputs. And in government, messy outputs don’t just frustrate teams. They create risk.

That’s why AI quickly becomes a data issue. Before agencies can get real value from automation, copilots, or “ask a question and get an answer” experiences, they need a way to make their data usable across the organization.

Not “usable” in the abstract. Usable in the practical sense that matters day to day:

  • Different departments can look at the same topic and still land on the same answer
  • People don’t spend half their week reconciling conflicting reports
  • Leaders can ask a question and trust the response enough to act on it
  • Teams can apply consistent rules without rebuilding everything from scratch

That’s what a data intelligence approach really enables: a shared, governed view of the information that staff, systems, and AI can rely on.

The quiet truth: AI changes job descriptions before it changes org charts

Most public sector work is a mix of decision-making and repetition.

Decision-making is where public servants add unique value. It’s why you can’t “automate government” the way people pretend you can.

Repetition is what quietly eats capacity: searching for the right file, re-keying information, reconciling three versions of the same record, summarizing notes, routing requests, pulling data for a report that’s already outdated by the time it lands.

This is where AI makes the biggest difference. Not by replacing people, but by removing the work that keeps people from doing the work.

It can help teams:

  • Find the right information faster, even when it lives across multiple systems
  • Summarize long records and histories without losing context
  • Flag mismatches and duplicates before they turn into downstream errors
  • Reduce the “copy, paste, re-key, reconcile” cycle that burns time and patience

And when the underlying data is organized and governed well, those benefits compound. When it isn’t, they collapse.
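To make the "flag mismatches and duplicates" idea concrete, here is a minimal sketch of the kind of cross-system check involved. It assumes each system exports constituent records as dictionaries with `system`, `name`, `dob`, and `address` fields; all field names and data are illustrative, not from any real agency system.

```python
from collections import defaultdict

def normalize(name: str) -> str:
    """Normalize a name for matching: lowercase, drop punctuation, collapse spaces."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

def flag_mismatches(records):
    """Group records by (normalized name, date of birth) and flag any group
    where two systems hold conflicting addresses for the same person."""
    groups = defaultdict(list)
    for rec in records:
        key = (normalize(rec["name"]), rec["dob"])
        groups[key].append(rec)

    flags = []
    for key, recs in groups.items():
        addresses = {r["address"] for r in recs}
        if len(recs) > 1 and len(addresses) > 1:
            flags.append({
                "person": key,
                "systems": [r["system"] for r in recs],
                "addresses": sorted(addresses),
            })
    return flags

# Example: the same person appears in two systems with conflicting addresses.
records = [
    {"system": "permits",  "name": "Jane Q. Doe", "dob": "1980-04-02", "address": "12 Elm St"},
    {"system": "benefits", "name": "jane q doe",  "dob": "1980-04-02", "address": "12 Elm Street"},
]
print(flag_mismatches(records))
```

Real matching is messier than this (nicknames, typos, shared names), which is exactly why the surrounding governance — clear ownership, consistent definitions, a shared system of record — matters more than the matching code itself.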

Why this feels different in SLED

AI outputs in government don’t live in a vacuum. They connect directly to policy, eligibility rules, compliance, procurement requirements, public records, and constituent trust. Even when AI is only making a recommendation, the agency still has to stand behind the result and explain how it was reached.

That’s why the most practical use of AI in SLED usually isn’t handing decisions over to a system. It’s using AI to support the people responsible for the outcome by speeding up the parts of the job that slow everything down: finding the right information, summarizing long records, flagging inconsistencies, and applying rules more consistently. Humans stay accountable, but they’re not stuck doing all the manual work around the decision.

This is why data quality and governance become workforce issues

As AI starts taking on more of the repetitive work, the cost of bad data goes up.

If the underlying data is inconsistent, duplicated, outdated, or unclear, AI doesn’t just make the same mistakes faster. It amplifies the confusion. It can surface the wrong “answer” confidently, which erodes trust across the organization.

That’s why the path forward isn’t “buy an AI tool.” It’s building a data foundation that makes AI safe and useful: clear ownership, consistent definitions, reliable access controls, and a shared view of the information teams use to run the mission.

When people can trust the data, they can trust the outputs. When people can trust the outputs, they can actually adopt the change.

Try asking your team:

  1. Which parts of your job today feel repetitive or manual—and which parts actually move the mission forward for constituents?
  2. Where do you lose the most time each week: finding information, verifying it, or reconciling conflicting versions—and how does that delay affect the people you serve?
  3. If AI could remove one bottleneck that keeps constituents waiting, what would it be?
  4. What would you need to see or understand in an AI-generated answer to trust it enough to use it in a decision that affects someone’s eligibility, benefits, or experience?
  5. What’s one report, metric, or data source you rely on but don’t fully trust—and how does that uncertainty show up in constituent-facing work?

  6. If a constituent or auditor asked, “Where did this number come from?” could we explain it in plain English without guessing?

Last updated: December 18, 2025

