People Would Trust AI In Government Before Letting It Be Their Doctor
Artificial intelligence has crashed the office party, and everyone from coders to CEOs is wondering who gets kicked out. Three years after ChatGPT mainstreamed generative AI, we wanted to know how real people — not founders, not pundits — feel about working alongside it.
So we surveyed 1,000 recent applicants to ELVTR’s professional courses in the UK and US. These are mostly white-collar workers in tech, creative and business roles: the people most likely to be using AI now, and most likely to be affected by it.
The results are a study in selective trust. Most respondents want doctors, judges, therapists and teachers to remain human-only — in some cases, by law. Yet they’re far more open to the idea of AI helping to run government. Many already lean on AI as a learning tool or productivity boost. And the people most convinced AI “could never” do their job are, tellingly, the ones who don’t use it at all.
Here’s what the data says about how workers are really thinking about AI in late 2025.
Where Workers Draw the Line on AI Jobs
Start with the blunt question: Which professions would you be willing to replace with AI?
The most common answer wasn’t “sales,” “HR,” or “software engineering.” It was: “None of these.” A slim majority — 52% of respondents — said they wouldn’t hand any of the listed roles entirely over to AI. Despite years of “robots are coming for your job” headlines, most people are not eager to automate whole professions out of existence.
Among those who are open to an AI takeover, a pattern emerges:
- Sales and retail roles are at the top of the “fair game” list. Around a quarter of respondents would be comfortable with AI sales reps or retail assistants.
- HR specialists, software engineers and graphic designers follow, with roughly one in five saying those jobs could be performed by AI.
- Law enforcement and healthcare sit at the other extreme. Only about 6% want AI “robo-cops.” And just 5% would trust an AI in the role of doctor or nurse.
When we flip the question and ask which professions should by law remain human-led, the boundaries become even clearer:
- Over 80% say healthcare workers and judges must stay human-only. “Judge GPT” is not a popular idea.
- Around three-quarters say the same about teachers and therapists.
- Politicians and senior government executives get less protection. Roughly one-third of respondents would not insist on humans in those roles.
In other words: if a job involves life, death, justice or children, people want a human in charge. For bureaucrats and paper-pushers, the door to the server room is much more open.
The Self-Delusion Gap: “AI Couldn’t Do My Job”
When the conversation shifts from other people’s jobs to “my job,” confidence levels jump.
Asked whether a well-designed AI could perform their job:
- About 51% say AI could handle some parts of their work — the drafting, number-crunching or routine tasks — but not replace them outright.
- Roughly 42% insist: “No — AI could not do my job.”
- Only 4.5% believe an AI would probably do their job better than they do.
That 42% is revealing. More than two in five respondents are convinced their work requires something uniquely human that no model can match.
The twist is who, exactly, feels so irreplaceable.
Among people who never use AI at work, nearly 60% say AI could not do their job. They’re the most dismissive of AI’s capabilities — and also the least experienced with it. By contrast, among regular AI users, outright rejection drops: most of them acknowledge that AI could handle at least a meaningful slice of what they do, even if they’re not ready to declare themselves obsolete.
The pattern is reminiscent of the Dunning–Kruger effect: those with the least exposure to a tool are the most confident in dismissing it. The more you work with AI, the harder it is to maintain the illusion that your role is beyond automation.
For employers, this matters. If your non-AI users are also the ones most convinced they’re immune to disruption, you don’t just have a skills gap; you have a perception gap. Workforce planning, upskilling and change management will all be harder if a large chunk of employees are both under-tooled and over-confident.
AI Became “Critical” In Record Time
The most quietly explosive number in the survey isn’t about fear or hype. It’s this: 16% of respondents say AI is already critical to their job — as in, “I’m not sure I could do my work without it.”
That’s roughly one in six professionals, just three years after the first general-purpose AI tools hit the mainstream. It’s hard to find another workplace technology that went from curiosity to mission-critical this fast. Email, spreadsheets, smartphones — all took years, even decades, to become truly indispensable. AI is compressing that curve.
The rest of the workforce is scattered along the adoption slope:
- 40% say losing AI tomorrow would have no impact on their work.
- Around 43% say it would have “some” impact — annoying, but survivable.
- Then there’s that 16% who describe AI loss as serious or “critical.”
You can read this as a story of limited dependence — “most people can still function without AI.” But it’s just as valid to read it as an early-warning chart: a meaningful minority of workers has already wired AI into their daily workflow so deeply that pulling it out would break things.
Usage patterns show how fast this is happening:
- Nearly 39% don’t use AI at work at all — yet.
- Among users, most still rely on AI for less than one-third of their tasks.
- A small but growing 3–4% say AI handles most of their work.
Still, a psychological dependence is starting to emerge. About a quarter of respondents say they feel at least occasional anxiety at the thought of losing access to AI tools; nearly 10% say they “straight-up freak out” at the idea. That anxious share exceeds the share who say their work would actually fall apart without AI, suggesting that for some, AI has become not just a tool but a psychological safety net.
On the risk side, reality is calmer than the headlines:
- 93% report no personal issue tied to AI use — no reprimands, no lost clients, no rescinded offers.
- About 7% have had problems: a warning for using AI where it wasn’t allowed, a client unhappy with AI-generated output, or a job offer reconsidered after an employer spotted AI-written work.
At the company level, roughly 15% say they’ve seen AI cause a major mistake: money lost, legal trouble, or public embarrassment. That still leaves 85% who either haven’t lived through a serious AI failure or work somewhere that barely uses it.
Put together, this isn’t a picture of total dependence or total chaos. It’s something more interesting — and more unsettling for leaders: a new kind of early-stage dependency curve. A meaningful slice of your workforce is already building their day around AI, long before your org chart, processes, or risk management have caught up. If one in six knowledge workers now feel they can’t function without these tools, the question isn’t whether that share will grow.
It’s how fast, and how prepared you’ll be when it does.
Falling Behind the AI Learning Curve
AI isn’t just testing what we can automate; it’s testing how fast people can learn.
We asked respondents how well they feel they’re keeping up with AI’s rapid evolution in their field. Only about 45% say, “Yes — I can keep up.” Everyone else is either treading water or has stopped swimming:
- A solid 29% admit they’re “not even trying to keep up.”
- The rest are attempting to stay current but describe themselves as “barely keeping up” or already “falling behind.”
For employers, that’s a red flag. If more than half your workforce feels outpaced by a core technology, that’s not an IQ problem — it’s a leadership and training problem.
It also hints at a widening divide: between workers who see AI as a skill they must master, and those hoping to outrun it to retirement. Companies that invest in closing this gap will almost certainly pull ahead. Those that don’t may find an uncomfortable share of their staff stuck in “not even trying” mode while competitors build AI-literate teams.
(Disclosure: I run ELVTR, a live-online education platform, so I have a stake in how organizations address this. But the data here stands on its own.)
AI as Teacher, Not Just Tool
One of the sharpest debates in the AI era is whether these tools make us sharper or duller. Are we outsourcing our thinking, or enhancing it?
Among people who actually use AI, our respondents lean toward enhancement:
- 32% say their critical thinking has improved since they started using AI.
- Only 18% feel it has declined.
- The rest see no change or aren’t sure.
That’s nearly a two-to-one ratio in favor of “AI sharpened my mind” over “it turned my brain to mush.” It suggests that using AI effectively — writing good prompts, checking outputs, stitching together information — may itself be a cognitive workout.
The learning isn’t just abstract. When asked where AI has helped them gain new skills or knowledge, respondents pointed to:
- Writing and creative work (around 50% credit AI with improving their writing or content creation).
- Coding and engineering skills (30%).
- Foreign languages, history, cooking and personal finance (roughly 20% each).
- Even relationships and sex advice show up, though here only about 10% say they’ve learned something useful.
Overall, about seven in ten say they’ve learned something new from using AI tools. Only 31% say, “I haven’t learned anything new from AI.”
This reframes AI from a pure automation story to an education story. For many workers, AI is already functioning as a ubiquitous teaching assistant — explaining concepts, debugging code, helping draft documents and filling in knowledge gaps on demand. The challenge now is to ensure access and literacy are distributed widely, not just among the most proactive or tech-comfortable slice of the workforce.
Futures and Fantasies: Clones and AI Government
A couple of more speculative questions show where people’s imaginations are heading.
First, the AI clone. We asked: if it were safe and ethical, would you want an AI clone of yourself?
- About 20% said yes: they’d like a digital version of themselves to handle meetings and admin, or maybe just to exist.
- Roughly 65% said no, they prefer to remain resolutely singular.
- The remaining 15% are unsure what to make of the idea.
That a fifth of respondents are open to cloning their own persona suggests a mix of pragmatism and ego. It’s part workplace fantasy (someone else attends the 8 a.m. call), part Black Mirror episode.
Then there’s government.
We asked whether respondents think AI could run a country or government department better than humans:
- 57% say no — governing is a human-only job.
- About 32% say “maybe some departments” could be run better by AI.
- 6.5% are ready to contemplate a full AI government takeover.
That means nearly four in ten can imagine at least parts of government being run more competently by algorithms than by current leaders. Given widespread frustration with politics, that might be less about utopian faith in AI and more about disappointment with the status quo.
Either way, these answers show the Overton window has shifted. We are not electing AI presidents yet, but large minorities are willing to at least entertain the idea that some public-sector functions might be handled, or constrained, by systems rather than people.
What Leaders Should Do With This
Taken together, this snapshot of AI perceptions in late 2025 tells a story of cautious, uneven adoption.
Workers are:
- Protective of roles that touch health, justice, therapy and children — and more relaxed about AI in bureaucracy and even parts of government.
- Confident about their own irreplaceability, especially if they don’t use AI — while regular users are more likely to concede that AI could do chunks of their job.
- Experimenting with AI as a sidekick and tutor, but only a minority truly depend on it to function day-to-day.
- Divided on their ability to keep up with AI’s evolution, with a significant share not even trying.
The good news: we haven’t lost our jobs, or our judgment, to AI yet. The bad news for complacent organizations: your competitors are already turning AI into a force multiplier for their people. Whether this technology makes your workforce more capable or more anxious will depend less on the models — and more on how you choose to lead.
Methodology
The findings in this article are based on a survey of 1,000 recent applicants to ELVTR courses in the UK and US, conducted in late 2025. Respondents are primarily white-collar professionals in tech, creative, and business-adjacent roles. The survey was administered online, with closed-ended questions about AI use, attitudes and perceived impact at work. Percentages are rounded to the nearest whole number where appropriate.