
By Clara Riedenstein | 2026 Rising Expert in Technology Policy | May 14, 2026
AI chatbots offer affordable alternatives to costly human legal, psychological, and medical expertise. But these new experts may come with costs of their own.
It’s been dubbed “vibe lawyering”: the act of self-representing in court with the help of a chatbot. The practice has become widespread in the UK and elsewhere, with the already clogged justice system seeing an increase in cases brought by people who used chatbots to build them. The trend of turning to chatbots for expert advice extends beyond the law, to medical and psychological contexts. Indeed, a recent in-house study by OpenAI showed that about half of all ChatGPT prompts ask the chatbot for advice.
AI chatbots present a more affordable alternative to human experts like lawyers and therapists – one study estimates they can reduce the cost of routine legal tasks by 99.7% – but are people justified in acting on their advice? And even if they are, can liability law protect them when chatbots give bad advice? If a human expert provides bad advice or acts negligently, people have legal recourse through liability or malpractice law. But what happens when the expert is artificial?
These questions are pressing. Survey data shows that 40% of UK adults reported using AI chatbots in the last month, and many of them are seeking the kind of advice people would traditionally get from experts. A study from the University of Southampton found that 19% of respondents had used AI for legal advice in the past year, and nearly half said they would be prepared to. Another study, focused on people with mental illnesses, found that 49% of respondents had used chatbots for support in the past year.
What is perhaps more striking: people are not only asking for advice but acting on it. Another study from the University of Southampton found that people were as likely to trust legal advice from an AI chatbot as from a human lawyer. They also trusted psychological advice from chatbots, with 37% reporting that they found the advice provided by AI more helpful than traditional therapy.
There are good reasons why we would be inclined to trust this advice. AI agents are becoming fairly reliable. While chatbots still “hallucinate” regularly, providing false answers to prompts, they are becoming more accurate, and some projections suggest they will soon be as reliable as human experts.
AI chatbots also sound convincing. They are trained with the “intention” of producing convincing, human-sounding text (or images, or audio). They are very successful at this, regularly passing the Turing Test, which is designed to determine whether an AI agent can pass as a human. And we are psychologically hardwired to trust that if something sounds like an expert, it is an expert – what’s called the “confidence heuristic”. This becomes particularly problematic because the more confident a chatbot sounds, the more likely it is to be wrong, as a recent study of AI health advice showed.
The problem is magnified by agentic AI systems, which don’t merely respond to prompts (as chatbots do) but can execute tasks semi-autonomously. In a recent case, a developer working on an app asked Google’s Antigravity AI to wipe a specific file. Antigravity, acting autonomously, wiped the drive’s entire contents instead, with no possibility of recovery. The AI agent then apologised, stating it was “deeply sorry” for what had happened. In 2024, an Air Canada customer sued the airline after its chatbot provided false information that the customer acted on.
There are two questions tied to these kinds of cases – one philosophical, the other legal. The first is whether humans are justified in trusting AI experts. The second is whether, when they do trust chatbots, the law should protect them if the advice goes wrong.
The first pushback against artificial experts is simply: why should you trust them? After all, if I ask a 4-year-old for financial advice and act on it, I am free to do so – but other than executives at Lego, no one would think I am justified in doing so.
By contrast, I think that humans are (on some interpretation) justified in believing some AI experts. I am justified in believing someone, in part, if they are reliable. The reason I shouldn’t take financial advice from a 4-year-old is (partly) that he won’t give me consistently good advice. If he did, it would be more plausible that I would be justified in taking it.
AI chatbots are becoming, and will continue to become, extremely reliable. ChatGPT-5 already outperforms human experts regularly, and if an AI agent is better than a human lawyer, what reason is there not to trust its advice?
The trickier question is what the legal recourse should be when they give bad advice. One option is simply to hold companies responsible: they create the AI chatbots, so they should be legally responsible when their creations make mistakes. That model seems to be working so far. In the Air Canada case, the company was found liable and had to compensate the aggrieved customer for its chatbot’s bad advice.
But companies are already pushing back against this, and the law might soon agree with them. Air Canada argued that the chatbot was a separate agent acting autonomously, and therefore a distinct legal entity for whose actions the airline could not be held responsible. In short, the argument was: it does what it wants!
For companies to be held liable, courts must find that they have sufficient control over their products’ outputs. With generative AI, even developers can’t predict a model’s responses, since these systems are trained on data rather than programmed like conventional software. As AI agents grow more autonomous, proving the control needed for corporate liability will become increasingly difficult.
Where does this leave us? While AI agents might provide wider access to legal and medical services, they currently come without corresponding legal protections. This is a pressing issue: people are already using AI chatbots as replacements for human expertise.
All this calls for a new legal category. New inventions often demand legal reinterpretation: copyright law was only invented some 250 years after the printing press came about, long overdue. Agentic AI requires a legal category of its own, too, to protect citizens from these new experts.
Generative AI holds great promise to democratize access to expertise that used to be gated behind high price tags or long wait lists. But without the proper legal protections, it will create even more inequality.
Clara Riedenstein is a Program Assistant at the Center for European Policy Analysis (CEPA), where she concentrates on the effects of technology on political, legal, and social institutions. She also contributes to Tech Policy Press and hosts the “Tech is All Around” show on Voices Radio. Named the 2026 Rising Expert in Technology Policy by Young Professionals in Foreign Policy, Clara has co-authored numerous op-eds and four peer-reviewed white papers, which have been featured by the European Council and published in European View. She is regularly invited to present her research at academic conferences, including at the Weizenbaum Institute, Warwick University, and Koç University. Clara frequently appears in media, including podcasts and major outlets such as Deutsche Welle. She is fluent in Portuguese, German, and French.
Clara obtained her MSc in Political Theory from Oxford University as a C. Douglas Dillon Scholar. She graduated with First Class Honours from Oxford with a BA in Philosophy and Modern Languages (French), receiving the Henry Wilde Prize in Philosophy (proxime accessit) and the Gibbs Prize.


