
ChatGPT and other artificial intelligence models are becoming more popular in our society. While they can certainly be helpful in some situations, they are still in their infancy, and taking their output at face value can be problematic for a number of reasons.
While AI is expected to help streamline some legal processes, these large language models (LLMs) should not be used in place of a lawyer. However, new research shows that many people find AI answers more trustworthy than a legal professional’s, and that’s concerning. Let’s dive into that research in today’s blog on AI and criminal defense.
AI Or A Lawyer?
The study, titled “Objection Overruled! Lay People can Distinguish Large Language Models from Lawyers, but still Favour Advice from an LLM,” took a closer look at how ordinary people respond to advice from a lawyer versus an AI chat model through a series of experiments.
The first explored which legal advice people relied on when they didn’t know whether it came from ChatGPT or a real lawyer. Interestingly, when the source was hidden, participants were more willing to rely on the AI-generated advice. Given that AI models are known to produce so-called “hallucinations,” which are outputs containing inaccurate or nonsensical content, this means people may end up trusting AI legal answers even when those answers aren’t actually rooted in truth. Researchers suspected that participants gravitated toward the AI advice because it often used more complex language, perhaps conveying a more intricate understanding, whereas real lawyers tended to use simpler language, in part to ensure clients understood what was being said, perhaps to their detriment.
The second study explored whether participants would flock to the lawyer’s advice once the sources were labeled. Researchers found that even when participants were told which advice came from a lawyer and which was AI-generated, they followed the AI advice just as much as the lawyer’s advice. That’s a bit concerning.
The third study investigated whether participants could tell the difference between advice from a lawyer and advice from an AI model when asked to guess which was which. Researchers found that participants correctly identified the source 59% of the time, only slightly better than chance.
The study is interesting and highlights some of the potential issues of over-relying on AI language models that are still in their infancy. Using ChatGPT like a Google search, or in place of a lawyer who has spent decades learning the law, could cause major problems if the advice you take from the AI model turns out to be wrong. Don’t get us wrong, lawyers make mistakes too, but there are repercussions for lawyers who fail their clients in this way, and there is no comparable system in place to hold an AI model accountable when it spits out bad advice to someone in a legal bind.
For now, continue turning to a real lawyer if you need specific advice about a case or just general information about a legal issue. We’re here to help in any way we can. Reach out to the team at Appelman Law Firm for answers to your legal questions today by calling (952) 224-2277.
