AI’s use in the legal sphere is the gift that keeps on giving.
This time, it’s the sobering sense of disappointment that set in after a team building an AI chatbot for Alaska’s court system actually tested it and found out it was a hallucinating disaster, NBC News reports.
The chatbot, dubbed the Alaska Virtual Assistant, was designed to help people handle forms and other procedures involved in probate, the legal process of transferring a person’s belongings after their death.
In a predictable turn of events, instead of streamlining an already headache-inducing process inflicted on people who are probably mourning the loss of a loved one, the AI bungled simple questions and left most users feeling annoyed rather than supported.
Exhibiting a failing inherent to all large language models, the esteemed virtual assistant kept hallucinating, or making up facts and sharing exaggerated information, according to the people involved in its development.
“We had trouble with hallucinations, regardless of the model, where the chatbot was not supposed to actually use anything outside of its knowledge base,” Aubrie Souza, a consultant with the National Center for State Courts (NCSC), told NBC News. “For example, when we asked it, ‘Where do I get legal help?’ it would tell you, ‘There’s a law school in Alaska, and so look at the alumni network.’ But there is no law school in Alaska.”
And rather than finding it helpful, most people who tested it found it incredibly grating. The bot, unsurprisingly, suffered from the same character flaw plaguing most chatbots: being too sycophantic and cloying, feigning empathy and plying you with pleasantries instead of just getting down to business.
“Through our user testing, everyone said, ‘I’m tired of everybody in my life telling me that they’re sorry for my loss,’” Souza said. “So we basically removed those kinds of condolences, because from an AI chatbot, you don’t need one more.”
Built in collaboration with Tom Martin, a lawyer who runs a company called LawDroid that makes AI legal tools, AVA has been trapped in development hell for over a year, despite being “supposed to be a three-month project,” according to Souza. After lowering their expectations — and assuredly ironing out its horrendous flaws — AVA’s team says it’s finally ready for a public launch in late January.
“We did shift our goals on this project a little bit,” Stacey Marz, administrative director of the Alaska Court System and an AVA project leader, told NBC News. “We wanted to replicate what our human facilitators at the self-help center are able to share with people. But we’re not confident that the bots can work in that fashion, because of the issues with some inaccuracies and some incompleteness.”
“It was just so very labor-intensive to do this,” Marz added, despite “all the buzz about generative AI, and everybody saying this is going to revolutionize self-help and democratize access to the courts.”
The post Court System Says Hallucinating AI System Is Ready to Be Deployed After Dramatically Lowering Expectations appeared first on Futurism.