Since OpenAI released its blockbuster bot ChatGPT in November, users have casually experimented with the tool, with even Insider reporters trying to simulate news stories or use it to message potential dates.
Its rapid adoption by some 100 million users in just its first two months is already changing how the internet will look and feel. With both Microsoft and Google incorporating generative AI into their search engines, it seems only a matter of time before other websites adopt some kind of AI-driven interaction.
OpenAI’s new features announced in May could get us there soon enough. Users of its ChatGPT Plus subscription service will be able to use dozens of plug-ins for other websites, as well as a web-browsing feature that will let them access more current information than the fixed data set ChatGPT was trained on.
To older millennials who grew up with IRC chat rooms — a text-based instant messaging system — the personal tone of conversations with an AI bot can evoke the experience of chatting online. But ChatGPT, the latest in a class of technology known as “large language models,” doesn’t speak with sentience and doesn’t “think” the way people do.
That means that even though ChatGPT can explain quantum physics or write a poem on command, a full AI takeover isn’t exactly imminent, according to experts.
“There’s a saying that an infinite number of monkeys will eventually give you Shakespeare,” said Matthew Sag, a law professor at Emory University who studies copyright implications for training and using large language models like ChatGPT.
“There’s a large number of monkeys here, giving you things that are impressive — but there is intrinsically a difference between the way that humans produce language, and the way that large language models do it,” he said.
Chatbots like ChatGPT are powered by large amounts of data and computing techniques that make predictions about which words to string together in a meaningful way. They not only tap into a vast vocabulary and store of information, but also interpret words in context. This helps them mimic speech patterns while dispensing encyclopedic knowledge.
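The next-word prediction described above can be sketched with a toy lookup table. Real models learn probabilities over tens of thousands of tokens from enormous text corpora; the words and probabilities below are invented purely for illustration:

```python
# Toy sketch of next-word prediction, the core idea behind large
# language models. The table maps the last two words of a sentence
# to candidate next words with made-up probabilities.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def predict_next(context):
    """Pick the most likely next word given the last two words."""
    options = next_word_probs.get(context, {})
    if not options:
        return None  # no prediction available; stop generating
    return max(options, key=options.get)

def generate(start, max_words=5):
    """Repeatedly predict the next word to extend a sentence."""
    words = list(start)
    for _ in range(max_words):
        nxt = predict_next((words[-2], words[-1]))
        if nxt is None:
            break
        words.append(nxt)
    return words

print(" ".join(generate(("the", "cat"))))  # → "the cat sat on the"
```

A real model does the same thing at vastly greater scale, using learned neural-network weights instead of a hand-written table, which is why its output sounds fluent without involving any human-like understanding.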
Other tech companies like Google and Meta have developed their own large language model tools, which take in human prompts and devise sophisticated responses. OpenAI, in a revolutionary move, also created a user interface that lets the general public experiment with the technology directly.
Some recent efforts to use chatbots for real-world services have produced troubling and odd results. The mental health company Koko came under fire this month after its founder wrote about how the company used GPT-3 in an experiment to reply to users.
Koko cofounder Rob Morris hastened to clarify on Twitter that users weren’t speaking directly to a chatbot, but that AI was used to “help craft” responses.
Other researchers seem to be taking more measured approaches with generative AI tools. Daniel Linna Jr., a professor at Northwestern University who works with the non-profit Lawyers’ Committee for Better Housing, researches the effectiveness of technology in the law. He told Insider he’s helping to experiment with a chat bot called “Rentervention,” which is meant to support tenants.
That bot currently uses technology like Google’s Dialogflow, a conversational AI platform. Linna said he’s experimenting with ChatGPT to help “Rentervention” come up with better responses and draft more detailed letters, while gauging its limitations.
“I think there’s so much hype around ChatGPT, and tools like this have potential,” said Linna. “But it can’t do everything — it’s not magic.”
OpenAI has acknowledged as much, explaining on its own website that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”