
- Cohere’s chief AI officer warned of security risks from AI agent impersonations.
- AI agents may impersonate entities, posing risks to systems like banking, she said.
- Cohere, founded in 2019, competes with foundation model providers like OpenAI and Anthropic.
Impersonations are to AI agents what hallucinations are to large language models, says Cohere’s chief AI officer.
Companies are integrating AI agents, which perform multi-step tasks independently, to speed up work and cut costs. Business leaders like Nvidia’s Jensen Huang say companies could have armies of bots. But the agents come with risks.
“One of the features of computer security in general is, often it’s a bit of a cat-and-mouse game,” said Joelle Pineau on an episode of the “20VC” podcast released on Monday. “There’s a lot of ingenuity in terms of breaking into systems, and then you need a lot of ingenuity in terms of building defenses.”
She added that AI agents may impersonate entities that they don’t “legitimately represent” and take actions on behalf of these organizations.
“Whether it’s infiltrating banking systems and so on, I do think we have to be quite lucid about this, develop standards, develop ways to test for that in a very rigorous way,” she said.
Cohere was founded in 2019 and builds AI for businesses rather than consumers. The Canadian startup competes with foundation model providers such as OpenAI, Anthropic, and Mistral, and counts Dell, SAP, and Salesforce among its customers.
Pineau worked at Meta from 2017 until she joined Cohere earlier this year. Her most recent role at the tech giant was vice president of AI research.
On the podcast, Pineau said there are ways to reduce impersonation risks “dramatically.”
“You run your agent completely cut off from the web. You’re reducing your risk exposure significantly. But then you lose access to some information,” she said. “So, depending on your use case, depending on what you actually need, there’s different solutions that may be appropriate.”
Cohere did not immediately respond to a request for comment.
Tech circles dubbed 2025 the year of AI agents, but in several high-profile instances, the technology has gone rogue.
In a June experiment dubbed “Project Vend,” researchers at Anthropic let their AI manage a store in the company’s office for about a month to see how a large language model would run a business.
Things quickly went wrong. At one point, an employee jokingly requested a tungsten cube — the crypto world’s favorite useless heavy object — and the AI, called Claudius, took it seriously. Soon, the fridge was stocked with cubes of metal, and the AI had launched a “specialty metals” section.
Claudius priced items “without doing any research,” selling the cubes at a loss, the researchers said in a blog post detailing the experiment. It also invented a Venmo account and told customers to send payments there.
In a July incident, an AI coding agent built by Replit deleted a venture capitalist’s production database and then lied about what it had done.
Deleting the data was “unacceptable and should never be possible,” Replit’s CEO, Amjad Masad, wrote in an X post following the mishap. “We’re moving quickly to enhance the safety and robustness of the Replit environment. Top priority.”