DNYUZ

A.I. Complicates Old Internet Privacy Risks

February 26, 2026
in News

This month, a federal judge ruled that a man’s conversations with Anthropic’s Claude chatbot were not protected by attorney-client privilege even though he had used the chatbot to prepare to talk with lawyers.

Two weeks ago, Ring, the Amazon-owned maker of doorbell cameras, provoked widespread outrage when it aired a Super Bowl ad showing how artificial intelligence could be used to find lost dogs. Critics quickly noted that it could also be used to monitor an entire neighborhood. The company has been on an apology tour ever since.

And over the past week, news surfaced that OpenAI, the company behind ChatGPT, had been aware of a British Columbia woman’s interactions with the chatbot and considered reporting her to the authorities months before she committed a mass shooting.

While OpenAI faces questions about whether it should have been more proactive about reporting what she wrote, the incident highlighted the possibility that A.I. companies will be under more pressure to share private chat logs with the authorities.

At the center of these headlines was generative A.I., the technology popularized by chatbots that is creeping into the everyday tools people use to search the web, write essays and code. The steady cadence of news reports about consumer privacy raises the question of whether A.I. has exposed people's personal information more than older technologies did.

The reality, privacy experts say, is that the risk of sharing data with tech companies is roughly the same as it has always been. Almost any data sent to a company's servers can potentially be accessed by employees, government agencies, lawyers or criminals who exploit loopholes and security breaches.

But the intimate nature of conversations with a chatbot adds a new twist to an old problem: People are sharing much more than they once did. Unlike a traditional web search tool, chatbots invite people to type complete thoughts and follow-up questions, revealing their intentions much more explicitly.

“The issues are, in many cases, the same, but it’s a way of interacting with technology that previously hadn’t been done,” said Chris Gilliard, an independent privacy scholar in Detroit. “When that happens, people need to be rewired in terms of understanding what the threats and harms are.”

It’s a lesson that internet users have had to learn and relearn. About eight years ago, Meta, formerly known as Facebook, came under fire when news broke that Cambridge Analytica, a political consulting firm, had inappropriately hoovered up the data of 87 million Facebook users.

It was a watershed moment that made people reassess whether to share their personal data online. But it was a lesson that already seems forgotten in the era of chatbots, which people are turning to for help with work, therapy and even genuine companionship.

OpenAI said it gave people control over how their data was used, including an option to use temporary chats that do not log conversations in ChatGPT’s history.

(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)

Anthropic said it complied with applicable laws and may be required to share information when presented with valid legal requests.

In the incident involving Jesse Van Rootselaar, the person the authorities identified as the shooter in British Columbia, OpenAI said it had banned her account in June after her messages to ChatGPT automatically triggered an internal review, which included investigations by staff members. The company said it had opted against sharing information with law enforcement after determining there was no evidence of imminent planning by the user.

“Protecting privacy and safety in ChatGPT both matter, and we prioritize safety when there’s credible and imminent planning of real-world harm,” OpenAI said in a statement. “Our automated systems escalate critical situations, like threats to life or serious harm to others, for limited human review to take necessary action.”

While it was unremarkable that OpenAI, like many companies, had a system in place to monitor for abuse of its service, the incident is likely to create discussion among legal experts about whether A.I. companies should be held liable for the conversations that users have with chatbots and at what point it’s necessary to share data with law enforcement, said Jennifer Granick, a lawyer who focuses on surveillance and cybersecurity for the American Civil Liberties Union.

Section 230 of the Communications Decency Act generally shields internet companies from liability for content posted by users on sites like Facebook, but it’s unclear whether those policies should similarly apply to chatbots since the conversations are different from posts on platforms, Ms. Granick added.

“We’re going to start seeing more litigation to flesh this out about what responsibility to report looks like to law enforcement,” she said.

The story involving Anthropic's Claude chatbot illustrates how confusion can arise when people treat a commercial chatbot as a note taker and research tool. A federal judge decided that prosecutors could have access to the transcripts of a man accused of wire fraud who had chatted with Claude to prepare to speak with lawyers. The judge's rationale was that Claude was not a lawyer, so the conversations were not protected by attorney-client privilege. The decision has lawyers nationwide buzzing because it underscores the potential pitfalls of using A.I. compared with older tools.

In contrast, if a defendant jotted down notes and shared the memo only with a lawyer, that could be protected by attorney-client privilege, said Laura Riposo VanDruff, a partner at the law firm Kelley Drye who focuses on consumer privacy and data security. Because communications with a chatbot are stored on a company’s servers, they may not be legally protected.

Anthropic noted that the defendant seeking to protect his Claude transcripts under attorney-client privilege had provided them to the court himself after federal agents seized his devices, meaning Anthropic did not share the data.

And in the incident involving Ring, the company’s Super Bowl ad had depicted the owner of a lost dog using a feature called Search Party, which uses A.I. and images from a network of home surveillance cameras to track down the missing pooch. The company clarified in an interview that Ring camera owners had to agree to share information through a Search Party request.

The security risks with using A.I. could grow as companies push for A.I. assistants to evolve into so-called agents that require access to virtually all of a person’s data on a computer or smartphone to offer help. Google and Microsoft have released these types of software tools in the last two years, and the rest of the tech industry is expected to follow suit.

Google’s Magic Cue, a software tool released for the company’s Pixel smartphones last year, can dig into a person’s email, for example, to look up a flight itinerary and write an automatic text message to a friend asking for arrival details. Microsoft’s Recall, which debuted on newer Windows machines, took screenshots of everything a user did to help with looking up important files or details discussed on a video call.

A butler who constantly asked for permission to view data from a user's calendar, email, text messages and other apps to offer help throughout the day would be disruptive. So in designing agents, companies are asking users for permission to access all of their personal data just once.

Meredith Whittaker, the president of the Signal Foundation, the nonprofit behind the Signal private messaging app, underlined the potential dangers of an agent’s unfettered access to a person’s data.

The unintended consequence is that communications meant to be confidential, such as a person's encrypted Signal messages, could be exposed if malware compromised the A.I. systems granted access to them, and eventually leaked, Ms. Whittaker said.

“If they’re going to act fully on your behalf, they have to know everything,” she said about A.I. agents. “We need to be much more discerning. Where do these types of automations actually bring us benefits, and where are they too dangerous?”

Brian X. Chen is the lead consumer technology writer for The Times. He reviews products and writes Tech Fix, a column about the social implications of the tech we use.

The post A.I. Complicates Old Internet Privacy Risks appeared first on New York Times.


DNYUZ © 2026
