
Apple says it has found a way to improve its AI without sacrificing one of its core values: user privacy.
As rivals like Meta and xAI have advanced their AI by training them on user data, Apple’s own AI has faltered as the iPhone maker stuck to its creed that privacy is a “fundamental human right.”
Now, Apple is having a careful rethink about its approach to the user data it aggressively protects as it looks to play catch-up in Silicon Valley’s hottest field.
In a blog post published on Monday, the company said it was “developing new techniques” that would allow it to train its AI — called Apple Intelligence — without collecting “actual emails or text from devices.”
Apple’s plan is to use more synthetic data — a form of data generated by AI itself — and enhance it by comparing it to real-world data from users opted into the company’s Device Analytics program.
“When creating synthetic data, our goal is to produce synthetic sentences or emails that are similar enough in topic or style to the real thing to help improve our models for summarization, but without Apple collecting emails from the device,” Apple said in its blog post.
The company shared one example of what this looks like in practice.
First, it can create “a large set of synthetic messages on a variety of topics,” such as “Would you like to play tennis tomorrow at 11:30 am?” It said this is done “without any knowledge of individual user emails.”
The Apple device of an opted-in user then compares the synthetic emails to “a small sample” of recent real-world emails, checking for similarities. The synthetic emails with the greatest similarities to the real-world samples are the ones Apple then uses to train its AI.
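The selection step Apple describes can be sketched in a few lines. Everything below is a toy illustration, not Apple's code: the bag-of-words "embedding" and the function names are assumptions, and Apple's actual on-device representations and thresholds are not public. The idea is simply to score each synthetic message by its best similarity to any email in the user's small real-world sample, then keep the top matches.

```python
# Hypothetical sketch of on-device synthetic-message selection.
# The embedding here is a toy lowercase bag-of-words; Apple's real
# embeddings and similarity metric are not publicly documented.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_closest(synthetic: list[str], real_sample: list[str], k: int) -> list[str]:
    """Rank each synthetic message by its best similarity to any real email,
    and return the top k. Only the synthetic text ever leaves the device."""
    scored = [(max(cosine(embed(s), embed(r)) for r in real_sample), s)
              for s in synthetic]
    scored.sort(reverse=True)
    return [s for _, s in scored[:k]]

synthetic = [
    "Would you like to play tennis tomorrow at 11:30 am?",
    "Quarterly budget review moved to Friday.",
    "Dinner reservation confirmed for 7 pm.",
]
real_sample = ["Are you free for tennis on Saturday morning?"]
print(select_closest(synthetic, real_sample, k=1))
```

Here the tennis invitation scores highest because it shares the most vocabulary with the real sample, so it would be the message chosen to help train the model, while the user's actual email stays on the device.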
Apple said it would soon start using this approach with opted-in users to improve email summaries.
Apple has already been using a technique called “differential privacy” to gain insight into how a product is used without tracking identifiable information for Genmoji, its custom emojis generated with its AI. It now plans to improve Apple Intelligence features such as Image Playground, Image Wand, and more using that technique.
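The simplest member of the differential-privacy family is randomized response, which conveys the flavor of what Apple describes: each device adds noise to its own report before sending it, so no individual answer is trustworthy, yet the aggregate is. The sketch below is illustrative only; Apple's deployed mechanisms (such as count-mean-sketch) are more sophisticated, and the parameter names here are assumptions.

```python
# Illustrative local differential privacy via randomized response.
# Each device reports a true/false bit (e.g. "used this Genmoji prompt"),
# flipped with a probability controlled by the privacy parameter epsilon.
# The server inverts the known noise rate to recover the population rate.
import math
import random

def randomized_response(truth: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return truth if random.random() < p_truth else not truth

def estimate_true_rate(reports: list[bool], epsilon: float) -> float:
    """Invert the noise to estimate the true rate across the population.

    observed = rate * p + (1 - rate) * (1 - p)  =>  rate = (observed + p - 1) / (2p - 1)
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)

random.seed(0)
true_rate = 0.30   # fraction of (simulated) users who used a given feature
eps = 1.0
reports = [randomized_response(random.random() < true_rate, eps)
           for _ in range(100_000)]
print(round(estimate_true_rate(reports, eps), 2))  # lands close to 0.30
```

No single report reveals what its user actually did, but with enough opted-in devices the aggregate estimate converges on the true usage rate, which is the kind of product insight Apple says it collects this way.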
AI needs quality data
It’s hard to overstate the importance of data in making AI models.
AI labs at the forefront of development have relied on user data to train large language models capable of understanding a broad scope of human interests. OpenAI CEO Sam Altman describes data as one of three core resources needed to improve model intelligence.
But for Apple — a company that markets itself on strong privacy — putting data to use has been more complex than it might be for others.
In its blog post, Apple said its principles have, to date, ensured that it doesn’t use its “users’ private personal data or user interactions” when training its foundational models.
Building powerful AI under principles like these can be challenging, as AI models tend to get smarter when they have more detailed data to learn from.
Under these constraints, Apple has faced criticism over its AI rollout. In March, Apple delayed its overhaul of its AI assistant Siri — a rare move for a company known for its polished product roadmaps.
In January, Apple temporarily disabled AI summaries of news notifications after media outlets complained that the AI-generated summaries contained factual errors.
Apple will hope that its new strategy will provide a much-needed boost to catch up to rivals who might pay less heed to data privacy.
The post Apple needs its AI to be better but wants you to know it still really, really cares about your privacy appeared first on Business Insider.