It was a busy week in AI, as top companies rolled out new tools, models, and research.
Here’s a look at what happened.
OpenAI’s image generator broke the internet
On Tuesday, OpenAI rolled out a native image generation feature in ChatGPT — and the internet immediately jumped on it.
The new tool, powered by the GPT-4o model, allows users to generate images directly in the chatbot without routing through DALL-E.
It became an instant hit, with users transforming real photos into soft-focus, anime-style portraits, often mimicking the look of Studio Ghibli films.
By Wednesday night, users noticed that some prompts referencing Ghibli and other artist styles were being blocked. OpenAI later confirmed it had added a “refusal which triggers when a user attempts to generate an image in the style of a living artist.”
Demand became so strong that OpenAI CEO Sam Altman said temporary rate limits would be introduced while his team worked on making the image feature more efficient.
“It’s super fun seeing people love images in chatgpt. But our GPUs are melting,” Altman wrote. “Chatgpt free tier will get 3 generations per day soon.”
The feature wasn’t without issues, with one user pointing out that the model struggled to render “sexy women.” Altman said on X that it was “a bug” that would be fixed.
Things also took a dark turn as the week progressed.
Google dropped its most advanced model yet
While OpenAI dominated headlines, Google introduced Gemini 2.5 on Tuesday — a new family of AI reasoning models designed to “pause” and think before responding.
The first release, Gemini 2.5 Pro Experimental, is a multimodal model built for logic, STEM tasks, coding, and agentic applications. It can process text, audio, images, video, and code.
The model is available to subscribers of the $20-a-month Gemini Advanced plan.
Gemini 2.5 Pro is now *easily* the best model for code.
- it’s extremely powerful
- the 1M token context is legit
- doesn’t just agree with you 24/7
- shows flashes of genuine insight/brilliance
- consistently 1-shots entire tickets
Google delivered a real winner here.
— Mckay Wrigley (@mckaywrigley) March 27, 2025
Google says all new Gemini models will include reasoning by default.
Anthropic’s report on how people are using AI at work
On Thursday, Anthropic released the second report from its Economic Index — a project tracking AI’s impact on jobs and the economy.
The report analyzes 1 million anonymized conversations from Anthropic’s Claude 3.7 Sonnet model and maps them to more than 17,000 US job tasks in the Department of Labor’s O*NET database.
It offers a detailed look at how people are using AI at work.
One key takeaway was that “augmentation” still appeared to edge out “automation,” accounting for 57% of usage. In other words, most users aren’t handing work off to AI, but are working with it.
The data also suggested that user interaction with AI differs across professions and tasks. Tasks linked to copywriters and editors showed the highest levels of task iteration — where the human and model write together.
In contrast, tasks associated with translators and interpreters showed the highest reliance on directive use, where the model completes the task with minimal human involvement.