Each week, Quartz rounds up product launches, updates, and funding news from artificial intelligence-focused startups and companies.
Here’s what’s going on this week in the ever-evolving AI industry.
OpenAI this week launched Deep Research, an AI agent that can “synthesize large amounts of online information” using its reasoning capabilities and complete multi-step research tasks. Deep Research is powered by a version of OpenAI’s upcoming o3 reasoning model.
“It accomplishes in tens of minutes what would take a human many hours,” OpenAI said about the agent, which is available through ChatGPT.
After a user gives the agent a prompt, it works independently to find, analyze, and synthesize information on the internet to generate “a comprehensive report at the level of a research analyst.” Deep Research can analyze text, images, and PDFs.
ByteDance unveiled an AI video generator this week called OmniHuman-1, which can generate realistic videos of humans from a single image and a motion signal, such as audio or video. OmniHuman-1 is a multimodal model, meaning it combines different types of input to generate its videos.
“Whether it’s a portrait, half-body shot, or full-body image, OmniHuman handles it all with lifelike movements, natural gestures, and stunning attention to detail,” ByteDance said.
OmniHuman-1 is still in the research phase and not yet available to the public.
Rideshare service Lyft (LYFT) and AI startup Anthropic announced a partnership this week to create AI-powered products for Lyft customers.
The partnership aims “to enhance the rideshare experience for its community of more than 40 million annual riders and over 1 million drivers,” Anthropic said in a statement.
Lyft has already deployed Anthropic’s Claude, via Amazon Bedrock (AMZN), in its customer care AI assistant to respond to support issues. Since the Claude integration, Lyft said its AI assistant has “reduced the average customer service resolution time by 87%.”
Additionally, Lyft will get early access to Anthropic’s AI models and technology for research testing, and Anthropic will train Lyft’s engineering organization.
Meta (META) shared its Frontier AI Framework this week, which outlines how it assesses risk when deciding whether to release an AI model. The framework follows the commitment Meta made at last year’s global AI Seoul Summit.
The framework is focused “on the most critical risks in the areas of cybersecurity threats and risks from chemical and biological weapons,” Meta said. That way, Meta said it “can work to protect national security while promoting innovation.”
The framework focuses on areas such as identifying potential catastrophic outcomes to prevent, modeling how bad actors could misuse frontier AI, and defining risk thresholds based on Meta’s threat-modeling exercises.
Programmatic advertising platform StackAdapt announced a $235 million growth capital raise this week. The round was led by Teachers’ Venture Growth, with participation from five other investors, including Intrepid Growth Partners.
The Canadian company uses AI and automation to provide advertising and marketing technology. The investment will go toward scaling its research and development and expanding globally.
“The challenges marketing teams face are vast and evolving rapidly,” Vitaly Pecherskiy, co-founder and CEO of StackAdapt, said in a statement. “Much of the pressure to drive growth rests on their shoulders as they work to reinvent operations and discover new ways to reach customers effectively, profitably, and predictably. To help them stay ahead of the curve, we are relentlessly focused on building the most advanced, intelligent, and automated platform to make their success inevitable.”