Adobe announced a series of AI initiatives at its annual conference, Adobe Summit, on Tuesday, including a new set of tools that can generate images on demand using only text prompts.
The initiatives are part of a broad effort by Adobe, the software giant behind popular creative apps like Photoshop and Illustrator, to inject more artificial intelligence into its creative products. They include a new tool called Firefly that will allow users to create images just by entering text descriptions into the software.
Firefly is powered by generative AI, the same broad class of technology that companies like OpenAI have used to build systems that can generate images, video and speech. The tool is part of Adobe Sensei, the company’s AI platform that was launched in 2016 and that powers features across Adobe’s cloud services.
Firefly is similar to other generative AI tools like DALL-E and Stable Diffusion that have emerged in recent years and enable users to create novel and realistic images with minimal effort. However, Firefly is integrated with Adobe’s products and services, allowing users to access and edit the generated images within applications like Photoshop or Illustrator.
“Generative AI is the next really important evolution in AI-driven creativity and productivity,” said Ashley Still, SVP of the creative product group at Adobe in an interview with VentureBeat. “Hundreds of millions of people are already using features that leverage AI throughout our products.”
In addition to Firefly, Adobe also introduced new capabilities for its customer data platform (CDP), a tool that helps organizations manage and analyze customer data. The CDP now supports generative AI for content creation and marketing optimization, using both Adobe’s own Sensei technology and models from third-party partners.
Adobe takes direct aim at copyright issues with Firefly
Firefly is a new family of creative generative AI models being launched by Adobe, according to Still. The first model is focused on image generation and is designed to be safe for commercial use. She explained that the first Firefly model was built and trained using licensed Adobe Stock images.
The rise of powerful generative AI models that can create images has ignited debates over copyright and ownership. Services like DALL-E and Stable Diffusion are trained on millions of existing images, raising questions about whether their users can claim ownership over images the AI generates. For example, there is a pending lawsuit between stock image vendor Getty Images and Stability AI, the creator of Stable Diffusion.
Concerns over ownership and copyright are being directly addressed with the launch of Adobe’s Firefly service. Still noted that the Adobe Stock service already has hundreds of millions of high-resolution images that are properly licensed. Going a step further, Still said Adobe has a plan to compensate Adobe Stock contributors for their images that are used as part of the Firefly service.
Still noted that, in the future, the Firefly service will also enable individuals and organizations to train the model on their own custom images or styles.
Mastering generative AI with Firefly
Aside from the core image library on which Firefly has been trained, several other components enable the service.
Firefly integrates natural language processing (NLP) technology to be able to analyze and understand the user text prompt that is used to generate an image. The underlying Sensei AI models also include capabilities for understanding different creative styles that a user might want to utilize.
The image itself is generated by a diffusion model. Firefly also has a highly efficient upscaling model that takes the image from its generated size and renders it at higher resolution.
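Firefly’s internals are not public, but the three-stage flow described above — prompt understanding, diffusion-based generation, then upscaling — can be sketched in outline. The sketch below is purely illustrative: every function and class here is a hypothetical stand-in, not an Adobe API, and the real diffusion and super-resolution models are of course far more involved.

```python
from dataclasses import dataclass

@dataclass
class GeneratedImage:
    width: int
    height: int
    subject: str

def parse_prompt(text: str) -> dict:
    # Hypothetical NLP stage: pull the subject and a style hint out of the prompt.
    subject, _, style = text.partition(" in the style of ")
    return {"subject": subject.strip(), "style": style.strip() or "default"}

def diffuse(spec: dict, size: int = 512) -> GeneratedImage:
    # Stand-in for the diffusion model, which in reality iteratively denoises
    # random noise over many steps, conditioned on the parsed prompt.
    return GeneratedImage(size, size, spec["subject"])

def upscale(img: GeneratedImage, factor: int = 2) -> GeneratedImage:
    # Stand-in for the separate super-resolution model that raises the
    # generated image to its final output resolution.
    return GeneratedImage(img.width * factor, img.height * factor, img.subject)

image = upscale(diffuse(parse_prompt("a red fox in the style of watercolor")))
print(image.width, image.height)  # 1024 1024
```

The point of the split is that generation and upscaling are separate models: the diffusion model works at a fixed, smaller resolution where denoising is tractable, and the upscaler handles the final resolution independently.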
Going a step further, Adobe is also aiming to help prevent bias and harm in the content Firefly generates. According to Adobe, the underlying AI models analyze both the prompts and the content to ensure Firefly generates a wide variety of images that represent a balance of cultures and ethnicities, while also ensuring Firefly does not generate harmful images.
The ability to be transparent about how images are created is also part of Firefly. Back in October 2022, Adobe committed to transparency in the use of generative AI, with its Content Authenticity Initiative (CAI) standards. The CAI is an effort to have a content credential, which is basically metadata written directly into the content files that details how the image was created.
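A content credential is, in essence, structured provenance metadata bound to the content it describes. As a rough, simplified illustration (not the actual CAI/C2PA format, which uses signed, cryptographically verifiable manifests embedded in the file), such a record might look like this:

```python
import hashlib
import json

def make_content_credential(image_bytes: bytes, tool: str, actions: list) -> dict:
    # Simplified, unsigned stand-in for a content credential: it records which
    # tool produced the image and what actions were taken, plus a hash of the
    # image bytes that binds the record to that exact content.
    return {
        "claim_generator": tool,
        "actions": actions,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }

credential = make_content_credential(
    b"\x89PNG...",  # placeholder image bytes for the example
    tool="ExampleEditor 1.0",
    actions=["created", "generated with AI"],
)
print(json.dumps(credential, indent=2))
```

Because the hash is derived from the image bytes, any later modification of the image breaks the link to the credential, which is what makes this kind of metadata useful for disclosing when and how generative AI was involved.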
Adobe Experience Cloud gets a new Sensei
Adobe is also bringing new AI services to its Adobe Experience Cloud platform.
Still explained that the Adobe Experience Cloud is focused on marketers and helping them with business use cases. To that end, she noted that Firefly will be of use in generating images for campaigns. The company is also planning on adding text and creative copy generation in the future.
“Some of the models that we’ll be leveraging, in the near term, are the Azure OpenAI Service and the open-source FLAN-T5,” she said.
The overall goal for Adobe is continuing to integrate generative AI capabilities across its services to help end users — whether they are creative or marketing people — do their jobs effectively.
“What’s exciting is clearly the technology in this area just continues to evolve incredibly quickly and every day we have new ideas for how we can incorporate the technology into our products and services,” Still said.
The post Adobe bets on generative AI with ‘Firefly’ tool to create images from text appeared first on Venture Beat.