Adobe isn’t letting the fracas from earlier this year over its Terms of Service get in the way of introducing new generative AI features.
The company today announced a series of gen AI upgrades to Photoshop and Illustrator, its Creative Cloud desktop applications for editing and creating images, respectively, powered by its underlying Firefly Image 3 and Firefly Vector AI models.
We haven’t tested them ourselves yet, but Adobe’s Principal Designer for Machine Intelligence & New Technology Brooke Hopper and Principal Director Evangelist Paul Trani showed us some demos in a video conference last night, ahead of the news.
From my casual/”pro-sumer” perspective (I use Adobe Creative Cloud, namely Photoshop, for editorial work at VentureBeat and other projects), the features seem incredibly useful and time-saving. Here’s a rundown:
Generative fills, font matching, text conversion and automatic transformations in Illustrator
Adobe Illustrator is now equipped with several cutting-edge features aimed at enhancing productivity and creative control:
- Generative Shape Fill (beta): This tool allows designers to add detailed vectors to shapes by simply typing text into a new entry field (seen in the promo image provided by Adobe below). Driven by the Firefly Vector Model, it promises increased speed and variety while retaining precision, keeping the new fill imagery crisp, sharp and within the boundaries of any pre-existing, pre-drawn shape.
- Dimension Tool: This gen AI feature lets Illustrator users drag their cursor between two points of a design (say, a package or product) and see the actual dimensions displayed above them, even handling tricky areas such as corners, curved flags and cut-outs.
Hopper said the feature would help designers avoid the so-called “Amazon effect…when you order something on Amazon and you get it and it’s definitely not the right size.”
- Mockup (beta): Facilitates the creation of high-quality visual prototypes by adjusting art to fit real-life objects, such as product packaging and apparel.
This feature blew me away in the demo: it lets Illustrator users drag logos and other designs onto a product in a 2D image, say, a baseball hat, mug or flexible food package, and it automatically, nearly instantaneously transforms the design to fit the contours and shape of the underlying product.
It even works on “an image which has as little information as possible…no horizon line, no background colors, nothing,” said Trani as he demoed the feature for us.
Illustrator also adds:
- Text to Pattern (beta): Quickly generates customized vector patterns from text prompts, streamlining the creative process.
- Style Reference: Allows for easy editing and scaling of vector graphics, including subjects, scenes, and icons, in a designer’s unique style.
- Enhanced Selection Tools: Improve precision and efficiency when selecting objects, particularly in intricate and crowded designs.
- Retype: This new feature automatically detects text within an image (even if it was not added in Illustrator or a text field) and tries to match the font. It can automatically turn text in static images into editable text in a text box/field in that matching font, so you could, say, grab a still from “Star Wars” and retype the title to be whatever you wanted for a parody.
Selection and adjustment brushes, image generation and type improvements in Photoshop
Photoshop users can look forward to a range of new tools designed to simplify complex edits and empower creators:
- Selection Brush Tool: Makes selecting, compositing, and applying filters more intuitive, enhancing overall workflow.
- Adjustment Brush Tool: Applies non-destructive adjustments to specific areas of images, providing greater control over edits.
- Generate Image: Announced earlier, this jumpstarts ideation and creation with enhanced creative control, powered by the Adobe Firefly Image 3 Model.
- Enhancements to the Type Tool: Introduces faster and less manual ways of creating bulleted and numbered lists, along with improvements in the Contextual Taskbar for shapes and rotating objects.
Both Illustrator and Photoshop benefit from a new Contextual Taskbar that automatically changes to include relevant buttons, related imagery and next steps in the design process, making it easier for users to navigate their projects.
The features are available in the latest versions of both desktop apps starting today.
Customer commitment
Adobe says it is committed to integrating AI in a way that respects and supports the creative community.
The Firefly AI models are designed to be safe for commercial use, trained only on public domain content and on content Adobe already owns or licenses directly from creators, such as images uploaded to Adobe Stock.
However, as VentureBeat previously reported, some Adobe Stock creators weren’t happy that Adobe elected to train its Firefly family of image generation and modification models on their work, especially since that use wasn’t expressly outlined in Adobe Stock’s terms. Yet Adobe has previously pointed out to us and others that its licensing terms did permit broad usage of new technologies on content submitted to Adobe Stock specifically.
Earlier this year, creatives and Adobe Creative Cloud users balked when Adobe introduced new Terms of Service that, in their eyes, seemed to allow it even broader access to users’ content, including content they were editing or working on confidentially for clients within Adobe programs and had never uploaded to the web. Adobe clarified repeatedly that it was only accessing this content to enable AI features to modify it, not to train on it, but ultimately revised its terms to make this even more explicit.
Once again, in the video conference with VentureBeat, Adobe recommitted to not training on customer content unless it is uploaded to Adobe Stock.
“We did not train on your content unless you intentionally submitted it to Adobe Stock,” Hopper said. “That would be the only time that we train on your content. We are putting creators at the forefront of all of this — you are in control of what goes into the model.”