Live: Watch as Adobe unveils what could be the biggest updates to Photoshop, Lightroom this year
Adobe Max is here, which means big changes are coming to Lightroom, Photoshop, Premiere Pro, Firefly, and more
Adobe's annual creativity conference is here – which means the software giant is unveiling some of the biggest updates across programs like Photoshop, Lightroom, Premiere Pro, and Firefly.
On October 28, Adobe kicked off Adobe Max with a long list of announcements, including AI culling in Lightroom, the ability to customize Firefly models with your own photos, agentic editing in Photoshop, and more. Follow along with the keynote and other Adobe Max events for the latest news.
I'm on site in Los Angeles, California, at Adobe Max. Follow along with me as Adobe shares what's coming next.
That’s a wrap on the day one Adobe Max keynote. Stay tuned for further updates. Max includes a second keynote on Wednesday morning, but I’m most looking forward to tomorrow night’s Sneaks. That’s when Adobe teases the behind-the-scenes tech that may (or may not) be coming to Adobe software in the future.
One last Sneak before the day one keynote ends: Project Graph is a node-based tool for building creative workflows, integrating features from several different apps into a single workflow in one place.
Users can drag and drop in a new reference image to take it through the same editing workflow again, automating a process that would typically require opening multiple apps.
In the demo, Adobe showed how users could build workflows spanning multiple apps, including Photoshop and Firefly. The example workflow took a reference image through several steps: removing the background in Photoshop, generating a composite with a Partner Model, and outputting a video.
The workflow’s “capsules” can also be opened and worked on in the individual apps themselves, giving users more flexibility and control through things like Photoshop’s native tools.
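Adobe hasn't shared how Project Graph works under the hood, but a node-based workflow is conceptually a graph of processing steps that any new input can be pushed through. Purely as an illustration (the step names below are invented, not Adobe's), a chain like the one demoed might be modeled along these lines in Python:

```python
# Each "capsule" is a node: a named processing step. A real node-based
# tool would be a full graph with branching; this sketch simplifies it
# to a linear chain, which is enough to show the rerun automation.

def remove_background(asset):
    return f"{asset} -> background removed"   # stand-in Photoshop step

def generate_composite(asset):
    return f"{asset} -> composited"           # stand-in partner-model step

def output_video(asset):
    return f"{asset} -> video"                # stand-in video-output step

PIPELINE = [remove_background, generate_composite, output_video]

def run_workflow(asset, pipeline=PIPELINE):
    """Push an asset through every capsule in order."""
    for step in pipeline:
        asset = step(asset)
    return asset

# Dropping in a new reference image just reruns the same chain:
print(run_workflow("reference_a.jpg"))
print(run_workflow("reference_b.jpg"))
```

The point of the graph structure is exactly that last part: once the steps are wired up, swapping the input reruns the whole workflow without reopening each app.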
Here’s one for the photographers struggling to gain traction online. Project Moonlight is an Adobe Sneak for a chatbot designed to help creators on social media.
Project Moonlight integrates directly with Lightroom, so users can import images straight into the chatbot and ask it to brainstorm post ideas. In the demo, the chatbot came up with three ideas, and Adobe used the AI to take one even further, generating elements like overlays on the image.
Project Moonlight can be integrated with Instagram, which is designed to help the chatbot understand how your posts perform and what resonates most with your audience. That also allows it to answer questions about engagement and metrics, and to use that data to strategize ideas that stay on brand with previous posts.
Project Moonlight can also apply saved presets to images right in the chatbot. Users can then take the generations into Photoshop or Firefly Boards; in Boards, Adobe demoed applying some of the AI-suggested overlays and ideas.
As a Sneak, Project Moonlight isn't yet available, but it's something Adobe has in the works for a future Firefly update.
Additional Premiere Pro tools announced today include new Film Impact tools, which are GPU-accelerated presets for transitions and effects.
The Premiere Pro demo also leaned heavily on Adobe Firefly. Adobe showcased taking a video of a skateboarder from Premiere Pro into Firefly to change the ending by adding a new trick. The demo is particularly impressive because AI tends to have the toughest time creating movement that still feels natural.
Firefly’s new soundtrack generation also integrates with Premiere Pro, giving video editors a way to quickly create a custom AI-generated soundtrack.
Premiere Pro is gaining AI-assisted masking.
One of my favorite AI tools in Lightroom is AI masking, and now similar tools are coming to video editing. During Adobe Max, Adobe demoed AI masking for Premiere Pro, a feature that was previously a Sneak.
The AI-powered Object Mask follows the subject as it moves, maintaining the mask throughout the video. Adobe demoed the feature with a skateboarder, and the tool looks quite impressive.
Premiere Pro is getting an Auto Bleep tool.
Auto Bleep is a new beta tool that allows video editors to censor words by highlighting them in the text panel. Users can create a list of censored words, and Premiere Pro will automatically bleep those words out. It also works with custom sound effects, in case you want cuss words to become a duck quack instead of a bleep.
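Adobe hasn't described Auto Bleep's implementation, but a tool like this presumably works from the word-level timestamps in the transcript. Here's a minimal Python sketch of the general idea, mine rather than Adobe's: given per-word timing and a censor list, it overwrites each flagged word's audio with a tone:

```python
import numpy as np

SAMPLE_RATE = 48_000  # assumed sample rate for the clip

def bleep_words(audio, transcript, censored, tone_hz=1000.0):
    """Overwrite each censored word's time span with a sine-tone bleep.

    audio      -- 1-D numpy array of mono samples
    transcript -- list of (word, start_sec, end_sec) tuples
    censored   -- set of lowercase words to bleep
    """
    out = audio.copy()
    for word, start, end in transcript:
        if word.lower().strip(".,!?") in censored:
            a, b = int(start * SAMPLE_RATE), int(end * SAMPLE_RATE)
            t = np.arange(b - a) / SAMPLE_RATE
            out[a:b] = 0.3 * np.sin(2 * np.pi * tone_hz * t)  # the bleep
    return out

# Usage: bleep the second word of a one-second clip
transcript = [("well", 0.0, 0.4), ("darn", 0.5, 0.9)]
audio = np.zeros(SAMPLE_RATE)  # stand-in for real decoded audio
censored_audio = bleep_words(audio, transcript, {"darn"})
```

Swapping the sine tone for a decoded sound effect sample would give the duck-quack behavior described above.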
Lightroom is gaining AI-assisted culling.
Perhaps the tool I’m most excited about from Adobe Max is Assistive Culling, an early access beta feature that allows photographers to automatically sort through photos and reject those with blinks and soft focus. Photographers can then review the results to make sure those rejects really are rejects.
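Adobe hasn't said how Assistive Culling detects soft focus, but a classic heuristic for sharpness is the variance of the Laplacian, an edge-detail measure. Here's a rough Python sketch of that general approach, purely illustrative and not Adobe's method:

```python
import numpy as np

def sharpness_score(gray):
    """Variance of the Laplacian, a classic soft-focus heuristic.

    gray -- 2-D numpy array of grayscale pixel values.
    More fine detail produces a higher variance (sharper image).
    """
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def cull(images, threshold=50.0):
    """Split (name, gray) pairs into selects and rejects by sharpness."""
    selects, rejects = [], []
    for name, gray in images:
        (selects if sharpness_score(gray) >= threshold else rejects).append(name)
    return selects, rejects
```

The threshold here is arbitrary; a real tool would calibrate it, and blink detection would need a face model on top of this.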
From there, photographers can use batch actions, such as adding a flag or a star rating to the selects. A single check mark applies a rating, flag, or other setting to either all the selects or all the rejects.
The tool will also stack similar images, automatically grouping each series and putting what the AI thinks is the best shot at the top of the stack. When a photo is taken into Photoshop, the edited version is added to the top of the stack.
Lightroom is also gaining some new distraction and reflection removal tools.
The distraction removal tool will also remove people in one click. The tool is designed for tasks like removing tourists from vacation photos.
Reflection removal, first introduced in Adobe Camera Raw, is also coming to Lightroom. The slider uses AI to remove reflections, but take it the other way and you can enhance the reflection instead.
Lightroom is getting an automatic dust spot removal tool.
In a demo of the new Lightroom CC features, Terry White demonstrated a new AI dust spot removal tool that both finds and removes the dust spots for you. As a photographer who has dealt with dust spots before, I’m pretty geeked about this feature.
Generative upscale is coming to Photoshop, along with third-party models in Generative Fill. That brings some previously announced beta features into the full version of Photoshop, beginning today.
In the Photoshop demo, Adobe demonstrated a 2x upscale on an image generated in an earlier demo, using Photoshop’s AI upscaling to add more resolution.
Adobe also demoed the feature on a scan of an old family photo, using Topaz as the model to preserve faces. The generative upscale seemed to preserve them, including a blinking, bored child who still looked blinking and bored.
The demo also included some design features, including Dynamic Text, which automatically scales text so that each line is the same length, readjusting as the creator alters the text box.
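Adobe didn't detail how Dynamic Text works, but the per-line scaling idea is simple enough to sketch. In this rough Python illustration (my gloss, using a crude fixed-width text model instead of real glyph metrics), each line's font size is scaled by the ratio of the target width to the line's natural width:

```python
def fit_lines(lines, target_width, base_size=24.0, char_width=0.6):
    """Return a font size per line so every line renders at target_width.

    Uses a crude fixed-width model where one character is roughly
    char_width * font_size wide; a real layout engine would measure
    actual glyph widths instead.
    """
    sizes = []
    for line in lines:
        natural_width = max(len(line), 1) * char_width * base_size
        sizes.append(base_size * target_width / natural_width)
    return sizes

# A short line gets a big size and a long line a small one, so both
# come out the same width:
print(fit_lines(["ADOBE", "MAX DAY ONE"], target_width=300))
```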
In a composite, Adobe demonstrated mixing tools like background removal with Adobe’s precise selection tools. The Harmonize tool, previously in beta, is a one-click button on the AI toolbar that matches things like lighting and shadows when compositing images. In the demo, the AI even gave the astronaut added to the scene a shadow, along with adjusting the coloring and lighting to better match the background.
Generative Fill is also adding support for Partner Models. In another image, Adobe used a selection brush to control where Generative Fill added new elements, using Nano Banana as a Partner Model.
Photoshop also now has a send-to-Firefly option; in the demo, Adobe used it to take the image into Firefly and generate a video with Firefly’s new video tools.
Adobe has announced a partnership with YouTube that allows Premiere Mobile to send videos right to YouTube Shorts.
The Premiere Mobile app will have YouTube Shorts templates, and viewers on Shorts will also be able to create a new video from a template inside the app.
The newly announced Premiere Mobile app for iPhone is here, and Adobe is showing off some of its features in a demo.
For a live voiceover, Adobe asked the audience to create some background noise. With the new Enhance Speech tool, the app removed that background noise, and the before-and-after showed quite an improvement.
Premiere’s infinite tracks allow creators to layer in photos and stagger how and when they appear in the video.
The app’s image-to-video feature allows users to generate a video from a photo using AI.
Sound effect generation is also built into the app, allowing users to mix their voice with a text prompt to create a custom sound effect timed to match the video.
Looks are like Lightroom presets for color grading, but the mobile app also has a handful of color grading tools for fine-tuning.
Finally, the demo showed text effects, including highlighting words as they are spoken in the voiceover. The results can be brought into Premiere Pro on desktop for further editing.
The Premiere Mobile app is already available to download for free in the App Store, with an Android version in the works.
Adobe Firefly is getting a new video editor in beta. In a demo, Adobe showed how the editor is integrated within Firefly, including moving back and forth between the video editor and Firefly’s image editor, as well as audio editing.
In the demo, Lucy Street uploaded a generated edit of an image as a reference, chose a style, then sent it to the Generate Video workspace. She then typed in a prompt, moving from the sketch style to a realistic image.
The video editor has a properties panel with controls like speed, duration, opacity, and scale, and the timeline looks fairly similar to Premiere Pro’s.
The video editor also works with Firefly’s speech enhancement and background noise controls, and the before-and-after in the demo is quite impressive.
Street then demoed how the AI can highlight the pauses in speech and delete them from the video. The video can also be edited by deleting text from the AI-generated transcript, using text-based editing.
Firefly also now has audio capabilities to help create soundtracks for videos.
In the demo, Street used the video generator to take a mural of a turtle and animate it, making it appear like the turtle is crashing through the building in the final video.
The new video editor is in public beta, with the beta version beginning to roll out today. Through December 1, Adobe CC users have unlimited image and video generations with Firefly to try out the new features.
Firefly Boards is Adobe’s platform for creating mood boards and brainstorming ideas, such as generating concepts for a photo shoot.
Now, Boards is gaining presets, which mix an image with a different style. Restyling generates new images based on the selected style.
Collaboration is also coming to Boards, letting creators invite others to edit and chime in on ideas.
Boards also supports Partner Models such as Gemini 2.5, allowing users to mix multiple images together to create a new generation depicting an idea.
Creative Cloud users can open graphics generated in Boards in other apps, and Firefly's web-based editing tools can also be used to edit those generations.
Adobe Firefly Image Model 5 is here. Adobe says it excels at generating realistic images with lifelike textures and lighting, and that images are generated at a native 4MP before upscaling.
The model can also edit an image via a prompt, tweaking one element while keeping the rest of the image consistent. Adobe says it's designed to change as much as you want, while leaving the rest of the image just as you want it.
Partner models allow creators to choose AI platforms from other companies and switch back and forth between them.
Is Photoshop about to be integrated into ChatGPT? Adobe just demoed Express arriving in ChatGPT, but there’s a hidden Easter egg here: the drop-down menu shows the Photoshop icon too, perhaps hinting at future Photoshop integration in ChatGPT.
Agentic AI is coming to a number of Adobe apps, including Photoshop, bringing a conversational assistant that lets creators use natural language to ask the AI to carry out a specific task: type in a prompt, and the AI does the work.
These tools are made to work alongside each app's native features, allowing creators to keep full creative control. But the natural language element may also help novice creators work inside advanced apps without knowing every tool.
In Photoshop Labs, Adobe demoed the tool by asking the AI to increase the brightness of everything but the subject. The result is created on its own layer, so the edit is non-destructive, and creators still have the usual Photoshop tools to perfect the results.
The AI assistant can also be used to ask for advice right inside Photoshop. In the demo, Adobe asked the AI to review a design layout; it suggested more contrast and even offered suggestions for how to fix it.
That conversational chatbot could potentially help new users accomplish tasks they don't yet know how to do on their own.
Adobe also demoed asking the AI to rename all of a document's layers, much to the delight of the audience. The AI does a visual analysis and renames each layer based on what's inside it.
Adobe is introducing Custom Models, which are personalized versions of Firefly trained on your assets. Announced earlier today, Adobe is now demoing just what that looks like.
Users drop in reference images of their own, at least 10 of them, and load them into a custom model. The AI then scans, tags, and captions the content to understand the style.
Then, in Firefly, creators can choose that custom model from the drop-down menu. Creators can also have multiple styles and multiple models for each.
Those assets can then be opened in Photoshop to refine the details and customize what the AI generated in Firefly, including mixing multiple elements into a collage.
Adobe says the private beta will be available in the coming days.
Adobe has announced a deep integration of Google models into CC and Firefly apps.
Eli Collins, VP at Google DeepMind, says Nano Banana has generated over 5 billion images since launching two months ago.
David Wadhwani, president of Adobe's Digital Media business, says there is now five times more demand for content and that 77 percent of creative and marketing teams are hiring.
This is coming in the age of AI, and to meet creators at this intersection, Wadhwani says Adobe is focusing on three things: continuing to deliver Adobe's own Firefly models, integrating partner models from third-party platforms, and allowing creators to customize and create their own models.
The first two aren't a surprise, as Firefly has been around for a while and Adobe has already integrated models like Nano Banana. But Adobe just announced the ability to customize your own AI models this morning, launching in private beta in Firefly. It allows users to feed the AI their own images to get results more tailored to their specific style.
Here's an interesting statistic: Two out of every three creators using the beta version of Photoshop use generative AI in their workflow every day.
Shantanu Narayen, Adobe CEO, is opening the keynote with a statement on AI and "creativity as a universal language."
"Our vision for Adobe Firefly is to make it your one stop destination for creative workflows," he says.
