The best AI image generators allow you to quickly create images from text descriptions. They're controversial, and the technology is evolving so fast that it's hard to know where it will take us, but one thing seems certain: anyone working in the visual arts will need to know about them, photographers included.
AI capabilities already exist in popular photo editing software like Photoshop, Lightroom and Luminar Neo. It's through machine learning that these tools can detect the sky or subject in an image, remove unwanted objects or adjust facial features. But the best AI image generators can create a whole image from scratch based only on a text prompt.
These machine-learning models have been trained on vast datasets of millions of images and captions, usually trawled from the web. Most work in a similar way. You write a text prompt describing the image you want to create, set any parameters and the model does its thing. To learn more about them, see our AI image generators FAQ. In the meantime, read on to discover the best AI image generators available today.
The best AI image generators
Why you can trust Digital Camera World
Our expert reviewers spend hours testing and comparing products and services so you can choose the best for you. Find out how we test.
We think DALL-E 2 is the best AI image generator for most people who want to start exploring the technology. It's the best-known of the current batch of tools, and it's capable of producing stunning results, including photorealistic images with incredible detail. It's also very easy to use.
You need to create an account, and you'll need to buy credits if you want to use it regularly, but getting started is super quick and the main text-to-image function is intuitive enough that you shouldn't need to go searching for tutorials. The actual process of image generation is also relatively quick.
As with all of the tools on this list, we tested DALL-E 2's ability to create a range of different types of imagery. That included using prompts that included specific cameras and lenses to try to obtain photorealistic results. We found that DALL-E 2 can produce extremely clean images that would be hard to differentiate from photographs, and the results often fit what is described in the prompt.
DALL-E 2's text-to-image generator may be a little too limited if you want more control – there's no option to change the size or aspect ratio of the 1024x1024 canvas, and no option to add a negative prompt (see Stable Diffusion below). On the other hand, its inpainting and outpainting editing features are among the most advanced.
The former lets you paint over part of an image and have the AI generate something else in its place. The latter allows images to be "uncropped", expanding the picture beyond the original frame. This could even be useful for photographers who cropped an image too far or didn't have a wide enough lens to capture the ideal composition.
DALL-E 2 is no longer free, but you get enough free credits to start with to be able to get a decent idea of how it works and what it can do. After that, you get 15 free credits each month. They don't go far but at least let you continue to experiment. More credits are fairly affordable to buy – just bear in mind that you're likely to generate a lot of images that you don't want along the way.
For years now, Adobe has been using AI to make tools such as Photoshop and Lightroom easier to use. So it's not surprising that it wanted to get in on the AI image generation game. However, this does pose an existential problem for the company.
After all, Adobe makes most of its money by selling Creative Cloud subscriptions to creative professionals like photographers and artists. Yet these are the exact people whom AI image generators threaten to put out of business. So Adobe has come up with a cunning plan to square the circle.
Essentially, Adobe is positioning Firefly as the first 'ethical' AI image generator. That's because it's only trained on images in the Adobe Stock library, where the contributors have allowed them to be studied, and content in the public domain. Thus Adobe has sidestepped any claims that it's infringing people's copyright.
Another part of Firefly's special sauce is that it integrates nicely with Creative Cloud tools such as Photoshop and Adobe Express. That said, you'll need a Creative Cloud subscription to take advantage of all that. For more details, see our article on How to make Firefly work for you.
The best AI image generator that you can use for free is Stable Diffusion. However, it requires a bit of technical know-how to do so. As an open-source program, the code is freely available on GitHub, which has made it a hit with developers looking to incorporate AI image generation into their own apps. If you have no idea what to do with the code, you can also run Stable Diffusion for free via Google Colab (you'll need to click 'Connect', and then click play on 'install the dependencies' and on 'run the app').
You can avoid this hassle by using Stable Diffusion via Stability AI's web app, DreamStudio, which is almost as clean and intuitive as DALL-E 2 (but like DALL-E, it requires you to buy credits). In either case, we found that Stable Diffusion is very close to DALL-E 2 in terms of the range of image styles it can produce, and perhaps even has the edge when it comes to photorealism.
It also offers more control than DALL-E 2 currently provides, with a flexible aspect ratio, the ability to upscale resolution to 2048x2048 and the option to add a negative prompt specifying what you don't want to appear in the image. You can also set the seed, a number that controls the randomness of a generation, which means it's possible to create the same image again when using the same prompt (other generators can produce totally different results each time even if you use the same prompt).
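The seed mechanism is easier to grasp with a toy sketch. This is a conceptual illustration of seeded pseudo-randomness in Python, not Stable Diffusion's actual code: the 'noise' driving generation is derived deterministically from the prompt and the seed, so the same prompt-and-seed pair always reproduces the same output, while a new seed gives a new result.

```python
import random
import zlib

def generate(prompt: str, seed: int, size: int = 4) -> list:
    """Toy stand-in for an image generator.

    Derives a deterministic pseudo-random stream from the prompt
    and seed, so identical inputs always yield identical 'images'.
    """
    # crc32 gives a stable hash of the prompt across runs
    rng = random.Random(zlib.crc32(prompt.encode()) ^ seed)
    return [round(rng.random(), 6) for _ in range(size)]

a = generate("a red fox at sunset", seed=42)
b = generate("a red fox at sunset", seed=42)
c = generate("a red fox at sunset", seed=7)
# a == b: same prompt and seed reproduce the same output
# a != c: changing only the seed changes the output
```

Real generators work the same way at a much larger scale: the seed fixes the initial noise tensor, and everything downstream is deterministic from there.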
We were also impressed by the depth-to-image tool, which can infer the depth in a composition of an existing image and transfer it to a new creation. If you're prepared to take the time to work things out and learn some new terminology along the way, Stable Diffusion is the most flexible AI image generator available.
If you use it in DreamStudio, it costs $10 for 1,000 credits, which is enough to generate around 5,000 images at default settings (the higher the resolution and number of steps in the generation, the higher the cost).
If you've seen AI art depicting dark fantasy scenes and futuristic landscapes in an almost painterly style, the chances are it was created in Midjourney. It's hard to tell if this is because of the characteristics of the tool or the community that uses it – it's probably a mix of both. We found that it seems to have a more limited range in the styles of image it can produce, and it does best with painterly styles. However, with some perseverance we also managed to get impressive photorealistic images.
Like Stable Diffusion, Midjourney isn't the most intuitive tool for those used to traditional desktop or browser-based apps. After registering for the beta on the Midjourney website, you'll need to join the video game-oriented social messaging platform Discord. Instead of typing into a prompt box like with most other AI image generators, you send a query to a Midjourney bot in Discord.
Initially, this feels like writing to a chatbot in a room full of other people doing the same thing. There are several channels, and you can choose any that the bot is in (look for one of the 'newbie' channels). You use the slash command '/imagine' and then type your prompt. Everyone else in the channel will be able to see your request and the results, and other people will be making queries too. This means yours will move up the feed before jumping to the end again once it's finished rendering.
This can make the tool awkward to use since your queries can get lost in the sea of requests (they'll be highlighted orange, and you can always find the images in your account profile). It can also be slow if the channel is busy. But the advantage of this mechanism is that you get to see other people's prompts, which is a great way to learn. If you pay for a subscription, you can avoid this by using a bot privately on your own Discord server. More expensive subscription plans give you more fast generations, and the $48-$60/month 'Pro' plan permits use of a 'Stealth' command, which stops your images appearing in the member gallery.
Like DALL-E 2 and Stable Diffusion, Midjourney also lets you upload your own images to use as references for compositions, but here too, we found the process to be more convoluted than with other tools. We also found that the model seems to take less notice of the source image – something that can be fine-tuned in Stable Diffusion. Basic membership, good for up to 200 images, costs $10 a month. Unlimited images will cost you $30 a month. Pricing is lower if you opt to be billed annually.
The best AI image generator for those who want to learn how the tech works with no fuss and no payment is Craiyon. There's no need to create an account, no need to run any code and no talking to chatbots. Just go to the website, type what you want in the big box, and Craiyon will get to work. It couldn't be any easier.
The downside is that the resulting images can be strange, glitchy and sometimes just plain frightening. Formerly known as DALL-E mini until OpenAI had words, Craiyon has almost become a genre in itself thanks to its tendency to create mangled images, particularly human faces, but that could change: it says it's working on a better image encoder.
The unreliable results aren't reason to write Craiyon off. We found that it's capable of turning up surprises that look quite reasonable. It's also surprisingly diverse in its output, which could make it a springboard for new ideas. That said, its features are limited. There's no inpainting or outpainting and no image-to-image generator. The only thing you can do other than create an image from a text prompt is have T-shirts printed with your designs, should that really take your fancy.
Artbreeder is a different kind of beast from the best AI image generators we've mentioned so far. It's based on different technology for a start, using generative adversarial network (GAN) models rather than diffusion. But its interface and what it can do are quite different too. It has two distinct tools: Artbreeder Splice and Artbreeder Collage. The former lets you remix – or 'gene edit' – photos, either those that are already on the site or original images of your own.
This tool has some quirks. It can only handle portraits and landscape photos at the moment (support for other types of image is said to be coming), and images need to be very clean and of high resolution. If you upload a photograph to DALL-E 2 or Stable Diffusion, it will look the way you expect it to, at least until you start generating variations. But even clear, high-resolution portrait shots can end up full of artefacts once they're uploaded to Artbreeder. The subject really needs to be well lit, looking face on and have a clean background.
Find a photo that Artbreeder likes, though, and you can make all sorts of tweaks, changing hair length and colour, facial expression, gender and age in portraits, or changing the amount of vegetation, water or weather conditions in landscapes. We find it can be a lot of fun to play around with, and you can use it to create amusing transformations of selfies. Some people have found professional uses for it too. The designer Daniel Voshart uploaded photos of busts of Roman emperors to Artbreeder Splice and turned them into photorealistic images that he now sells prints of.
The second tool, Artbreeder Collage, is a text-to-image generator combined with a collage maker. That's as strange as it sounds, but it kind of works, and it's interesting that it doesn't depend only on text like the tools above. You draw or drag and drop shapes and images onto the canvas (which can include photos of your own, which you can upload), and then you type a text prompt. We found it works well for creating images that look like illustrations – we uploaded a photo of a hummingbird, placed it over an image of a river and asked for a Van Gogh painting, and it delivered a clean if rather cartoonish image that was clearly a pastiche of Van Gogh's style. Achieving photorealistic results is more difficult.
Most of the other AI image generators currently offering open access are based on the first two models in our list. That is, they use DALL-E 2 or Stable Diffusion and add their own UI, and sometimes further training in specific types of imagery, on top. NightCafe Creator is an interesting option because it allows you to choose between several models, including both DALL-E 2 and Stable Diffusion, as well as its original VQGAN+CLIP model and the more coherent CLIP-guided diffusion model.
We found those earlier models to be a bit hit and miss. In the words of NightCafe itself, the results from its original model "don't seem to obey the laws of physics", in that subjects might end up floating in the sky, for example. The coherent model is more reliable but still better for artsy creations than photorealism. The DALL-E 2 and Stable Diffusion-powered generators are, as we'd expect, more reliable. Using them in NightCafe offers some benefits but loses others.
NightCafe holds our hand more than these models' own UIs. It lets you choose a type of image to generate, for example, although you can turn on 'advanced mode' to get more flexibility. What we lose are newer editing features like inpainting and outpainting and Stable Diffusion's depth-to-image tool. And it's not free. You get only five credits to start with – enough for just five generations – although these get topped up each day, and you can earn more by completing certain tasks and challenges. If you want to purchase more, you can buy packs or subscribe from $9.99 per month for extra benefits.
How we test
We asked each tool to produce a range of different kinds of images, from illustration to photorealism, using text prompts. We also tested their image-to-image generating tools and editing features where they exist. The technology is evolving so fast that the features available can expand from one month to the next, and there are likely to be more options appearing soon, but for now these are the best AI image generators that we've tested.
AI image generators FAQ
How should I choose the best AI image generator?
There are several things to consider to choose the best AI image generator for you. They include what you want to use it for, how much time you want to spend getting set up, what kind of results you expect and whether you're prepared to pay for it.
If you've never used an AI image generator and want to very quickly see how they work, you can jump into Craiyon immediately and experiment all you like. For the best balance between the ease of use and quality of results, however, we'd suggest trying DALL-E 2, which is capable of producing impressive photorealistic images.
DALL-E 2 also has powerful editing features known as inpainting and outpainting. The first lets you paint over parts of an image to remove them and replace them with something else using the AI. Outpainting lets you 'uncrop' an image, expanding it beyond its original borders. Stable Diffusion offers higher resolution and more control, and it can be used for free, while Midjourney is impressive when it comes to particular styles and it has a strong community.
How do the best AI image generators work?
Most of the best AI image generators are based on machine-learning models that have been trained to recognise the relationship between images and text. You type in a short text prompt describing what you want to create, and the AI model will attempt to create that image based on the images and captions it's been trained on.
The most recent AI image generators use diffusion models. They start from random noise and progressively refine it toward the final output as they recognise aspects of the image. In some generators you can choose how many steps you want the model to take, which will influence how long it takes to generate an image.
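The step-by-step refinement idea can be sketched numerically. The toy Python function below is not a real diffusion model – real models use a neural network to predict and subtract noise at each step – but it shows the same principle: start from random noise and, with each step, move the output closer to the final 'image', so more steps mean a more refined result.

```python
import random

def toy_diffusion(target, steps, seed=0):
    """Toy sketch of iterative denoising.

    Starts from random noise and moves a fixed fraction of the way
    toward 'target' (a list of pixel values) on each step. More
    steps leave less residual noise in the result.
    """
    rng = random.Random(seed)
    image = [rng.uniform(-1, 1) for _ in target]  # pure noise
    for _ in range(steps):
        # blend each 'pixel' halfway toward its target value
        image = [px + 0.5 * (t - px) for px, t in zip(image, target)]
    return image

target = [0.2, 0.8, 0.5]
rough = toy_diffusion(target, steps=2)   # still noticeably noisy
fine = toy_diffusion(target, steps=20)   # very close to the target
```

In a real generator, trading step count against time works the same way: fewer steps finish faster but leave more noise and artefacts in the image.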
How do I get the best results from an AI image generator?
Even the best AI image generators can produce truly terrible results. By nature there's an element of haphazardness to it, and in most generators, even reusing a prompt that produced a great image once won't give you the same image again.
Generally, the more information in the prompt the better. A lack of detail tends to produce unimpressive results, while mentioning things like the style of photography and even a brand and model of camera and the focal length of a lens can lead to better results if you're aiming for photorealism. Some people have reported getting great results from DALL-E 2 by using 'Graflex' in prompts.
Finally, even the best AI image generators have many quirks and produce images with strange artefacts you'll want to fix in traditional image editing software. Human figures are particularly prone to contortions and can end up with the wrong number of fingers or with eyes looking in different directions. Problems with faces can often be corrected in Photoshop using Adobe's Neural Filters.
Why are the best AI image generators controversial?
There are several reasons that the best AI image generators are causing controversy. One of the main issues is the fear of misuse to create violent, abusive or pornographic content, and also the fear that people may try to pass off images generated by AI as real, spreading fake news or defaming people.
There are also big questions about copyright, both whether someone can own the copyright to an image they created using AI and whether it was legal to train AI models on images trawled from the web without the consent of their original creators. Finally, some people have concerns about what they might mean for the future of jobs in some creative sectors.