Look at these four images of some well-known people who have died in the past 30 years. They probably look familiar, and I’d guess you’ll easily identify all of them.

Left to right from top: Princess Diana (died 1997), Freddie Mercury (1991), Michael Jackson (2009), and Heath Ledger (2008).

Each image imagines what its subject might look like today, years after their death. And that imagining was done by an AI algorithm, not a human being.

It’s the work of Alper Yesiltas, a lawyer and photographer living in Istanbul, Turkey. And right there is the dilemma with something like this: describing accurately what it is.

It’s not his work, exactly. He had the idea, the imagination, of wondering “how would people look photo-realistically if some great events had not happened to them”. He put that question to AI software, which took his prompting and created the images you see here. This is just a sampling from his collection “As If Nothing Happened.” You can see the full experiment, which includes six more celebrities, on Instagram.

The results are truly extraordinary: so lifelike that you would probably assume, without thinking about it, that these are actual photographs, the output of a camera capturing a real subject. Yet you would stop and say “wait, hang on a minute…” once you realised who the subjects are and that they died years ago.

Developments in technology mean that you can now create photo-realistic images like this on a personal computer or a smartphone, in ways far more advanced, powerful and simple to use than anything previously possible.

The hardest part of the creative process for me is making the image feel “real” to me. The moment I like the most is when I think the image in front of me looks very realistic as if it was taken by a photographer.

Alper Yesiltas

The idea that artificial intelligence can be an invaluable tool for artists and creators to imagine and produce compelling digital artworks is at the heart of the debate about the threats these tools present. There are opportunities, too, but the focus tends to be on the threats.

While most of the attention on this big topic has been on text-prompting a computer program to create images in response – the basic principle behind the most popular and well-known AIs such as DALL-E 2, Midjourney and Stable Diffusion – this rapidly-evolving space is already taking big strides beyond simple text-to-image generation, as Alper Yesiltas’ experiment illustrates.
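To make that basic principle concrete, here’s a minimal sketch of what text-prompting looks like in code, using the open-source Stable Diffusion model through Hugging Face’s diffusers library. The model name, the prompt and the assumption of a CUDA-capable GPU are illustrative details, not anyone’s actual workflow:

```python
# A minimal, illustrative text-to-image sketch with Stable Diffusion
# via the open-source diffusers library. Assumes a CUDA-capable GPU
# and access to the model weights on Hugging Face.
import torch
from diffusers import StableDiffusionPipeline

# Load the pretrained text-to-image pipeline (model ID is illustrative).
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The entire creative interface is a sentence of text.
prompt = "a photo-realistic portrait of a musician, studio lighting, 85mm lens"
image = pipe(prompt).images[0]
image.save("portrait.png")
```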

People are signing up to these services in droves. Now that DALL-E 2 is openly available, with no more wait list, expect even more people curious about AI-generated images to join in.

And while the focus continues to be on text prompts, the AI’s ability to generate richer and more detailed outputs from those prompts – which are themselves evolving as people learn how to give the AI ever-more-detailed and fine-tuned instructions – is producing amazingly lifelike images.
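As a rough illustration of that prompt evolution, compare a bare prompt with a fine-tuned one. The prompts below are invented for the example, reusing the pipe object from the sketch above:

```python
# Illustrative only: the same subject prompted twice. Adding detail
# about lighting, lens and style steers the model toward richer,
# more lifelike output (reuses `pipe` from the sketch above).
basic_prompt = "portrait of an old fisherman"
detailed_prompt = (
    "portrait of an old fisherman, weathered skin, grey beard, "
    "golden-hour light, shallow depth of field, 85mm lens, "
    "photo-realistic, high detail"
)
image = pipe(detailed_prompt).images[0]
image.save("fisherman.png")
```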

But as the saying goes, you ain’t seen nothing yet.

AI unlocking creativity and utility

A few weeks ago I wrote about the human imagination in AI-generated art and pondered where the line falls between the human and the AI in the context of creativity. That question remains unanswered and is central to the continuing debate.

Or take a sideways look in an unexpected direction: my friend Steve Coulson, for example, has created two comic books using Midjourney to generate the images.

Steve Coulson’s comic books with AI-generated images

Expect more of the unexpected!

A question I’ve been thinking about for some time concerns the utility of AI-generated imaging: how and where does it add significant value to the creative process, beyond the relatively simple act of responding to written prompts with digital images?

Just a few days ago, Ad Age magazine published an in-depth assessment of how advertising agencies are using AI image generators, including interviews with leaders and creatives at some of the world’s major agencies. That article is behind a paywall, so I will reference a few key aspects here.

AI art composite from Ogilvy Paris, Dentsu Creative Portugal, Wunderman Thompson, Omneky, Rethink, and TBWA/Melbourne / Ad Age

A major introductory point Ad Age makes is that ad agencies are already using these AI tools to save time and money, as well as to brainstorm ideas. This potentially opens up new horizons for agencies in the future, says Ad Age, such as custom ads created for individuals, new ways to create special effects, or even more efficient e-commerce advertising.

“We are at the very beginning of a revolution for our creative industry,” David Raichman, executive creative director, social and digital at Ogilvy Paris, told Ad Age. “AI represents an incredible potential that impacts the way we conceive, design, produce and do justice to realize the idea’s fullest potential.”

To get an idea of exactly how agencies are making use of AI tools like those mentioned earlier, Ad Age highlights Dentsu Creative Portugal’s recent campaign using Midjourney to create abstract images promoting the European electronic music festival Jardim Sonoro.

Ad Age also points to Ogilvy Paris, which launched an ad for the Nestlé brand La Laitière that altered its logo, based on “The Milkmaid” painting by Dutch artist Johannes Vermeer. The final result was an ad that expands the painting, generating a new image that reveals the rest of the room.

While there are multiple examples of work that used the AI image tools, almost every agency executive that Ad Age spoke with agreed that these image generators can also be used to supplement the creative process.

After a briefing, some of our creatives are spending an hour or two throwing concepts at DALL-E—be it an early line of thought, an oddball question or an impractical art direction notion we could never afford—just to see how it responds.

John Doyle, executive VP, brand experience strategy, Colle McVoy

And this really gets to the point of utility I mentioned earlier. Ad Age says agency executives see AI tools not only as a time-saver but also as cost-effective, because these tools can generate multiple variations of one prompt within minutes.
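As a rough sketch of why that’s cheap and fast, the same kind of pipeline can be asked for several variations of one prompt in a single call. This is a generic example, not any agency’s actual toolchain, and it again reuses the pipe object from the first sketch:

```python
# Generate several variations of one prompt in a single batched call.
# Each image starts from different random noise, so each candidate differs.
prompt = "abstract poster for an electronic music festival, vivid colours"
images = pipe([prompt] * 4).images

# Save the candidates for a creative team to review.
for i, image in enumerate(images):
    image.save(f"concept_{i}.png")
```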

While many agencies are still in the test-and-learn phase with these tools, that hasn’t stopped creative directors from imagining the possibilities they bring.

For instance, Addition founder and CEO Paul Aaron said what excites him about the future use of the technology is creating custom “generative” advertising that is different for each person – a concept most of the people Ad Age spoke to for the article viewed as a future possibility.

There’s a lot more to unpack in Ad Age’s landscape assessment, which wraps up with some good insights into how AI tools should be used to assist creatives, not replace them.

That, of course, is at the heart of the current debate about the threats posed by artificial intelligence to artists and creators.

It’s not AI that will take your job, it’s the other creative who knows how to use AI that will take your job.

Stephan Pretorius, global chief technology officer, WPP

I think that’s a realistic view of why you need to know how to use these new additive tools: they are fast becoming part of the essentials of creativity, just as Photoshop did after it launched in 1990 – seen as a threat by some at the time. Ditto when blogs for business started gaining traction in the early 2000s.

This is true, in my view, whether you work in advertising, PR or any communication-related discipline, or if you’re an artist or creator.

Stephan Pretorius is right. He adds, “Today it’s largely still a novelty; by 2025 most creative teams will use them as standard practice.” And he warns, “If you haven’t developed a fluency with these tools by 2030 you will probably be at a significant disadvantage.”

I think that risk of disadvantage is far more imminent than seven or eight years away. Think 2025 at the latest. That’s less than three years away, max.

If you subscribe to Ad Age, you can read their complete report.

Ad Age’s report was the discussion topic in episode 284 of the For Immediate Release podcast that Shel Holtz and I present, published on September 28. You can listen to it here.

(If you don’t see the embedded player above, listen on the FIR website.)

[Updated Sept 30:] This rapid development in AI-based tools stepped up a large notch on September 29 when Meta announced Make-A-Video, a new AI system that generates videos from text prompts, just like the three AI image generators I mentioned earlier. An early review by The Verge dubs it “DALL-E for video”: “Just type a description and the AI generates matching footage.”

AI text-to-image generators have been making headlines in recent months, but researchers are already moving on to the next frontier: AI text-to-video generators.

The Verge, September 29, 2022

This new tool isn’t available for anyone to use yet, and there’s no word from Meta on when it will be. But I think we should expect it soon, even in a work-in-progress form. Hold on to your hats!