Judges at the recent Colorado State Fair were proud to award the blue ribbon to a moody, classically inspired still life entered by new artist Jason Allen – but their decision stoked global controversy after revelations that the image had been created entirely using an AI algorithm.
Allen’s winning image – called Theatre d’Opera Spatial – was not drawn on his computer or edited using Photoshop-like tools; it was created with a text-to-image tool called Midjourney, which uses advanced AI algorithms to generate unique artworks based on a sentence typed in by the user.
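The workflow these tools share is simple at heart: the user supplies a short text prompt and receives a generated image in return. Midjourney itself is driven through Discord and offers no public Python interface, so the sketch below uses a purely hypothetical TextToImageClient, with a made-up model name and prompt, to illustrate that prompt-in, image-out loop; it is not Allen’s process or Midjourney’s actual API.

```python
# Illustrative sketch only: a hypothetical text-to-image client showing the general
# prompt-in, image-out workflow. The model name and prompt are invented for the example.
from dataclasses import dataclass

@dataclass
class TextToImageClient:
    model: str = "hypothetical-diffusion-model"

    def generate(self, prompt: str, width: int = 1024, height: int = 768) -> bytes:
        # A real client would send the prompt to a hosted model and return image bytes;
        # this placeholder simply echoes the request so the sketch runs end to end.
        return f"[{self.model}: {width}x{height} image for '{prompt}']".encode()

client = TextToImageClient()
image = client.generate("a moody, classically inspired still life of an opera stage")
print(image.decode())
```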
Allen experimented extensively with Midjourney, he explained in a post after his win became public, creating and fine-tuning hundreds of images over many weeks before printing his three best on canvas and entering them into the competition.
“I’ve set out to make a statement using Midjourney in a competitive manner,” he wrote, “and wow! I could not be more excited about having won with my favourite piece.”
Allen’s use of digital tools was within the criteria of the Digital Arts/Digitally-Manipulated Photography category into which he entered the image, but that didn’t stop the win from triggering widespread backlash and debate among the art community, which has long held that even digitally manipulated images must be the product of human creativity.
Yet a new generation of AI-powered text-to-image generators has recently emerged to challenge all that, tapping online images and machine vision tools to weave together complex and visually convincing graphics that portray almost anything.
From Hotpot and DeepAI to NightCafé, Deep Dream Generator, DALL-E and Wombo Dream, AI-driven art generators are taking creativity to new levels – and, in the time-honoured tradition of conventional artists, each one is developing its own style over time, as one researcher has documented.
Have we taught AI to think?
Whether generating photorealistic images of cats with pink stripes for fun, or creating imaginative new characters for use in presentations, games, or movies, the new breed of text-to-image generators has become the standard bearer for a quiet revolution in which AI models are proving eerily capable of emulating human thought.
Researchers at Google-owned AI firm DeepMind, for example, recently published research in which an AI model called Physics Learning through Auto-encoding and Tracking Objects (PLATO) was able to learn basic physical concepts and apply them in much the same way that a human infant does.
By watching videos of balls rolling down a slope or bouncing off each other, PLATO learned basic rules about how objects behave – and was able to apply those rules to novel situations, predicting the movement of other types of objects.
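DeepMind has not released PLATO as a consumer tool, but the underlying evaluation idea, often called violation of expectation, can be sketched in a few lines: a model that has internalised how objects move should show a larger prediction error, or "surprise", when it sees motion that breaks those rules. The toy example below substitutes a naive constant-velocity forecast for a learned model; everything in it is illustrative rather than drawn from DeepMind’s code.

```python
# Toy illustration of the violation-of-expectation idea (not DeepMind's PLATO code):
# a model that has learned how objects behave should be more "surprised" by
# physically implausible motion. Here, "surprise" is simply the error between a
# naive constant-velocity prediction and the observed next position.
import numpy as np

def predict_next(positions: np.ndarray) -> np.ndarray:
    """Extrapolate the last observed velocity to forecast the next position."""
    velocity = positions[-1] - positions[-2]
    return positions[-1] + velocity

def surprise(observed: np.ndarray, predicted: np.ndarray) -> float:
    """Prediction error used as a simple surprise signal."""
    return float(np.linalg.norm(observed - predicted))

trajectory = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # a ball rolling steadily
prediction = predict_next(trajectory)

print(surprise(np.array([3.0, 0.0]), prediction))  # small: motion continues as expected
print(surprise(np.array([9.0, 5.0]), prediction))  # large: the ball "teleports"
```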
Is this training or human-like learning? Researchers are still debating just how much today’s AIs have actually learned – but as the algorithms become increasingly sophisticated, they are coming to display what, at least superficially, look like human-like thought processes.
Earlier this year, a Google engineer was censured after publicly claiming the company’s LaMDA chatbot had evolved to the point of sentience, while others bemusedly noted the outputs of Meta’s new BlenderBot chatbot – which called the company’s own CEO, Mark Zuckerberg, “creepy and manipulative”.
Or is it just learning to fool us?
Yet AI engines are not only generating pictures, building video deepfakes and engaging in surreal chats with users; in some experiments, AI-based algorithms are proving so adept at emulating specific humans’ writing styles that researchers are suggesting that AI technology is rapidly climbing its way out of the uncanny valley.
To test this theory, a University of California, Riverside (UCR) research team used OpenAI’s GPT-3 text-generation engine, fine-tuned on the writings of philosopher Daniel Dennett, to answer a series of questions about his work.
Dennett was engaged to write paragraph-long answers to each question, as was GPT-3 – with the top four AI-generated answers, along with Dennett’s real answer, put to 425 university-educated philosophy students, online observers, and subject-matter experts on Dennett’s work.
The results surprised the researchers, with even the Dennett experts only able to distinguish his answer from the AI’s answers in around half of cases.
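To put that figure in context, each question offered five candidate answers – four from GPT-3 and one from Dennett – so a reader guessing at random would pick the real one only about 20 per cent of the time. The short sketch below simply compares the roughly 50 per cent expert figure reported above against that chance baseline; the exact numbers are illustrative.

```python
# Illustrative arithmetic only: with one genuine Dennett answer among five options,
# random guessing succeeds about 20% of the time, so the experts' roughly 50% hit
# rate is well above chance yet far from reliable identification.
options_per_question = 5          # four GPT-3 answers plus Dennett's real answer
chance_accuracy = 1 / options_per_question
expert_accuracy = 0.5             # "around half of cases", as reported above

print(f"Chance baseline: {chance_accuracy:.0%}")
print(f"Reported expert accuracy: {expert_accuracy:.0%}")
print(f"Experts beat chance by {expert_accuracy - chance_accuracy:.0%} points")
```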
If AI algorithms are not only processing words but creating imitations of a specific person’s writing so semantically convincing that they can trick human experts, have long-held warnings about AI sentience finally been borne out? And does this suggest the need for a pause to consider the implications?
Researchers are still considering those implications – and while Australia’s Federal Court this year concluded that an AI cannot be an inventor, UCR professor Eric Schwitzgebel believes AI’s increasing humanness will pose serious moral questions.
“Although the machines of 2022 probably don’t deserve much more moral consideration than do other human artifacts,” he notes, “it’s likely that someday the question of machine rights and machine consciousness will come vividly before us, with reasonable opinion diverging.”
“In the not-too-distant future, we might well face creations of ours so humanlike in their capacities that we genuinely won't know whether they are non-sentient tools to be used and disposed of as we wish or instead entities with real consciousness, real feelings, and real moral status, who deserve our care and protection.”