When artificial intelligence first arrived in the art world in 2017, it was received with open arms. A.I.-generated works by the collective Obvious and the artist Mario Klingemann were fetching hundreds of thousands at auction; art fairs like Scope in Miami welcomed A.I. art, as did institutions, which eagerly staged exhibitions touting the new technology. Artists working with A.I. were embraced, and there was no media outrage or backlash. A.I. art, by and large, was seen as a good thing.
Not so much today. The latest chapter in A.I. art's story has seen artists decrying the technology's widespread use and its violation of creators' rights, with generated artworks sparking outrage online and off. More recently, a handful of industry leaders have even warned that A.I. poses an extinction-level threat.
So what changed? Why has A.I. art fallen so out of favor? That's exactly what pioneering A.I. artist and innovator Ahmed Elgammal discusses in a new op-ed for Artnet News.
As an A.I. researcher, a professor in the department of computer science at Rutgers University, founder of the A.I. art platform Playform, and the developer of AICAN, one of the earliest art generators, Elgammal is well placed to observe the trajectory of A.I. art. In his view, the gulf between yesterday's and today's eras of A.I. art comes down to one thing: the emergence of text-to-image generators. He argues that while these new generators have made it a cinch to generate art, they have flouted ethical considerations and effectively killed creativity.
To dig into his argument, Artnet News's art & pop culture editor Min Chen spoke to Elgammal to learn more about the state of play for A.I. art—why early A.I. art took off, how text-prompting has diminished the creative process, and where artists eager to work with A.I. should go from here.