When Robots Make Art

Is the human creator's time finished?

The rise of AI and algorithmic art has prompted mass speculation that the human creator’s time is finished and the age of machines is here. Although it would be easy to wave the white flag and let the robots handle art creation from now on, I don’t think we need to worry.

Let’s think about what happened with the rise of a different technology—the camera. After millennia of visual artists attempting to create images of the world using everything from minerals smeared on cave walls to paint brushed on canvas, the 1839 invention of photography in the form of the daguerreotype suddenly made it much easier to capture the world as we see it. Yet it wasn’t as if visual art disappeared. Just a few decades later, in 1874, Claude Monet debuted a work called Impression, Sunrise, depicting a hazy view of the sun rising over a port. It’s hard to make anything out fully, the loose brush strokes fading into one another rather than attempting a sharp recreation of the scene. It was this work that marked the reinvention of visual art after the rise of the camera, as many artists abandoned the idea of representational art and moved towards the abstract.

Why represent the physical world when technology can do it better? Artists moved toward showing what could not be seen—the emotional or the metaphysical. Following Monet’s work, a slew of new artistic movements emerged: first the Impressionists, who sought to capture subjective experience, followed by the Cubists breaking the world into abstractions, the Dadaists challenging logic and reason, and others like the Suprematists showing the “supremacy of pure artistic feeling” through geometric shapes.

So as we move towards a world where art is being redefined by artificial intelligence, maybe we should take a closer look at what we do, and what the technology does—and explore what we can do that machines cannot.

One way to think about algorithms is that they’re sets of steps used to complete a specific task—usually accepting an input and turning it into an output. This mirrors the process of every creator. We all have our own sets of steps that we follow—from coming up with initial ideas to the particular ways that we turn them into reality.
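To make that definition concrete, here’s a minimal sketch (a made-up example, not any real AI system) of an algorithm as a fixed sequence of steps that turns an input into an output:

```python
def titlecase_pipeline(text: str) -> str:
    """A toy 'algorithm': three fixed steps from input to output."""
    # Step 1: normalize the input
    words = text.strip().lower().split()
    # Step 2: transform each piece
    capitalized = [w.capitalize() for w in words]
    # Step 3: assemble the output
    return " ".join(capitalized)

print(titlecase_pipeline("  impression, sunrise  "))  # → "Impression, Sunrise"
```

The steps could be anything; what makes it an algorithm is that the same input, fed through the same steps, always yields the same output.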

Yet the distance between us and our robot counterparts is vast. Most AI tools are trained on massive datasets consisting of thousands, sometimes millions, of images, songs, books, essays, or whatever else might be relevant to them. They take in a volume of inputs that stretches beyond the limits of the human brain and retain it all in detail.

These tools make informed guesses based on everything they’ve consumed, predicting what best completes the sequence at each step of creation. That, however, doesn’t necessarily represent any understanding of what’s being made.
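A heavily simplified sketch of that guessing: real models learn from billions of examples, but the core move, predicting the next piece from what has been seen before, can be shown with a few lines of Python and an invented corpus:

```python
from collections import Counter, defaultdict

# Toy "training data": count which word follows each word,
# then always guess the most frequent successor.
corpus = "the sun rises over the port and the sun sets over the sea".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most common follower of `word` in the training text
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "sun" follows "the" more often than "port" or "sea"
```

The model here "knows" nothing about suns or ports; it only reproduces statistical patterns in what it consumed, which is the point of the paragraph above.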

Science fiction author Ted Chiang described the effect this type of algorithm creates: “The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material.”

But actual understanding of the material grants humans the ability to make connections between different inputs. Since we cannot dedicate individual cores of our brains to separate tasks the way a computer can, each input we take in interacts with all of the others, sometimes leading to entirely new ideas.

In his 1991 work Art & Physics, writer and neurosurgeon Leonard Shlain wrote about how he used this to find new ways of thinking about physics ideas: “Serendipitously, I discovered a way to heighten my creativity. My habit was to read a popular physics book late at night until the snooze gremlin nudged me with the signal that it was time to call it a day. Prior to falling asleep the following night, my mind relatively empty, I leafed through art books. The next morning, I would often connect images I had seen the night before with concepts in physics contained in my previous night’s reading. Something mysterious happens in the creative process during dreamtime, and I am an avid proponent of the school that advocates, ‘sleeping on it.’”

With prompting, AI tools can combine disparate ideas when asked to, for example, “write a Seinfeld episode about Gary Vee scamming folks having garage sales,” but ultimately the output of this prompt maintains the form of something that’s existed previously. In the words of Nick Cave, “It will always be a replication, a kind of burlesque.”

The human mind wouldn’t allow us to produce a pure replication. Even as we try to imitate the work of our favorite artists, we create something a little different. Other influences and inputs shape us—the way our hands move, the way the paint splatters.

I struggle to imagine these tools creating something as fresh as the groovy sound of instrumental funk band Khruangbin, which cites albums from Iran, Thailand, Jamaica, France, and the U.S. as its biggest influences. Some of the most extraordinary works of art come from unexpected influences—one of the great novels of the 20th century, Ralph Ellison’s Invisible Man, pulls together ideas and themes from the French existentialists, stylistic elements from jazz improvisation, and Ellison’s own correspondence with other notable Black writers of his era. Dave Grohl has spoken about how some of his best drumming for Nirvana was lifted from his favorite R&B and disco artists, a refreshing sound in the context of grunge at the time.

In each of these cases, a wide breadth of inputs enabled these artists to create innovative work that pushed at the boundaries of what was familiar. Only reading one type of literature or listening to a singular artist or watching one director or even consuming a single type of art isn’t enough to create something new. Since mimicry has been solved by these AI tools, we should strive to take in all forms of art and experience and chart new territory with what we make.

Feb 16, 2023 · 5 min read
