According to the Financial Times, Pablo Picasso once said, “Computers are useless. They can only give you answers.” Unfortunately for us, computers may now be asking more questions than they answer. As a result, the possibilities are rather overwhelming, with answers more ambiguous and uncertain than straightforward. We might ask ourselves, then, where we draw the line in what we find ethically acceptable about artificial intelligence (AI) as it relates to composition and creation in art, writing, the performing arts, and music, as well as in liberal arts education.
Let’s begin with composition. Most of us are familiar with music streaming services that select songs for us based on data about our listening preferences. Spotify, for example, uses what’s called collaborative filtering: it compares your listening history with those of other users, finds listeners whose tastes most resemble yours, and recommends music those listeners enjoyed, on the assumption that overlapping tastes will keep overlapping. Other services, such as Tidal, compete instead by offering “lossless” streaming.
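The core of collaborative filtering can be sketched in a few lines. Here’s a toy version, assuming invented users, artists, and listening hours (none of this reflects Spotify’s actual system): compare each pair of users with cosine similarity, then recommend what your nearest neighbor listens to that you don’t.

```python
import math

# Illustrative listening data: hours spent per artist.
# All names and numbers are invented for this example.
listens = {
    "ana":  {"Satie": 9, "Aphex Twin": 4, "Colleen": 7},
    "ben":  {"Satie": 8, "Aphex Twin": 5, "This Will Destroy You": 6},
    "cara": {"Drake": 10, "Beyonce": 8},
}

def cosine_similarity(a, b):
    """Compare two users' preference vectors; closer to 1.0 means closer taste."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(user, data):
    """Suggest artists the most similar other user listens to."""
    others = [(cosine_similarity(data[user], data[o]), o)
              for o in data if o != user]
    _, neighbor = max(others)
    return [artist for artist in data[neighbor] if artist not in data[user]]

print(recommend("ana", listens))  # → ['This Will Destroy You']
```

Real systems work with millions of sparse vectors and matrix-factorization tricks, but the intuition is exactly this: people who agreed in the past are likely to agree in the future.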
“What idea of the human do we want to have reflected back to us?”
Going beyond automated curation and deep machine learning, AI is now making it possible for computers to write melodies themselves. For example, Sony CSL Research Laboratory employs an AI system called Flow Machines, which mines a database of songs and combines elements from various tracks to produce a new one. The difficulty is that while music on demand sounds all well and good, especially for small businesses and retail brands looking for inexpensive marketing music, it’s potentially troubling news for musicians and recording artists who depend on markets in which they are fairly compensated for their work.
There’s also a new streaming service called Brain.fm, which makes ambient music that taps into auditory rhythms and “influences cognitive states by entraining neuronal oscillations.” In other words, the beats and rhythms in the music mimic brain activity during different cognitive states such as focus, relaxation, and sleep. It’s hard to believe, but from what I’ve tried, it seems to work; I’m actually listening to the “Focus” channel at the moment, out of sheer curiosity. I notice, though, that the music isn’t there for my enjoyment per se; it’s almost as if it isn’t there at all, simply helping me concentrate on the task at hand.
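The general idea behind entrainment audio can be illustrated with a toy generator: pulse an audible tone at a rate matching a target brain rhythm (roughly 10 Hz, often associated with relaxed “alpha” states). To be clear, this is only a sketch of the concept, not Brain.fm’s actual method, and the parameter choices here are illustrative.

```python
import math

SAMPLE_RATE = 44100  # audio samples per second

def entrainment_tone(carrier_hz=220.0, beat_hz=10.0, seconds=1.0):
    """Generate an amplitude-modulated sine tone: an audible carrier
    pitch whose volume pulses at beat_hz, a rate chosen to match a
    target brain rhythm. Returns raw samples in the range [-1, 1]."""
    n = int(SAMPLE_RATE * seconds)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        carrier = math.sin(2 * math.pi * carrier_hz * t)
        # Envelope oscillates between 0 and 1 at the beat frequency.
        envelope = 0.5 * (1 + math.sin(2 * math.pi * beat_hz * t))
        samples.append(carrier * envelope)
    return samples

tone = entrainment_tone()
```

Writing the samples to a WAV file and looping it would give you a crude cousin of what these services sell; whether the neurological claims hold up is, of course, a separate question.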
I certainly wouldn’t feel moved to put it on the stereo while hard at work getting poetry written, say, or painting the walls of my apartment. For that, I’d prefer either silence or the singular compositional lens of someone like Erik Satie, Colleen, Aphex Twin, or This Will Destroy You. Why? They’re more inspiring because I know I’m listening to an artist or musical group’s unique vision. Also, they’re simply better, from an artistic/musical standpoint, than the oddly disembodied ambience of Brain.fm.
And then there’s Jukedeck, a site that creates songs for visitors using algorithms. Perhaps anticipating the creative class’s fears, Patrick Stobbs, the company’s co-founder and COO, said, “It isn’t meant to replace human musicians—if anything, it can make their lives easier by doing some of the composition work for them.” Although the need for groundbreaking, unique musicians isn’t likely to go away anytime soon, it would behoove us to learn a bit about AI and machine learning, if only to be able to engage with it and use it to our advantage, rather than finding ourselves blindsided by it.
In the world of arts education, our human knowledge and skills remain a clear advantage, since machines are unlikely to be able to, say, read and analyze a student’s poem, or work with a piano student specializing in composition and offer personalized feedback and an opinion on the integrity or mood of a bridge. Recognizing that AI cannot adequately replace human teachers and tutors, however, shouldn’t mean rejecting adaptive technology altogether, whether adaptive learning software or learning management systems. Those of us in education who are interested in graduate work in educational administration and leadership may want to investigate such technology’s humanitarian and leveling potential for students of lesser socio-economic means.
Consider, for example, the potential of adaptive learning software as an affordable alternative to textbooks. Because the software adapts to each individual student’s pace, it differs from traditional textbook learning, which proceeds in a predictably linear fashion. And since software makes up the bulk of the curriculum, fewer physical books are involved, so ideally students will spend less money on course materials. Take ALEKS, which stands for Assessment and Learning in Knowledge Spaces. Developed by a team of mathematicians, scientists, and engineers at UC Irvine, ALEKS is grounded in a mathematical theory of knowledge spaces, assessing exactly which topics a student has mastered and which they’re ready to learn next. Moreover, the cost comes to approximately $25 per student, as opposed to $100 per student for a traditional textbook.
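The “adapts to each student’s pace” idea can be made concrete with a toy prerequisite graph: the software only offers a topic once everything it depends on has been mastered. The topics and dependencies below are invented for illustration; ALEKS’s real knowledge-space model is far richer than this sketch.

```python
# Hypothetical topic graph: each topic maps to its prerequisites.
PREREQS = {
    "counting": [],
    "addition": ["counting"],
    "subtraction": ["counting"],
    "multiplication": ["addition"],
    "division": ["multiplication", "subtraction"],
}

def ready_topics(mastered):
    """Topics the student hasn't mastered yet but is ready to learn,
    i.e. every prerequisite is already mastered."""
    mastered = set(mastered)
    return sorted(
        topic for topic, reqs in PREREQS.items()
        if topic not in mastered and all(r in mastered for r in reqs)
    )

print(ready_topics({"counting", "addition"}))
# → ['multiplication', 'subtraction']
```

Two students with the same class but different mastery sets get different next steps, which is precisely what a linear textbook cannot do.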
There are also MOOCs (massive open online courses): free or low-cost online college courses accessible to anyone with a reliable Internet connection and a willingness to learn. That last point matters, since studies have shown dropout rates in online courses running 10 to 20 percent higher than in traditional classroom environments. The case for MOOCs thus seems inconclusive so far. In any event, arts education seems a relatively unlikely candidate for the format, given that most fine and performing arts skills require live, experiential practice and interaction with a professor, as opposed to merely having a computer evaluate the content of students’ answers online.
The closest an AI-related technology has come to making art well known to a sizeable audience is Prisma, which turns photos into images resembling digital paintings. Although some of Prisma’s renditions are appealing, I wouldn’t call them art. The same and more can be said of AI-penned poetry, which, when attempted by a Google bot, is questionable at best. That particular bot’s poetry may be an especially egregious example, though; Oscar Schwartz seems to have found a few AI-written poems that weren’t terrible, even if they were far from ideal. And since bots inevitably reflect ourselves back to us, Schwartz poses this question: “What idea of the human do we want to have reflected back to us?”
Considering that we haven’t figured out this whole existence thing for ourselves yet, is it a fool’s errand to hope that machines could figure it out for us? I’m going to say yes, and I’ll also argue that AI is no substitute for singular consciousness and for the individual, unique life experiences that can’t be replicated or mimicked by robots. For data-centric tasks, AI-infused supercomputers like Watson will do the trick, but not for sculpting an innovative statue that isn’t an obvious imitation of someone’s existing work. This is especially true of the humanities and the performing arts, which is perhaps why they’re the least compatible with machines. Let’s hope it stays that way.