Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, June 15, 2022

The sound of music

One of the most important things in my life is music, and to me, music is all about evoking emotion.

A beautiful and well-performed song or piece of music connects to me (and, I suspect, to many people) on a completely visceral level.  I have laughed with delight and sobbed helplessly many times over music -- sometimes for reasons I can barely understand with my cognitive mind.

And what is most curious to me is that the same bit of music doesn't necessarily evoke the same emotion in different people.  My wife, another avid music lover, often has a completely neutral reaction to tunes that have me enraptured (and vice versa).  I vividly recall arguing with my mother when I was perhaps fifteen years old, before I recognized what a fruitless endeavor arguing with my mother was, over whether Mason Williams' gorgeous solo guitar piece "Classical Gas" was sad or not.  (My opinion is that it's incredibly wistful and melancholy, despite being lightning-fast and technically difficult.  But listen to the recording, and judge for yourself.)

Which brings us back to yesterday's subject of artificial intelligence, albeit a different facet of it.  Recently there has been a lot of work on software that composes music; composer David Cope has created a program called "Emily Howell" that is capable of producing listenable music in a variety of styles, including those of Bach, Rachmaninoff, Barber, Copland, and Chopin.

[Image: "BWV 773 sheet music 01 cropped," http://www.mutopiaproject.org, licensed under the Creative Commons CC BY-SA 2.5]

"Listenable," of course, isn't the same as "brilliant" or "emotionally evocative."  As Chris Wilson, author of the Slate article I linked, concluded, "I don't expect Emily Howell to ever replace the best human composers...  Yet even at this early moment in AC research, Emily Howell is already a better composer than 99 percent of the population.  Whether she or any other computer can bridge that last 1 percent, making complete works with lasting significance to music, is anyone's guess."

Ryan Stables, a professor of audio engineering and acoustics at Birmingham City University in England, has perhaps crossed another bit of that remaining 1%.  Stables and his team have created music-processing software that is capable of analyzing recordings and tweaking them to alter their emotional content.

"We put [pitch, rhythm, and texture] together into a higher level representation," Stables told a reporter for BBC.  "[Until now] computers represented music only as digital data.  You might use your computer to play the Beach Boys, but a computer can't understand that there's a guitar or drums, it doesn't ever go surfing so it doesn't really know what that means, so it has no idea that it's the Beach Boys -- it's just numbers, ones and zeroes...  We take computers… and we try and give them the capabilities to understand and process music in the way a human being would."

In practice, what this has meant is feeding musical tracks into the program, along with descriptors such as "warm" or "dreamy" or "spiky."  The software then makes guesses from those tags about what features of the music led to those descriptions -- what, for example, all of the tracks labeled "dreamy" have in common.  Just like children learning to train their ears, the program becomes better and better at these guesses as it is given more data.  Then, once trained, the program can add those same effects to digital music recordings in post-production.
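To make the idea concrete, here is a minimal sketch in Python of that kind of learning loop.  Everything in it is invented for illustration -- the feature names, the "dreamy" tag, and the synthetic data are hypothetical stand-ins, not Stables' actual system, which is surely far more sophisticated:

```python
# A toy sketch of semantic audio tagging: learn which features correlate
# with a human-applied tag like "dreamy," then read the learned weights
# to suggest effect settings. All names and data here are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Stand-in training data: each track is reduced to a feature vector
# (brightness, tempo, reverb). A real system would extract these from
# audio; here we synthesize them.
n_tracks = 200
features = rng.normal(size=(n_tracks, 3))  # [brightness, tempo, reverb]

# Pretend "dreamy" tracks tend to be darker, slower, and more reverberant.
labels = ((-0.8 * features[:, 0] - 0.5 * features[:, 1]
           + 1.0 * features[:, 2]
           + rng.normal(scale=0.5, size=n_tracks)) > 0).astype(int)

# Train a model that maps features -> semantic tag.
model = LogisticRegression().fit(features, labels)

# Once trained, the learned weights hint at which production choices push
# a mix toward "dreamy" -- the sort of mapping a plug-in could use to
# adjust effect parameters in post-production.
for name, weight in zip(["brightness", "tempo", "reverb"], model.coef_[0]):
    direction = "raise" if weight > 0 else "lower"
    print(f"To sound more 'dreamy': {direction} {name} ({weight:+.2f})")
```

The point of the sketch is just the shape of the workflow: tagged examples go in, a feature-to-descriptor mapping comes out, and that mapping can then be run in reverse to steer the sound of a new recording.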

Note that, as with Cope's Emily Howell software, Stables is not claiming that his program can supersede music as performed by gifted human musicians.  "These are quite simple effects and would be very intuitive for the amateur musician," Stables said.  "There are similar commercially available technologies but they don't take a semantic input into account as this does."

Film composer Rael Jones, who has used Stables' software, concurs.  "Plug-ins don't create a sound, they modify a sound; it is a small part of the process.  The crucial thing is the sound input -- for example you could never make a glockenspiel sound warm no matter how you processed it, and a very poorly recorded instrument cannot be fixed by using plug-ins post-recording.  But for some amateur musicians this could be an interesting educational tool to use as a starting point for exploring sound."

What I wonder, of course, is how long it will take before Cope, Stables, and others like them begin to combine forces and produce truly creative musical software, capable of composing and performing emotionally charged, technically brilliant music.  And at that point, will we have crossed a line into some fundamentally different realm, where creativity is no longer the sole purview of humanity?  You have to wonder how that will change our perception of art, music, beauty, emotion... and of ourselves.  When you talk to people about artificial intelligence, you often hear them say that of course computers could never be creative, that however good they are at other skills, creativity has an ineffable quality that will never be replicated in a machine.

I wonder if that's true.

I find the possibility tremendously exciting, and a little scary.  As a musician, writer, and amateur potter/sculptor, who values creativity above most other human capacities, it's humbling to think that what I do might be replicable by something made out of circuits and relays.  But how astonishing it is to live in a time when we are getting the first glimpses of what is possible -- both for ourselves and for our creations.

**************************************
