The irony in an AI musician singing ‘How Was I Supposed to Know?’

Billboard's announcement of an AI artist on one of its charts is even more unsettling than it first seems.
Xania Monet (AI image); Telisha Nikki Jones. Via YouTube; CBS

Months ago, a musician named Xania Monet went viral on TikTok with a song called “How Was I Supposed to Know?” In short videos, women in plaid shirts mouthed along to its mournful chorus under captions that read: “Pov: you found that one song that speaks for your soul.” Three-year-olds in the back seat of cars fumbled their way through its lyrics about growing up without a dad and falling in love with the wrong men. Listeners wept, alone in bathrooms.

There is no Xania Monet. She is a digital avatar created by a 31-year-old woman named Telisha Nikki Jones.

“How Was I Supposed to Know?” made its way to radio, rose up the charts and just landed at No. 30 on the Billboard Adult R&B Airplay chart. Its success prompted Billboard to run an article marking the historic moment:

“The first known instance of an AI-based act to earn a spot on a Billboard radio chart.”

Jones, the woman behind Monet, hails from small-town Mississippi. An affable entrepreneur and a self-described creator, she has been writing poems since she was 24, and about four months ago she began teaching herself how to use artificial intelligence tools such as CapCut and fal.ai to create a digital persona. She uploaded her poems to an app called Suno, which set them to music. Other than Jones’ words, everything — from Monet’s voice to the melody it sang to the piano chords accompanying it — was computer-generated. Jones began to share the songs.

In short order, she was famous. Or Xania Monet was. Soon, Jones had a $3 million record deal, prompting some human musicians — including R&B stalwarts Kehlani and SZA — to cry foul. But Jones sees the whole situation in historical terms.

“Anytime something new comes about and it challenges the norm and challenges what we’re used to, you’re going to get strong reactions behind it,” Jones told Gayle King this week on “CBS Mornings.” “And I just feel like AI — it’s the new era that we’re in. And I look at it as a tool, as an instrument. Utilize it.”

We tend to associate fears of replacement-by-machine with industrial laborers. But music has been central to the story of automation for a long time.

In the late 19th century, the invention of the player piano — a self-playing instrument, programmed via something that resembled an early computer punch card — changed the industry. Player pianos were such a symbol of the threat of automation that in 1952, when Kurt Vonnegut wrote his famous novel about an automated society, he called it “Player Piano.”

Musicians feared their livelihood would be threatened by recorded music. John Philip Sousa — he of the famous Sousa marches — was so alarmed he published a polemic in Appleton’s Magazine called “The Menace of Mechanical Music.”

“Sweeping across the country with the speed of a transient fashion in slang or Panama hats,” Sousa wrote, hopelessly dating himself, “comes now the mechanical device to sing for us a song or play for us a piano, in substitute for human skill, intelligence, and soul.”

Sousa was, in large part, writing to advocate that royalties from recordings — “mechanical rights” — should be paid to musicians. But he also worried that the phonograph, among other machines, would lead to “a marked deterioration in American music and musical taste, an interruption in the musical development of the country, and a host of other injuries to music in its artistic manifestations.” To drive the point home, the article was accompanied by cartoons, one of which depicted a baby crying as a phonograph blared in its ear. Nobody tell him about Spotify’s “White Noise Baby Sleep” playlist.

Sousa’s screed was an early entry in what became an increasingly common modern genre. Every time a new music machine was invented — like, say, a synthesizer — the musicians union would oppose it. People like me would write articles like this one decrying it, and users like Telisha Nikki Jones would explain that it’s a tool, an instrument, like any other. Then, 10 years later, the change would be so widespread that nobody could really remember what the fuss was all about.

If you, like me, are happy that we live in a world of recorded sound, then you may be tempted to think the mainstreaming of AI music is — much like the phonograph — no big deal. Or a big deal that we can and should metabolize.

I would like, though, to do my best Sousa impression and argue the opposite.

In her TV appearance, Jones shows King how she makes a song — pasting in the words of one of her poems and typing in a series of prompts: “Slow-tempo, rnb, deep female soulful vocals, light guitar, heavy drums.” And then she hits create and is presented with two finished recordings. It is perhaps for this reason that Monet’s first album, made in just a few months, has 24 songs on it.

It’s worth keeping in mind that all of this is only possible because, as per Suno’s court filings in response to a lawsuit filed by Universal Music Group, Capitol Records, Sony Music, Atlantic Records, Warner Music and the Recording Industry Association of America, Suno was trained on “essentially all music files of reasonable quality that are accessible on the open internet.” Tens of millions of recordings, many lifetimes’ worth of creative labor, hidden beneath the create button.

Up until now, the dominant music technologies have been about music capture and reproduction. They do make it easier to produce music, but someone has always needed to compose that music, even if it takes fewer and fewer people to record a song. AI music obviates that need, and it accomplishes this because tools like Suno have effectively recycled the past work of musicians without their permission, let alone their participation. And, perversely, it could lead to such a glut of music that the economies of listening would be increasingly unfriendly to human-made songs.

This is what concerns me most. The vast majority of AI-generated music will be bland and enervating. But in our overwhelming media ecosystem, some people may just stop caring. Spotify has already been caught replacing real bands with fake ones to avoid paying royalties on streamed music via its Perfect Fit Content program. Many of us let algorithms dictate what we hear next, as they guide us through a never-ending stream of background music we have never heard before, will never hear again and are slowly becoming inured to entirely. Under the circumstances, who’s going to notice if that music is being generated, rather than curated, by an AI?

There is something unsettling about the way Billboard hedges its announcement of Monet’s historic achievement. The publication calls “How Was I Supposed to Know?” the “first known instance of an AI-based act to earn a spot on a Billboard radio chart.” It’s a “potentially historic development.”

It becomes clear that the famed charts have no confidence that they could tell whether a song made by AI has come their way before. The difference here is that Jones has disclosed it.

“I literally was crying in my bathroom to your music,” one commenter wrote on Jones’ Facebook page. “And then I found it was AI.”

How were they supposed to know?
