
G/O Media’s disastrous use of AI is an affront to journalism — and society

The dynamics of business under capitalism will push companies to misuse AI and hurt society in the process.
Photo illustration: a generic newspaper front page rendered in an 8-bit graphic style. Justine Goode; NBC News / Getty Images

In the first week of July, G/O Media, the owner of media websites including Gizmodo, The Onion and Deadspin, published four artificial intelligence-generated articles without input from its editors and writers. The articles were terrible: They were filled with inaccuracies, they used awkward phrasing and they didn’t fulfill the promise of their headlines. Gizmodo’s union immediately objected to their publication as “unethical,” noting, among other things, that Gizmodo’s AI-generated content produces false information, plagiarizes work from writers and jeopardizes the brand’s reputation.

G/O Media stood its ground. In a statement to The Washington Post, spokesman Mark Neschis said, “We think the AI trial has been successful.” Merrill Brown, G/O’s editorial director, sounded downright optimistic. “It is absolutely a thing we want to do more of,” he told Vox.

Owners and management view AI as a way to drastically reduce labor costs and maximize profit.

How could G/O Media consider these shoddy simulacra of articles a success? The explanation lies not in its views on journalism, but in the economics of artificial intelligence. Owners and management view AI as a way to drastically reduce labor costs and maximize profit. This incentive structure is so powerful that executives can be tempted to embrace it at the expense of fulfilling a company’s value proposition, which in this case means websites that sell information that is truthful and insightful. The result is an affront to journalism and an assault on public knowledge.

One of the AI-generated articles is Gizmodo’s “A Chronological List of Star Wars Movies & TV Shows,” which carries the subheadline “From the prequels to the sequels, here’s the order to watch the Star Wars saga.” Two problems are immediately apparent: the awkward phrasing (“here’s the order to watch”) and the contradiction between the two headlines. The main headline promises a chronology of the Star Wars universe, while the subheadline promises a guide to the order in which to watch the movies.

The first conceit of the article, the pure in-universe chronology, is the simpler one, and on that front the article failed. The chronology contained an error (which has since been corrected by a human), and some productions were missing. At the level of quality, there is a typo, the synopses are simplistic, and at least one of them (“Attack of the Clones follows Anakin as he struggles with his emotions and the dark side of the Force”) could be used to describe many Star Wars movies. None of the summaries offer any personal touches or refer to favorite scenes, because they were generated by a machine that did not and could not watch the movies and is instead using an advanced form of autocomplete.

The second conceit, the article as a guide to the order in which to watch the movies, never manifests in a meaningful way. For example, any human-created version of the article would likely at least allude to the fact that many Star Wars fans recommend watching the movies in release order. Viewing the original trilogy (Episodes 4-6) first allows a newcomer to understand the history and the essence of the series, as well as the stakes of a plot twist that gets foreshadowed in the prequels. One of my Star Wars connoisseur friends insists that watching them all in in-universe “chronological” order is downright “WEIRD.” People who grew up with the Star Wars movies would know this, but many who didn’t grow up with the franchise wouldn’t, and they’re exactly the kind of people who would Google an article like this.

The average reader looking at the article is unlikely to realize that it was generated by AI; the only clue is a byline that says “Gizmodo Bot,” something that’s very easy to overlook. Now, is the article impressive as a technical achievement for AI? Undoubtedly. Is it a good article? No. It’s the kind of article that, if I received it as an editor, I’d send back to the writer demanding a total rewrite, while privately wondering if they were a dumb robot.

The other G/O Media AI articles are also failures. An AI-generated article published at The Takeout promising “The Most Popular Fast Food Chains in America Based on Sales” contains no sales figures, offers no explanation of whether the list is ascending or descending, and describes what the restaurants are known for in generic, repetitive ways that don’t make sense (McDonald’s is “known for their iconic golden arches,” and the KFC description, “Famous for its finger-lickin’ good fried chicken,” reads like sponsored content). If a freelance writer sent that to me, I wouldn’t just send it back for a rewrite; I wouldn’t publish it at all. But G/O Media did.

Why? The company is candid about it. Brown, the editorial director, said he’s learned that AI content “will, at least for the moment, be well-received by search engines.” (Note: Google’s search liaison denied Brown’s claim, saying, “This isn’t correct. Our systems are looking at the helpfulness of content, rather than how it is produced, as we explained recently.”)

But it’s true that these articles are garnering some traffic. The Star Wars story received over 10,000 page views in a matter of days: not an impressive number by digital news site standards, but not a trivial one either. And if G/O Media scales this model, perhaps putting out hundreds of such articles a day with minimal input from humans, the collective ad revenue those eyeballs generate could, theoretically, be substantial. It’s likely a particularly appealing model for a company that, after being purchased by a private equity firm, unleashed round after round of layoffs. And while G/O Media denies that it plans to reduce its editorial head count, the logic of the company’s AI profit strategy would naturally push it in that direction.

Embracing largely uncurated AI as an editorial strategy shows disdain for the very idea of public enlightenment.

Spamming the internet with AI-derived content is a myopic business decision for any serious media enterprise. It could be a short-term win for traffic, but it’s hard to see how it could be a long-term win for the brand if the company becomes known as a low-quality bot content farm. Over the longer term, it’s possible that Google’s algorithm will catch on and downgrade the sites’ rankings in search results. And mass-producing bad articles will demoralize the staff writers at G/O Media’s sites, who already resent the way AI-generated articles jeopardize their brands’ reputations. G/O Media is not the only media brand using AI, but aside from CNET’s hilariously failed experiments, the company has probably been the most brazen so far in generating content with minimal human input.

On the level of principle, things look worse. A media company isn’t supposed to be a shell operation for duping readers into generating advertising revenue. It’s supposed to offer actual value to readers, based on a good faith attempt at sharing quality information. That value cannot be generated singlehandedly by any AI that currently exists, which, even in its most advanced form, has no ability to comprehend reality or distinguish truth from falsehood. Moreover, even though the media sector is private, everybody who works in media knows that their work is inextricably tied to the public good of reliable information. Embracing largely uncurated AI as an editorial strategy shows disdain for the very idea of public enlightenment.

The longer-term, more complex question is what role AI might play at media companies that understand and respect the fact that chatbots cannot be trusted as reliable arbiters of information but can speed up research and suggest templates for writing. Google is already showing media outlets such as The New York Times what its AI technology can do to help journalists write stories. But as AI grows more advanced, any private company will lean toward using AI to do what humans can do, at a lower cost, even if AI can’t do it as well. The culprit here isn’t AI, but the kind of economic system that drives companies to use AI to do things it can’t and shouldn’t be directed to do.
