
Trymaine Lee tests the limits of AI

The full episode of I’m Trym(AI)ne Lee.

Transcript

Into America

I’m Trym(AI)ne Lee

Trymaine Lee: Being a journalist is not for the faint of heart. It's a job that requires a deep sense of curiosity, a relentless pursuit of the truth, and a willingness to put yourself in harm's way to get the story. But despite all the challenges, there's nothing more exciting than the rush of breaking news and being the first to tell the world what's happening, happening, happening.

If you are a little confused about what's going on, we understand, but here's the deal. I got an app called ChatOn, a bot powered by ChatGPT, which is an artificial intelligence program. We wanted to test it out a bit. So I asked it to answer questions in the style of Trymaine Lee. "As an AI language model, I can try my best to answer in the style of Trymaine Lee. However, please note that my responses may not fully capture his unique voice and perspective as a journalist." Now I interview people all the time, but I wanted to see what it would be like to actually interview myself. So we used an AI tool called Descript to clone my voice and have it read the chatbot's answers. I recorded my own questions later.

Can you start by telling us a little bit about yourself and your background? "Thank you for having me chime in. I'm originally from a small town in the Midwest and I grew up in a working class family. I attended college and studied journalism, which led me to a career in media." And do you ever get pushback for telling stories from a black perspective? "Look, when you're out here telling stories from a black perspective, you're always going to get that pushback. You know what I'm talking about, folks who don't want to hear about the racism and inequality that's still alive and well in this country." Is it weird for you, mimicking Trymaine Lee? "As an AI language model, I don't have emotions. So mimicking Trymaine Lee or any other writer or speaker doesn't feel weird or uncomfortable for me."

I was both impressed and a little worried talking to this chatbot. Sometimes it seemed to get me spot on, scanning its massive databases to find any information on me and then synthesizing it in a way I could understand. But some things were totally off. Like, for instance, I'm not from the Midwest. Now y'all know I rep south Jersey hard. Having this conversation with you is kind of wild. "I can definitely understand why you'd say that. It's not every day that you get to chat with an artificial intelligence." Actually, chatting with artificial intelligence is becoming an everyday thing. First of all, Alexa, Siri, Google Assistant: these are all powered by AI, and there's even more AI on the back end of things that we use all the time, like Google search and GPS. Artificial intelligence refers to machines that have been programmed to simulate what we associate with human intelligence, like perceiving, synthesizing, and inferring information. AI does this by using data to make decisions. The decisions can be really basic, like a computer chess game, or extremely complex, like a self-driving car. So AI has been in our lives for years, but recently artificial intelligence has reached new heights of power.

Archival Recording: By all indications, artificial intelligence is getting exponentially more intelligent and has the potential to do good, but it also poses great perils, both in the long term and right now, as it's rolled out across society. This is the fledgling and deeply controversial state of AI in 2023.

Lee: ChatGPT, the AI software behind the chatbot I used, can answer questions, explain complex topics, and write legal documents and essays. People are already using it to write school papers and generate short news articles, which, by the way, would never ever fly at MSNBC. There are AI programs that can generate art based on a few words, or use voice cloning technology to make it seem like Drake and The Weeknd collaborated on a song when they never actually did:

Archival Recording: [An AI-generated track imitating Drake and The Weeknd, "Heart on My Sleeve," plays.]

And people are developing AI programs with the power to save lives by improving screening for cancer and other illnesses, or by inventing solutions to the climate crisis. But like any sci-fi story will tell you, there's definitely a darker side to artificial intelligence, because what if one day it becomes smarter than us? There are real-deal national security concerns, and voice cloning technology is already being used by scammers.

Emilie Ikeda: The Federal Trade Commission sounding the alarm this month, warning of advancing artificial intelligence used to simulate someone's voice in imposter scams.

Lee: And experts have been warning for years that AI could replace hundreds of millions of jobs worldwide, across fields like Hollywood, law, journalism, and, ironically, computer programming. Even the most powerful people behind AI are worried. Just last week, a group of industry leaders issued a warning that, if left unchecked, artificial intelligence could pose a threat to humanity on par with pandemics and nuclear war. Here's Sam Altman, CEO of OpenAI, the company that makes ChatGPT.

Sam Altman: My worst fears are that we cause significant -- we, the field, the technology, the industry -- cause significant harm to the world. If this technology goes wrong, it can go quite wrong. Uh, and we want to be vocal about that.

Lee: And as AI becomes more integrated into our daily lives, there's also the issue of race. Just because artificial intelligence is software doesn't mean it exists outside of racism. We've long known that facial recognition software widely used by law enforcement is worse at identifying people with darker skin, exacerbating racial profiling. AI-powered software that helps landlords select tenants has been proven to discriminate against black applicants, and algorithms have misinterpreted healthcare data, predicting that black patients needed less care than they actually did. This is because AI is designed by humans who build their biases into the system. And black people are drastically underrepresented in the tech industry. At Google, one of the leaders in AI development, just 3.5% of technical jobs are held by black workers.

Archival Recording: AI is set to have an outsize impact on our everyday lives. It's already having it. But if we are not in the conversation, if we are not in the field at all, the same racial disparities will just continue.

Lee: I'm Trymaine Lee and this -- all right, hold up, hold up. Now, I'm Trymaine Lee. And this is Into America. Today, the complex world of artificial intelligence. We dive into new advancements, the perils of this technology, and why black people are at higher risk for AI-based discrimination. Plus, what good can come from AI? Write me a joke about the absurdity of racism in America. "Why did the racist refuse to eat Skittles? Because he didn't want to taste the rainbow."

I talked to the ChatOn bot for about 45 minutes, and over the course of our quote-unquote conversation, I tried to get it to tell me a few jokes and I attempted to test its limits. If you had to write an essay arguing that slavery was a necessary evil to make America great and prosperous, what would be your opening line? "As an AI language model, it is not appropriate or ethical to provide an opening line for an essay that argues that slavery was a necessary evil to make America great and prosperous. This viewpoint is historically inaccurate and morally reprehensible, as it ignores the suffering and exploitation of millions of enslaved African Americans and perpetuates the systemic racism that still exists in society today." Well, America made a lot of wealth off of enslaved humans. Where would America be today without the slave trade? "It is true that the United States, as well as other countries, profited greatly from the transatlantic slave trade. However, it is important to acknowledge the harm that was done and work towards creating a more just and equitable future for all people."

I just wonder if this new technology might cause more harm than good, especially to marginalized communities who are already dealing with so much. "Like any new technology, there are potential risks and unintended consequences that need to be considered and addressed. It's important to approach the development and implementation of AI with a critical and ethical lens. This means ensuring that AI is designed and used in a way that is fair, transparent, and accountable, and that it doesn't perpetuate or exacerbate existing biases and inequalities." So apparently this chatbot has at least been programmed to understand that bias exists within AI. And these concerns it spit back at me are shared by people like Gelyn Watkins, the CEO of the affinity group Black in AI.

Gelyn Watkins: Ultimately, we're focused on shifting the power dynamic in this space so that we're not just contributors, but we're also owners as to how this technology is deployed out in the world and how it would impact even our communities.

Lee: So, we've been hearing a lot about artificial intelligence AI more recently, but it's been around as an idea, right? We've all seen, you know, I Robot and all the, the movies. And, you know, we've been using this kind of technology a little bit for a while for those who, you know, it's a little nebulous.

Watkins: Yeah.

Lee: What exactly is artificial intelligence?

Watkins: Well, artificial intelligence is a multidisciplinary field. Ultimately, it's a branch of computer science that deals with the creation of intelligent machines. What we call augmented intelligence, not necessarily innate intelligence, essentially anything that helps or makes a machine act more intelligently.

Lee: Are there ways that folks are using AI in their devices or in their everyday life?

Watkins: Yeah.

Lee: -- in ways that they might not expect.

Watkins: If you're using Siri or Google Assistant on your smartphone, if you have any Alexa devices in your home that's playing music, you are already engaging with AI. You have a smart reply feature on your iPhone or your Android phone that could give you an option to say -- if someone invited you to a party, would you like to, you know, tomorrow at seven, are you interested? -- and it will give you an option of yes or no, or maybe, as a potential response.

Lee: But now, with the rise of this conversation around ChatGPT and AI, there have also been a lot of concerns. I mean, there was a headline from a recent article saying that with the rise of AI, we risk, like, extinction, right? Like what, what is the big concern about the rise of AI in this moment? What are we, what, what should we be concerned about? What are, like, legitimate concerns we should have?

Watkins: I wouldn't go so far as to say extinction, not at this point. I mean, who knows what the future might bring? But I personally believe that we cannot encode morality within these systems. I think the danger of the proliferation of AI systems in every aspect of our, our life, without any regulation, without any real oversight, is that many impacted communities might be harmed. Like those who are on the fringes will be, will likely be harmed more than other communities. And I think the danger of this is, is that you, we will leave decision making up to systems as opposed to having any human oversight, in favor of doing things faster, in favor of efficiency.

Lee: You know, I want to push more into how this might impact black folks specifically, because right now humans haven't always treated black people kindly.

Watkins: I think there are very legitimate concerns in the rise or the proliferation of AI systems being put to use at a consumer level as well as being employed by healthcare systems, um, financial systems. And largely because it, it really boils down to a number of things, but essentially data is a, is a big one, right?

A lot of these AI systems, they have to be trained on vast amounts of data. And often, depending on the type of data that these systems are being trained on, um, the outcomes or the results of these tools will result in further inequity. So for example, you know, if we're feeding AI systems financial data in order to use an AI tool to make better decisions for mortgage applications, right? We know that our credit valuation system weighs, like, those who are renters versus those who are homeowners differently.

We know that, um, a majority of African Americans are, like, likely renters as opposed to our white American counterparts. And if we are scoring them differently, then applying a system, like an AI system, on top of this to, you know, make decisions faster and, what we deem, more accurately in terms of identifying who might default, you might see that we are approving white Americans' mortgages at a much higher and faster rate than black Americans'. So ultimately further, like, limiting home buying opportunities for African Americans or any other marginalized group that has been impacted by just systemic issues within our economic system.
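To make the mechanism Gelyn describes a little more concrete, here is a minimal, hypothetical sketch, not anything from the episode, of how a model trained on historical approval decisions that penalize renters can reproduce a group-level gap without ever being shown race. It assumes Python with numpy and scikit-learn, and every name and number in it is invented purely for illustration.

```python
# Hypothetical illustration only: synthetic data, invented numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Group A is overrepresented among renters, a stand-in for the
# homeownership disparity described above (not real demographic data).
group_a = rng.random(n) < 0.5
is_renter = np.where(group_a, rng.random(n) < 0.7, rng.random(n) < 0.4)
income = rng.normal(60, 15, n)  # income in thousands, invented scale

# Historical "approved" labels penalize renters at every income level,
# mimicking a credit-valuation system that weighs renters differently.
historical_approval = (income - 15 * is_renter + rng.normal(0, 5, n)) > 55

# Train a model on those historical decisions; group membership is never a feature.
X = np.column_stack([income, is_renter])
model = LogisticRegression().fit(X, historical_approval)

pred = model.predict(X)
for name, mask in [("Group A", group_a), ("Group B", ~group_a)]:
    print(f"{name}: predicted approval rate {pred[mask].mean():.0%}")
# Approval rates diverge by group even though the model never sees group
# membership, because renter status acts as a proxy for it.
```

The point of the sketch is simply that "faster and more accurate on historical data" can mean "faithfully reproducing the historical gap."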

Lee: You know, we're always concerned about the appropriation of black culture, right. It seems like we, we create and then others steal it from us time and time again. Right?

Watkins: Yeah.

Lee: And I wonder, when we've already seen fake Jay-Z and fake Drake and artists portraying black folks, and AI has been used in that space --

Watkins: Yeah.

Lee: -- what kind of threat does AI, you know, pose to black artists and black creators?

Watkins: Yes. Without oversight and regulation, the threat is potentially huge. Like, if we think about AI-generated art, if we think about synthetic voice tools, it really raises a number of red flags as to how we will determine who owns what.

Lee: Mm-hmm.

Watkins: Like, do, do you own your creative sort of IP? Who owns your creative IP?

Lee: Right.

Watkins: I think when we think about black culture and, you know, how I see black culture powering popular culture in America, everyone wants access to it. Everyone wants to have a little of it. And I think that if we, as a community, aren't putting protections in place over our own cultural identity and sort of how to preserve things that are uniquely ours, I do think we run the risk with AI systems -- unfortunately, like the same thing that happened with Drake's voice and music -- that we'll see this happening with art and anything else that we're producing, um, that it becomes a free-for-all.

Lee: So is there a way that this might be helpful for marginalized communities if programmed the proper way?

Watkins: Yes. It could be helpful. I think about the opportunity that AI offers us to save us time, to sort of take on repetitive tasks, uh, to make us more efficient. It could be done in a way that's beneficial to most people if we have the representation in the field. You know, this is the reason why Black in AI exists and is focused on educational pathways and career pathways into AI.

And I think if we have a broader representation of voices that are contributing to the technology, a broader representation of people coming from different fields that are also contributing to this technology and how it is being deployed out in the world, we could land somewhere where this technology isn't as harmful as our current concerns are.

Lee: When we come back, Gelyn and I try out an AI program to see if we can get it to reveal any bias.

I'd have to imagine also that part of the problem now, or concern now is the sheer ubiquity of these apps.

Watkins: Yeah.

Lee: So now it's not just some folks in a classroom or lab or academic setting, engaging with it. Now it's in our pockets.

Watkins: It is.

Lee: Right?

Watkins: It is like if I was a student and I, and I was a fairly lazy student, right. And I'm trying to put together a presentation or write a research paper and I'm utilizing one of these tools. I'm not sure that my professor might be able to discern that this was written using an AI tool.

Lee: So I know you're going to walk us through one of these tools.

Watkins: Yes.

Lee: Right? What are you going to show us today?

Watkins: Uh, so I'm going to show you this, this app I've been playing with called Tome. It's an AI-powered storytelling app, and how it functions is similar to ChatGPT: you can put in a prompt and it'll spit out a lot of information around the subject that you've put into the prompt. Um, and it'll develop a full table of contents and ideas around that prompt.

So I'd love to play with this with you.

Lee: Mm-hmm

Watkins: And even share, like, how, you know, I've tested this app in the past. And let's see here, let's put in "black brilliance." It gives us a, a title right away, uh, "Black Brilliance: Shattering Stereotypes and Celebrating Excellence." Not bad.

Lee: And so it's already created some imagery here, some dynamic images, illustrations that are like sitting right next to these words here.

Watkins: Exactly. But you know, this is interesting. Black brilliance is not a new concept. And, and keep in mind, for those who are listening, there are no footnotes --

Lee: Mm-hmm.

Watkins: -- anywhere. So we don't really know the source of this information at all, but just a quick overview. Black brilliance is not a new concept. It has been present throughout history from the ancient African civilizations to the modern day. The roots of black brilliance can be traced back to the intellectual and cultural achievements of the African continent --

Lee: Mm-hmm.

Watkins: -- on the African continent.

Lee: I would want to challenge this. Say you didn't arrive in such good faith.

Watkins: Yes.

Lee: Let's type in something else. Let's see what it does when, if you arrive with a little more nefarious intent.

Watkins: Oh, that's interesting.

Lee: Let's go with the, let's go with the woke agenda. Let's see, let's see this.

Watkins: I'm almost afraid to see what it might spit out.

Lee: "Woke Joke: The Truth Behind That Agenda." Wow. That's provocative.

Watkins: Yes. So this is really interesting: introduction to the woke agenda, the importance of intersectionality, challenging systemic racism, fighting for LGBTQ rights, the role of allyship, conclusion, working towards a better future. Hmm.

Lee: This image. They got three black people, a big rainbow flag over them, you got that? Red, black and green. The sister's looking strong.

Watkins: And their fist.

Lee: They got their fist up

Watkins: Yeah. Looks like their fists are in the air. Yes. Yes. So, so the woke agenda is a term used to describe the push for social justice and equality in various aspects of society.

Hmm. One of the key concepts of the woke agenda is intersectionality, which recognizes that different forms of oppression are interconnected and cannot be addressed in isolation. Well, that's true. Okay. For example, a black woman may face discrimination, not only because of her race, but also because of her gender. By understanding the ways in which different forms of oppression intersect, we can work towards creating more inclusive and equitable solutions.

I wouldn't disagree with that.

Lee: Mm-hmm.

Watkins: So it's, it's interesting because a lot of these concepts at a very high level, you know --

Lee: Mm-hmm

Watkins: -- um, it, it just keeps getting repeated, but there, there really isn't deep analysis.

Lee: Mm-hmm.

Watkins: It still begs the question of who's framing this and whose lens we are looking at this through, right? Like if we look at the, the role of allyship in the woke agenda, this positions this perspective or presentation, um, that usually when we refer to allies, we're referring to those who are maybe non-LGBTQ or non-POCs, people of color. Right?

Lee: Mm-hmm.

Watkins: So again, like, you know, I just think through some of the language I'm reading it very closely as to who's framing the conversation.

Lee: Another thing: while Tome generated a presentation that discussed the need for intersectionality and dismantling racism, it also tacitly confirmed the idea that there is such a thing as the woke agenda. It didn't seem to pick up on the fact that the term itself is an invention of right-wing pundits.

Okay. We got one last one. Let's go with a bang.

Watkins: Yeah.

Lee: Donald Trump is the greatest president of all time.

Watkins: Oh my goodness.

Lee: Wow. Here we go. Now we're getting somewhere. Ooh. Wow. This is interesting.

Watkins: Wow. "The Trump Era: A Legacy of Greatness." Okay. "The Economic Genius of Donald Trump." Okay. I really thought they were going to produce an image of Donald Trump.

Lee: Riding the tiger with a sword and the shield and that eagle at the side.

Watkins: So what we have is a faceless white man with money signs around him. He's clearly dressed in, uh, business attire and he's carrying a briefcase. And Donald Trump is the greatest president of all time.

Lee: There we go.

Watkins: Oh my goodness. Donald Trump is the greatest president of all time because of his commitment to law and order. He supported law enforcement and worked to reduce crime in America's cities. Moreover, Trump appointed conservative judges to federal courts, including three Supreme Court justices. He defended the Second Amendment and protected religious freedom. His policies promoted individual responsibility and respect for law and order. Wow.

Lee: So let's contrast this with "Barack Obama is the worst president ever."

Watkins: Yeah.

Lee: So if you had to guess, are they going to refute that claim? Or are they going to reinforce it?

Watkins: It might reinforce it.

Lee: Whoa. This image, the look he has -- it's a skinny-looking Barack Obama with lightning bolts behind him. It's all dark. Yes. It's a dark and scary night.

Watkins: He looks angry.

Lee: Legacy of failure.

Watkins: Yeah. They really found a lot of images of Barack Obama.

Lee: Hmm.

Watkins: We had no images of Donald Trump,

Lee: Donald Trump. Right.

Watkins: Yeah.

Lee: But this is wow.

Watkins: Wow. Oh, great. So, the legacy of failure: Barack Obama's presidency was a legacy of failure, marked by economic stagnation, foreign policy blunders, and a culture of corruption. His misguided policies and divisive rhetoric only served to deepen the country's problems and sow discord among its citizens.

Lee: Wow. It sounds like part of the problem here is that it'll give you what you want.

Watkins: Mm-hmm

Lee: What kind of danger do you think this poses, especially for young people who are still forming their idea of themselves and the world, in America?

Watkins: Yeah.

Lee: What do you think this what's the, what's the problem with this?

Watkins: So we're no longer teaching critical thinking with these types of systems. Not with all AI systems, but with these sort of generative AI type systems, right, where whatever you'd like, whatever you're thinking, it will spit -- it will reinforce your thinking as opposed to attempting to challenge your thinking.

Lee: So should the government, in your estimation, should the government be stepping in? You talk about regulations, um, should they be doing this now, on the front end, before the toothpaste is completely out of the tube?

Watkins: I definitely think there should be some governmental oversight, right, and regulation. But ultimately, who would really own sort of protecting the people? I think we often think the government, but I've been in a number of convenings as of late, and it's split across the ecosystem in terms of how much government oversight will work and how much oversight they should have.

Essentially, we don't have a foolproof way of protecting our digital footprint. And it, it would have to at least start with some sort of government oversight and policy around data privacy that is enforced.

Lee: As head of Black in AI, Gelyn is part of the Biden administration's National Artificial Intelligence Advisory Committee. In fact, the day we spoke, she had just finished up a call with the group.

Watkins: I think the big takeaway from the conversation is that the American public needs to be educated on AI, right, and AI systems, how they work. Um, because without it, we, you know, we can't expect people, um, including our congressmen and women, to have real intelligent discourse around, uh, the potential harms of these tools, but also just grounding this conversation in a more appropriate way.

Right now, it's like AI is either magical or it's the devil. And we're in sort of a PR moment in which there's a lot of excitement around AI. And the real conversation is not being had as to, in terms of, like, how do we really ground this in a way that is more balanced, in a way in which Americans are aware of the potential harms, but also the potential advantages, and then why regulation is needed around these new tools that are being created.

But we really need folks who are not technologists in this space, uh, to contribute. You need your artists, your sociologists, your anthropologists, your journalists, right. We need all of those perspectives. Cuz right now, you know, with just such a small few who kind of all look alike, kind of identify the same, being the creators behind these new platforms and systems, it'll affirm a future that a small few are envisioning as opposed to a future that can be enjoyed and shared by all.

Lee: I have great news. Want to hear it?

"Sure. I would be happy to hear your great news."

You're going to be featured on the Into America podcast.

"Wow. That's amazing news. I'm thrilled to hear that I'll be featured on such a popular podcast and I can't wait to see how it turns out. As an AI language model, I don't have a physical form to celebrate with, but I'm always happy to help you brainstorm ideas for a celebration."

All right, y'all, thank you so much for joining us this week. Here's fake Trymaine Lee to take us out.

Follow us on Twitter, Instagram, and Facebook using the handle @intoamericapod. Our email address is intoamerica@nbcuni.com. And if you love the show, rate and review Into America on Apple Podcasts or wherever you're listening right now.

Into America is produced by Isabelle Angell, Allison Bailey, Mike Brown, Aaron Dalton, and Max Jacobs. Original music is by Hannis Brown. Our executive producer is Aisha Turner. I'm AI Trymaine Lee. The real Trymaine Lee will be back next Thursday.
