My people, hello!
I’m very excited to share a recent conversation I had with Albert Fox Cahn, an expert on human rights and surveillance technology — and someone who has been sounding the alarm on the looming and ever-present dangers of artificial intelligence technology.
These days, it feels like we’re hurtling toward a future filled with AI. And while that excites some folks, others — like Cahn — are warning the world about what a robot-driven society would likely mean.
In the 40-minute interview below, I ask the founder of the Surveillance Technology Oversight Project about potential hiding places during the robot apocalypse, pop culture references that teach the public about AI, his ongoing rivalry with New York Mayor Eric Adams, the use of AI to police abortion, and much more.
The transcript below has been edited for length and clarity.
JJ: I’ll start with an easy question: When it comes to the looming robot apocalypse, are you more of a “hide in the mausoleum” kind of guy or a “hide in the bunker” kind of guy? And should we talk about weapons of choice, baseball bats and what have you? Or should we just willfully give in to our inevitable robot overlords?
AFC: Yeah, here’s the problem. I was already focusing so much on my zombie survival plan from Covid, and now I have to update it to deal with the robot apocalypse, and creating the homemade electromagnetic pulse takes more time than people realize. But the truth is that, as much as we’re thinking right now about this sort of existential threat that artificial general intelligence could pose to us years — or, more likely, decades or centuries — down the road, there are really mundane threats being posed by artificial intelligence today, things that are having a very real impact on people’s lives. There are ways that artificial intelligence is compounding racism, expanding barriers to employment for people with disabilities, and accelerating police injustice. These aren’t hypotheticals for the future; these are AI threats that are happening already. And so most days, when I’m not looking for a low-cost bunker somewhere in New Zealand, I tend to be thinking: What are the ways we can combat those threats today?
JJ: New Zealand ... writing that down. You have to have a sense of foresight to understand how AI could potentially impact our future. How’d you come into this infatuation with emerging technology?
AFC: I often joke that I was bitten by a radioactive surveillance camera as a child. But the truth is, I was just a giant nerd. As a child growing up in New York City, I had two hobbies: I spent a lot of time building computers, and I spent a lot of time protesting the New York Police Department. And as a high school student and a college kid, especially in the aftermath of 9/11, I started to see the really terrifying rise in the use of surveillance technology by local police. I would go to a protest and see officers pointing camcorders at me — just photographing me, recording me, simply for exercising my First Amendment right to protest in public peacefully. And so that early experience really imprinted on me this idea that there’s something really dangerous for democracy when this type of surveillance goes unchecked.
Then I went to law school, became a corporate lawyer, realized I was terrible at being a corporate lawyer and wanted to get away from it. And I started to work at a Muslim civil rights organization around the same time that then-candidate Trump became the Republican nominee. And what I saw was just alarming and heartbreaking. There were things like the Muslim ban, and the surge in hate crimes, and all the ways that Trump was fanning the flames of intolerance and Islamophobia. I saw the ways that the surveillance apparatus we’d built up in the decades since 9/11 had transformed into this really terrifying tracking system that could reach into the most intimate aspects of our lives. And that’s what inspired me to found the Surveillance Technology Oversight Project back in 2019.
JJ: One of the biggest proponents for hurtling toward the AI future is New York Mayor Eric Adams. What are some of the more troubling AI implementations that he has pursued?
AFC: New York is emblematic of what happens when you bring together a lot of money, a lot of enthusiasm and a lot of half-baked ideas for what technology can accomplish. And Eric Adams has really been the champion of half-baked technology ideas, including some recent examples with AI. The mayor has backed the deployment of these really dystopian surveillance robots — a four-legged robot called Digidog, which basically looks like it’s straight out of a “Black Mirror” episode. And then we saw a 400-pound robot — what I lovingly call a trash can on wheels — being rolled out with its array of cameras. This is something you could imagine possibly being helpful, but the mayor rolled it out in Times Square — in the most heavily surveilled part of New York City, where there are cameras on basically every wall. So with a lot of these technology stunts from the mayor, there have been real costs and real causes for concern, but no clear upside.
And perhaps the goofiest and most easily lampooned idea was the mayor’s use of AI to create deepfakes of himself. As part of New York City’s outreach for an upcoming event, he used his voice to train a robot to sound like him — because he wanted to have foreign language access, to pretend that he spoke other languages. Now, foreign language access is huge; New York invests a huge amount in making sure we can reach New Yorkers in whatever language they speak. But what was key here was the mayor wanted to pretend — wanted to lie to New Yorkers and mislead them — that he could speak these languages himself.
He paid tens of thousands of dollars to make these automated calls with an AI model that was trained to sound like him. Not only was this creepy, but the translations also contained a number of errors. And this is happening at a time when there are thousands of people on the city’s payroll who are native speakers of all of these languages. This wasn’t about effective communication — this was about visibility for the mayor ahead of what will likely be a contested primary.
JJ: That speaks to the political dubiousness that’s possible with AI, right? It shows we’re creeping toward more overt political manipulation.
AFC: There’s a reason why politicians invest a ton of time and effort in trying to master another language. It’s because a lot of people really appreciate it if you’re willing to invest that much time and energy to try to become closer with their community. And when you use city dollars to try to mislead people to think you’ve done that when you haven’t, that’s really troubling to me.
JJ: Has your organization spoken with legislators about what sort of policies ought to be implemented to stop some of these nefarious uses of AI?
AFC: We’ve been working really hard with legislators up in Albany to put together a framework for legislation to regulate AI here in New York state. I’m calling for a model that isn’t focused on trying to regulate specific types of technologies, but on shifting the burden of proof whenever AI is used. We’d change the default rule that a plaintiff bears the burden whenever they go to court. In a normal case, if you want to say someone trespassed on your front lawn, you bear the burden of showing all of the elements of your case. But in AI litigation, that creates a huge roadblock, because AI is often a black box; we don’t know how a lot of machine learning works. So right now, large companies and government agencies kind of have a “get out of jail free” card any time they get sued for using AI.
JJ: So under this proposal, technology companies would have to show judges, perhaps, that their algorithms don’t discriminate?
AFC: With burden shifting, which is modeled after the European Union’s liability directive, we’d essentially say: Any time you allege an injury by an AI system — if someone refuses to hire you, refuses to rent you a property, refuses to approve you for a credit card, refuses to do anything else on the basis of AI — it’s going to be on the defendant to show that either AI didn’t hurt you or that the AI was operating legally.
JJ: That sounds like AI regulation with some teeth. A lot of these tech companies play their cards close to the vest, and they think of their algorithms as proprietary.
AFC: Yeah — like, I’m sorry, if your racist algorithm is proprietary, that doesn’t give you immunity. I think this is an urgent change we need, and it’s very different from what we see at the federal level and in a lot of states around the country, where the frameworks are nonbinding. The White House just came out with an executive order on artificial intelligence, and I can’t point to one discriminatory system, one act of AI injustice, that will be stopped because they put that pen to paper.
JJ: Right? It’s kind of a list of hopes and dreams.
AFC: Yeah, it’s listing out your hopes and dreams while ignoring the nightmares in front of you.
JJ: You’ve talked about how AI and surveillance technology can be used to police abortion in the post-Roe world. Can you give us some examples?
AFC: Yeah, this is where I would recommend that anyone with a puppy or kitten bring them close, because it gets really depressing to think about this. Prior to Roe v. Wade in 1973, there wasn’t a way for Texas to enforce abortion restrictions in New York state. There wasn’t a way for them to monitor who was going to a clinic, to put a Texas Ranger at every Planned Parenthood. But today, the growth of ubiquitous location tracking and other surveillance technologies makes it possible for anti-choice states to track people’s movements across state lines.
Geofence warrants are a key concern. With just one court order, you can force a company like Google to identify every single user in a specific geographic area. With a geofence warrant, you’re taking a little circle on the map and you’re turning it into a court order. And you’re saying, “Google, everyone inside that circle? We want their data, we want to see their movement.” And that can be thousands of people.
There are other tools, like reverse keyword warrants. Again, with just one court order, you can get Google to identify thousands of people by saying, “Hey Google, identify anyone who searched for the following address, or the following name, or the name of the following abortion drug.” This is a terrifying way to target telehealth providers. So the ability to use these mutated court orders, which have expanded over the years, to conduct a digital dragnet makes me really terrified about the potential criminalization of out-of-state abortion access in the future.
JJ: Some people talk about AI as the end of our world as we know it, and other people talk about it as the beginning of an exciting new world. Where do you fall on that spectrum?
AFC: I’m a historian of technology. I know history is replete with examples of people thinking that relatively mundane technologies would be the end of life as we know it. And so I’m someone who thinks that some of these AI tools can help us solve real problems. But there are also everyday ways that AI can compound injustice and fuel inequity. So when we think about the AI landscape, I worry that the doomsday scenarios — the Terminator robot and the Skynet system, the Matrix, and all of those things that decades of science fiction have taught us to think of as the inevitable end state of this debate — take our eyes off the ways that people’s lives are being impacted today.
JJ: What are some books that you can recommend to our readers who are looking to learn about AI?
AFC: “1984” and “Brave New World” are classics that have really helped frame my consciousness when I think about the interplay of technology and society. “Habeas Data” by Cyrus Farivar is one of the best books out there on the interplay of technology and policing. Ruha Benjamin’s work has been foundational in examining the bias that’s inescapable in basically all technical systems. And Meredith Broussard’s “More Than a Glitch” is also really important in analyzing the way that bias is systemically reinforced in technical design.