My people, hello!
I’m very excited to share a recent conversation I had with Albert Fox Cahn, an expert on human rights and surveillance technology — and someone who has been sounding the alarm on the looming and ever-present dangers of artificial intelligence technology.
These days, it feels like we’re hurtling toward a future filled with AI. And while that excites some folks, others — like Cahn — are warning the world about what a robot-driven society would likely mean.
In the 40-minute interview below, I ask the founder of the Surveillance Technology Oversight Project about potential hiding places for the robot apocalypse, pop culture references to teach the public about AI, his ongoing rivalry with New York Mayor Eric Adams, the use of AI to police abortion, and much more.
The transcript below has been edited for length and clarity.
JJ: I’ll start with an easy question: When it comes to the looming robot apocalypse, are you more of a “hide in the mausoleum” kind of guy or a “hide in the bunker” kind of guy? And should we talk about weapons of choice, baseball bats and what have you? Or should we just willfully give in to our inevitable robot overlords?
AFC: Yeah, here’s the problem. I was already focusing so much on my zombie survival plan from Covid, and now I have to update it to deal with the robot apocalypse, and creating the homemade electromagnetic pulse takes more time than people realize. But the truth is that, as much as we’re thinking right now about this sort of existential threat that general artificial intelligence could pose to us years — or, more likely, decades or centuries — down the road, there are really mundane threats being posed by artificial intelligence today, things that are having a very real impact on people’s lives. There are ways that artificial intelligence is compounding racism, expanding barriers to employment for people with disabilities, accelerating police injustice. These aren’t hypotheticals for the future; these are AI threats that are happening already. And so most days when I’m not looking for a low-cost bunker somewhere in New Zealand, I tend to be thinking: What are the ways we can combat those threats today?
JJ: New Zealand … writing that down. You have to have a sense of foresight to understand how AI could potentially impact our future. How’d you come into this infatuation with emerging technology?
AFC: I often joke that I was bitten by a radioactive surveillance camera as a child. But the truth is, I was just a giant nerd. As a child growing up in New York City, I had two hobbies: I spent a lot of time building computers, and I spent a lot of time protesting the New York Police Department. And as a high school student and a college kid, especially in the aftermath of 9/11, I started to see the really terrifying rise of surveillance technology by local police. I would go to a protest and see officers pointing camcorders at me — just photographing me, recording me, simply for exercising my First Amendment right to protest in public peacefully. And so that early experience really imprinted on me this idea that there’s something really dangerous for democracy when this type of surveillance goes unchecked.
Then I went to law school, became a corporate lawyer, realized I was terrible at being a corporate lawyer and wanted to get away from it. And I started to work at a Muslim civil rights organization around the same time that then-candidate Trump became a nominee. And what I saw was just alarming and heartbreaking. There were things like the Muslim ban, and the surge in hate crimes, and all the ways that Trump was fanning the flames of intolerance and Islamophobia. I saw the ways that the surveillance apparatus we’d built up in the decades since 9/11 had transformed into this really terrifying tracking system that could reach into the most intimate aspects of our lives. And that’s what inspired me to found Surveillance Technology Oversight Project back in 2019.
JJ: One of the biggest proponents for hurtling toward the AI future is New York Mayor Eric Adams. What are some of the more troubling AI implementations that he has pursued?
AFC: New York is emblematic of what happens when you bring together a lot of money, a lot of enthusiasm and a lot of half-baked ideas for what technology can accomplish. And Eric Adams has really been the champion of half-baked technology ideas, including some recent examples with AI. The mayor has backed the deployment of these really dystopian surveillance robots — a four-legged drone called Digidog, which basically looks like it’s straight out of a “Black Mirror” episode. And then we saw a 400-pound robot — what I lovingly call a trash can on wheels — being rolled out with its array of cameras. This is something you could imagine possibly being helpful, but the mayor rolled it out in Times Square — the most heavily surveilled part of New York City, where there are cameras on basically every wall. So with a lot of these technology stunts from the mayor, there have been real costs and real causes for concern, but no clear upside.
And perhaps the goofiest and most easily lampooned idea was the mayor’s use of AI to create deepfakes of himself. As part of New York City’s outreach for an upcoming event, he used his voice to train a robot to sound like him — because he wanted to have foreign language access, to pretend that he spoke other languages. Now, foreign language access is huge; New York invests a huge amount in making sure we can reach New Yorkers in whatever language they speak. But what was key here was the mayor wanted to pretend — wanted to lie to New Yorkers and mislead them — that he could speak these languages himself.
He paid tens of thousands of dollars to make these automated calls with an AI model that was trained to sound like him. Not only is this creepy, but it also had a number of errors in the translation. And this is happening at a time when there are thousands of people on the city’s payroll who are native speakers of all of these languages. This wasn’t about effective communication — this was about visibility for the mayor ahead of what will likely be a contested primary.
JJ: That speaks to the political dubiousness that’s possible with AI, right? It shows we’re creeping toward more overt political manipulation.
AFC: There’s a reason why politicians invest a ton of time and effort in trying to master another language. It’s because a lot of people really appreciate it if you’re willing to invest that much time and energy to try to become closer with their community. And when you use city dollars to try to mislead people to think you’ve done that when you haven’t, that’s really troubling to me.
JJ: Has your organization spoken with legislators about what sort of policies ought to be implemented to stop some of these nefarious uses of AI?
AFC: We’ve been working really hard with legislators up in Albany to put together a framework for legislation to regulate AI here in New York state. I’m calling for a model that isn’t focused on trying to regulate specific types of technologies, but on shifting the burden of proof whenever AI is used. In a normal case, if you want to say someone trespassed on your front lawn, you as the plaintiff bear the burden of proving all of the elements of your case. But in AI litigation, that creates a huge roadblock, because AI is often a black box; we don’t know how a lot of machine learning systems work. So right now, large companies and government agencies kind of have a “get out of jail free” card any time they get sued for using AI.
JJ: So under this proposal, technology companies would have to show judges, perhaps, that their algorithms don’t discriminate?