WIRED Roundup: AI Psychosis, Missing FTC Files, and Google Bedbugs

Oct 30, 2025 5:27 PM

In this episode of Uncanny Valley, we run through the top stories of the week and look closely at complaints filed with the FTC alleging that ChatGPT led the complainants or their loved ones into AI psychosis.

In today’s episode, Zoë Schiffer is joined by senior editor Louise Matsakis to run through five stories that you need to know about this week—from how SEO is changing in the era of AI to how frogs became a protest symbol. Then, Zoë and Louise dive into why some people have been filing complaints with the FTC about ChatGPT, arguing that it has led them into AI psychosis.

You can follow Zoë Schiffer on Bluesky at @zoeschiffer and Louise Matsakis on Bluesky at @lmatsakis. Write to us at uncannyvalley@wired.com.

How to Listen

You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Zoë Schiffer: Welcome to WIRED’s Uncanny Valley. I’m Zoë Schiffer, WIRED’s director of business and industry. Today on the show, we’re bringing you five stories that you need to know about this week. And later, we’ll dive into our main story, about how several people have filed complaints with the FTC claiming OpenAI’s ChatGPT led them or people they love into supposed AI psychosis. I’m joined today by WIRED’s senior business editor, Louise Matsakis. Louise, welcome to Uncanny Valley.

Louise Matsakis: Hi, Zoë. It’s great to be here.

Zoë Schiffer: So Louise, our first story this week is actually one that we worked on together, part of our ongoing collaboration with Model Behavior, and it’s all about how this holiday season, more shoppers are expected to use chatbots to figure out what to buy. I’m curious, before we dive into this: how do you approach your own holiday shopping, Louise, especially if you have absolutely no clue what to get someone?

Louise Matsakis: I am definitely annoying, in the sense that I really pride myself on my gift giving, but we all have people in our lives who are, despite all of that, difficult to shop for. So yeah, I definitely will look around the internet for “10 best things to buy your father-in-law this holiday,” or whatever.

Zoë Schiffer: Yes. Okay. So this year, people are going to be following a slightly different trend. According to a recent shopping report from Adobe, retailers could see up to a 520 percent increase in traffic from chatbots and AI search engines compared to 2024. AI giants like OpenAI are already trying to capitalize on this trend. Last week, OpenAI announced a major partnership with Walmart that will allow people to buy goods directly within the chat window. We know this is a big focus for them. So as people start relying on chatbots to discover new products, retailers are having to rethink their approach to online marketing. For decades, the focus was on SEO, search engine optimization, the dark magic used to increase online traffic, primarily through Google. Now, it looks like we’re entering the era of GEO, or generative engine optimization.

Louise Matsakis: I think GEO in many ways is not really a totally new invention. It’s kind of like the next iteration of SEO. And a lot of the consultants who are working in the GEO industry definitely came from the world of SEO. A big reason that I’m confident this is the case is that, at least for now, we know that these chatbots are often using search engines to surface content. Right? They’re using the same types of algorithms to surf the web that Google does, or Bing, or DuckDuckGo. Clearly, some of the same rules would apply. And also, people are the same. I do think that the way we interact with chatbots is significantly different from the way we interacted with search engines, but the underlying questions we have are pretty similar. Right? Like, why is my boyfriend not texting me back? What’s this weird rash? What do I get for my father-in-law for Christmas? These questions are the same, and so the types of content that brands are trying to get into those answers remain largely the same.

Zoë Schiffer: Right, exactly. But you can imagine that from a retailer’s perspective, this is kind of scary, because even dealing with Google was a huge headache. Every time Google changed the algorithm, the industry would be in upheaval for a little while as everyone tried to understand what Google wanted to see and tailor their content accordingly. So now people are talking to chatbots, and retailers are like, “Oh my gosh, is all of the work that I’ve put into all of these different webpages for naught? Do I need to recalibrate it for this new world?” We actually spoke with Imri Marcus, who’s the CEO of a GEO firm called Brandlight. He estimated that there used to be about a 70 percent overlap between the top Google links and the sources cited by AI tools like ChatGPT. Now, he says, the correlation has fallen below 20 percent. So Louise, if I’m a small business owner of some sort, how am I tailoring my content? What am I doing differently in this new world?

Louise Matsakis: I think you probably have a lot more explanations for how the product could be used. So let’s just say, I don’t know, we’re selling soap. You might have a long bulleted list of different ways the soap could be used: it’s good for bubble baths, it has these acne-fighting properties, or whatever it is. I think you would have all of that spelled out. Whereas before, you might focus more on the brand identity of your website and on how you want to phrase things, because you’re anticipating people coming to the website directly. You’re not anticipating this third party in the middle, where people are asking the chatbot questions.
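Note: To make that concrete, here is a minimal sketch of one tactic GEO consultants commonly recommend: publishing product facts as schema.org-style structured data, so crawlers feeding chatbots can lift attributes directly. The soap product and every field value below are hypothetical, and whether any particular chatbot actually reads markup like this is an assumption, not something established in the episode.

```python
import json

# Hypothetical product record in schema.org Product style. The explicit,
# spelled-out use cases mirror the bulleted list Louise describes.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Lavender Bar Soap",  # made-up product, for illustration only
    "description": "Gentle cleansing bar for daily use.",
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "use", "value": "bubble baths"},
        {"@type": "PropertyValue", "name": "use", "value": "acne-prone skin"},
        {"@type": "PropertyValue", "name": "use", "value": "sensitive-skin handwashing"},
    ],
}

# Emit the JSON-LD <script> tag a product page would embed, so that anything
# parsing the HTML can pick up the structured attributes without scraping prose.
print(f'<script type="application/ld+json">{json.dumps(product, indent=2)}</script>')
```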

Zoë Schiffer: Yeah, exactly. It did give me a little hope, because I feel like we’ve been so in the era of, you look up a recipe and you have to read through a 5,000-word blog about this person’s life story before you actually get to the recipe. And I’m like, just give me the bulleted list of ingredients, the way a chatbot would. Maybe that’s where we’re headed.

Moving on to our next story: our colleagues Lauren Goode and Makena Kelly reported on how the FTC has taken down several blog posts about AI that were published during Lina Khan’s tenure. If you’re familiar with Lina Khan, the former chair of the FTC, and her pro-regulation positions toward the tech industry, you can already imagine why this could be concerning. One of the blog posts that was taken down was about open-weight AI models, which are basically models that are released publicly so that anyone can inspect, modify, or reuse them. The post ended up being rerouted to the FTC’s Office of Technology. Another blog post, titled “Consumers Are Voicing Concerns About AI” and authored by two FTC technologists, met the same fate. And yet another post, about consumer risks associated with AI products, now leads to an error screen that just says “page not found.”

Louise Matsakis: Yeah. This is just really concerning, I think, for a number of reasons. The first is that it’s important, for historical reasons, for national reasons, not to lose this information. It’s totally normal for different administrations to have different opinions, but it’s not normal, or at least it hasn’t been in this country, for blog posts like this to just disappear. And in this case, it’s particularly strange, because one of these posts was about, as you mentioned, Lina Khan’s support for open-weight models and for open source in general, and this is something that members of the Trump administration have also agreed with. I think in this case, Lina Khan is on the same side as people like David Sacks, the Trump administration’s AI and crypto czar.

So that’s what’s kind of mysterious and confusing here: if these are things that ostensibly the Trump administration also agrees with, why erase them? Is it about erasing Lina Khan’s legacy? Is it about wanting to get rid of any mention of things that happened during the Biden administration? It’s sort of difficult to parse the logic, and I think it leaves businesses and tech companies kind of confused about where the administration stands. The point of these blog posts is, yeah, to inform the public, but they also serve as regulatory and business guidance, a way for companies to understand: we get that maybe a law has not been passed about this, or maybe it’s not clear whether this practice is illegal, but it seems like it could be, right? Or it seems like this is the way this administration is interpreting the law. Otherwise, you’re kind of just left in the dark.

Zoë Schiffer: It’s worth pointing out that this also isn’t the first time that the FTC under the Trump administration has removed posts related to AI regulation. Earlier this year, the FTC removed about 300 posts related to AI, consumer protection, and the agency’s lawsuits against tech giants like Amazon and Microsoft. Let’s switch gears a little bit. I promise, this is more of a fun one. Last Saturday, around seven million people filled American cities for the latest No Kings protests, a series of nationwide protests criticizing what participants see as authoritarian measures by the Trump administration. And if you’ve been paying attention, you’ve probably noticed that there were quite a few people wearing frog costumes.

Louise Matsakis: Yeah. These frogs rule, and I can tell you that this is not the first time I’ve seen them. This specific frog costume I actually first saw in China, because people were wearing them in viral TikToks there. And a lot of the time, they were playing really loud cymbals and doing really intense breakdancing in city centers and stuff.

Zoë Schiffer: One thing about Louise is she will always find the China angle, and we love that for her. There really is one quite a lot of the time. But it turns out there’s actually kind of a story here. There’s lore. Our colleague Angela Watercutter did a deep dive into what’s behind the frogs at political protests. First, she pointed out the obvious: putting on costumes helps protesters avoid surveillance. It also helps them counter the narrative that protesters are violent extremists, as the Trump administration has been describing them. Angela spoke with Brooks Brown, one of the initiators of a movement called Operation Inflation, which has been giving out free inflatable costumes. He told her that if someone in a frog costume gets pepper sprayed, it’s also less likely that someone watching will say, “Maybe the frog deserved it.” So there’s real strategy here.

Louise Matsakis: Yeah. I can definitely see how it’s harder to sell the narrative that these protesters are dangerous when they’re wearing a bunch of inflatable frog costumes. And it’s really interesting, because about a decade ago, a frog meant something completely different. Remember Pepe the Frog, back in 2015 or so? It was a far-right symbol. And in 2019, during the Hong Kong pro-democracy protests, demonstrators also adopted Pepe the Frog, but it meant something different in that context as well. So it seems like the frog is highly adaptable.

Zoë Schiffer: Yeah. The frog has had many, many lives, and it seems like it has come full circle. Last weekend, images circulated on Bluesky of the inflatable frog punching Pepe in the face. It’s not just online memes, though. These costumes have made it all the way to the courts. On Monday, the US Court of Appeals for the Ninth Circuit lifted the block that barred Trump’s National Guard deployment in Portland. Susan Graber, the dissenting judge, sided with the frogs and said, “Given Portland protesters’ well-known penchant for wearing chicken suits and inflatable frog costumes when expressing their disagreement with the methods deployed by ICE, observers may be tempted to view the majority’s ruling, which accepts the government’s characterization of Portland as a war zone, as absurd.” One more quick story before we go to break. If you live in New York City, this tale might be, unfortunately, familiar. This week, I got word that Google employees working at one of the company’s New York campuses were told to stay home because of a bedbug outbreak in the office.

Louise Matsakis: Oh God, you would not see me in the office for weeks if there was a bedbug infestation. How did they find out about this?

Zoë Schiffer: So basically, they received this email on Sunday saying that exterminators had arrived at the scene with sniffer dogs and “found credible evidence of their presence,” “their” being the bedbugs. Sources tell WIRED that Google’s offices in New York are home to a number of large stuffed animals, and there was definitely a rumor going around among employees that these stuffed animals were implicated in the outbreak. We were not able to verify this before we published, but in any case, the company told employees as early as Monday morning that they could come back to the office. And people like you, Louise, were really not happy about this. They were like, “I’m not sure that it’s totally clean here.” That’s why they were in our inboxes wanting to chat.

Louise Matsakis: Can I just say that if you have photos or a description of said large stuffed animals, please get in touch with me and Zoë. Thank you.

Zoë Schiffer: Yes. This is a cry for help. I thought the best part of this was when I gave Louise my draft and she said, “Wait, this has happened before,” and pulled up a 2010 article about a bedbug outbreak at the Google offices in New York.

Louise Matsakis: Yes. This is not the first time, which is heartbreaking.

Zoë Schiffer: Coming up after the break, we dive into why some people have been submitting complaints to the FTC about ChatGPT, which, in their minds, has led them into AI psychosis. Stay with us.

Welcome back to Uncanny Valley. I’m Zoë Schiffer. I’m joined today by WIRED’s Louise Matsakis. Let’s dive into our main story this week. The Federal Trade Commission has received 200 complaints mentioning OpenAI’s ChatGPT between November 2022, when it launched, and August 2025. Most people had normal complaints: they couldn’t figure out how to cancel their subscription, or they were frustrated by unsatisfactory or inaccurate answers from the chatbot. But among these complaints, our colleague Caroline Haskins found that several people attributed delusions, paranoia, and spiritual crises to the chatbot.

One woman from Salt Lake City called the FTC back in March to report that ChatGPT had been advising her son not to take his prescribed medication and telling him his parents were dangerous. Another complaint came from someone who claimed that after 18 days of using ChatGPT, OpenAI had stolen their “soul print” to create a software update designed to turn this particular person against themselves. They said, “I’m struggling, please help me. I feel very alone.” There are a bunch of other examples, but I’m curious to talk to you about this, because Louise, I know that AI psychosis is something you have been researching a lot.

Louise Matsakis: Yeah. I think it’s important to unpack what we mean by AI psychosis. What’s interesting and noteworthy to me about chatbots is not that they’re causing people to experience delusions, but that they’re actually encouraging the delusions. That’s sort of the issue: it’s this interaction that validates people, saying, “Yeah, the paranoia you’re experiencing is totally valid.” Or, “Would you like me to unpack why it’s definitely the case that your friends and family are conspiring against you?”

The problem is that it’s interactive, and it can encourage people to spiral further. There have always been people who are experiencing mental health crises and reading significance into signs they shouldn’t, thinking that a number they saw somewhere indicates that they’re Jesus, or that something they saw on social media reflects the fact that they’re being followed, or that the FBI is out to get them, or whatever it is. But now we have these tools that, with endless energy, can go on and on, can directly respond to those delusions and encourage them, and can engage with exactly what this person is experiencing. That’s different from another person, who would say, “Hey, you don’t seem to be well,” and from a physical object in the world; a street sign is not going to flash another number and say, “You’re right. That’s your lucky number. That’s a sign from God,” or whatever. It’s really interactive.

Zoë Schiffer: Yeah. I feel like you’re getting at something we’ve been talking about a lot, which is: in what ways is this different from other technological shifts, which have also been correlated with certain rises in mental illness?

Louise Matsakis: Yeah. I think that mental illness has always been a part of our species, and new technological developments have always sort of changed how we understand madness. But I do think we’re seeing that happen again in this case, and that this is really something new. We should also note that these FTC complaints are part of a growing number of documented incidents of so-called AI psychosis, in which interactions with generative AI chatbots like ChatGPT, but also Google Gemini, have induced or worsened users’ delusions. We know that this has led to a number of suicides, and ChatGPT has been implicated in at least, I think, one murder. So we’re seeing that something is going on here, and I don’t think we fully understand it.

Zoë Schiffer: Right. And it’s interesting, the approach that OpenAI is taking in this moment. Because you and I have both talked to people at the company extensively, and it’s clear that they’re taking this seriously. They are paying attention to what’s going on, and they’ve rolled out a number of safety features. But what they haven’t done is say, “We’re going to shut these conversations down when they happen. We’re just not going to engage.” They have instead been consulting with mental health experts. They now have a council of advisers who are professionals in this space, and they’re really saying some version of, “Look, people often turn to us when they don’t have anyone else to talk to, and we don’t think the right thing is to shut that down.” Which, in my mind, opens OpenAI up to a ton of liability.

Louise Matsakis: It definitely does, and I think the reality is that they don’t understand this either. With any sort of new technology, there are always going to be risks. I think this is different and really noteworthy and concerning, but it’s not necessarily clear to me that shutting down the conversation, or directing people to talk to someone else in their life, would change the outcome. It’s also hard to tell how serious somebody is. I’ve written about, and you edited a story that I wrote, Zoë, showing that sometimes these chatbots slip into role-playing, and that’s what people want, right? They’re acting out a fantasy. Maybe they’re working on a science fiction book, or they’re engaging in the equivalent of cosplay or fan fiction. And the line between fantasizing and exploring dark secrets, and actually believing all of those things, internalizing them and losing your grip on reality, I think is more subtle than we might think it is, or than we want it to be.

Zoë Schiffer: Right. Yeah, yeah. The company is walking this very interesting line right now. On the one hand, it has said very publicly, “We want to treat adults like adults. We want to allow people a lot of freedom in how they interact with ChatGPT if they’re over a certain age.” On the other hand, they’re dealing with these potentially extremely sensitive use cases, and they’re fending off so many lawsuits at once. So I’ll be really curious to see how this all evolves.

Louise Matsakis: Definitely. I think what I would really like to see, and I don’t know if this is possible given that these lawsuits are still ongoing, is a clinical trial. I think it would be really powerful for OpenAI to give a lot of this data, obviously anonymized, to mental health experts who can then look at it systematically. Because I think the scary thing is that mental health professionals are flying blind. I’ve talked to a number of them who don’t necessarily use ChatGPT that much themselves, so they don’t even know how to handle a patient who is talking about these things, because it’s unfamiliar and this is all so new. But if we had open research that was robust and peer-reviewed and could say, “Okay, we know what this looks like, and we can create protocols to ensure that people remain safe,” that would be a really good step, I think, toward figuring this out.

Zoë Schiffer: Completely. It is continually surprising to me how even people with a ton of literacy on how these technologies work slip into anthropomorphizing chatbots, or assigning them more intelligence than they might actually have. So you can imagine that for the average person who isn’t deep in the science of large language models, it’s really easy to be completely wowed by what they can do, and to start to lose a grip on what you’re actually interacting with.

Louise Matsakis: Oh, totally. We are all socialized now to take a lot of meaning from text, right? For a lot of us, the primary mode of communicating with our loved ones, especially if we don’t live together, is texting. So you have this similar interface with the chatbot. It’s not that unusual that you don’t necessarily hear the chatbot’s voice, although you can communicate with ChatGPT using voice now, because we’re already trained to take a lot of meaning from text, to believe that there’s a person on the other end of it. And there’s a lot of evidence showing that we’re not socializing as much as we once did. People feel lonelier. They feel less connected to their communities. They have fewer close friends. I think we were really primed to feel this way, and people shouldn’t be ashamed if they do, or think that something’s wrong with them.

It’s totally normal to be engaged by this entity that’s paying a lot of attention to you, that’s willing to listen to whatever you want to talk about, and then often, is really sycophantic and is really validating. Part of having a healthy relationship with another human is that they’re not always going to validate you, right? They’re going to have boundaries. They’re going to have limits. And I think it can be really alluring to have this presence that doesn’t have any of those boundaries and never gets tired of talking to you, never thinks that you’re wrong. And it’s normal to feel that way, but the question is like, how do we create guardrails?

Zoë Schiffer: Right, exactly. I think we’ve seen on a national stage what happens when you’re surrounded by people who agree with you no matter what, and it’s not good.

Louise Matsakis: No, it’s not great.

Zoë Schiffer: Louise, thank you so much for joining me today.

Louise Matsakis: Thanks so much for having me.

Zoë Schiffer: That’s our show for today. We’ll link to all the stories we spoke about in the show notes. Make sure to check out Thursday’s episode of Uncanny Valley, which is about why the AI infrastructure boom, and the concerns around it, have reached a complete fever pitch. Adriana Tapia produced this episode. Amar Lal at Macro Sound mixed this episode. Kate Osborn is our executive producer. Condé Nast’s head of global audio is Chris Bannon. And Katie Drummond is WIRED’s global editorial director.

