Science fiction sold us “good” AI – now it’s shaping how we treat real AI | TechRadar

That bias matters. Not just because some people might fall in love with chatbots – though that happens and is cause for concern – but more simply because everyday users may end up trusting AI’s answers more than they should. “It risks people putting too much trust in what they spit out without independently verifying the information,” Lam warns.

And the cycle reinforces itself. Many AI personas are modelled on fictional tropes, sometimes even trained on them. “People feel like they are encountering fully sentient non-human Others, the kinds of artificial beings that we’ve discussed in science fiction for a very long time,” Singler says. “They are also coming into contact with personas that have been designed in the light of those existing AI stories, even trained on them. It’s not surprising that they seem like the fulfilment of those expectations.”

Should sci-fi creators take more responsibility?

I wanted to know how storytellers themselves think about this. “It’s been odd seeing how things I thought were maybe too much of a stretch in my near-future dystopia Goldilocks, which I wrote in 2019, are starting to come true,” says author L. R. Lam.

She tells me she’s become more cautious about how she writes AI – and more aware of the stereotypes. “In one of my works in progress, I am being very mindful about the AI tropes we’ve seen in fiction,” she explains. “There’s an interesting layer of gender roles: how male vs female AIs are coded in literature, or how AI is treated as something between or beyond gender. We sometimes see AI or robots as a temptress (Ex Machina, Her), and other times as an omniscient god.”

Lam also points out that the way AI is created in fiction can give us the wrong impression about power. “We often show one inventor of an AI, when in reality science is so much more collaborative than that,” she says.
But while it’s important for creators to reflect on these tropes, Singler is wary of putting too much responsibility on fiction itself. “Science fiction is not to blame for this, not any more than our existing spiritual ideas are to blame for people also finding god through Large Language Models,” she explains. “It is all part of our dual nature as both storytellers and the ones being told the stories.”

Stories, in other words, are part of being human. They guide us, inspire us, and sometimes mislead us. But the responsibility for how we respond lies with us.

The bigger risk may be that today’s tech builders aren’t reflecting on the stories they’ve grown up with, or treating sci-fi as the cautionary material it often is. Too often, they chase the shiny promise of new technology while ignoring its deeper warnings about inequality and power. In doing so, we risk sleepwalking into the very dystopias those stories tried to warn us about.

The choice ahead

Fiction, then, isn’t prophecy. It’s a mirror, reflecting our hopes and fears through artificial beings. Whether that makes us reckless or wiser depends on how much we’re willing to question the stories we’ve been told and act accordingly.

In the end, we get to decide. “We are sold two versions: AI will save us, AI will damn us,” L. R. Lam says. “But in reality, it doesn’t need to do either unless we let it. And I think it’s more interesting if we save ourselves, don’t you?”

Source: https://www.techradar.com/ai-platforms-assistants/science-fiction-sold-us-good-ai-now-its-shaping-how-we-treat-real-ai