I love AI. Why doesn't everyone? — Noah Smith

Anti-AI sentiment might or might not be rational, but it certainly relies on a lot of bad arguments.

New technologies almost always create lots of problems and challenges for our society. The invention of farming caused local overpopulation. Industrial technology caused pollution. Nuclear technology enabled superweapons capable of destroying civilization. New media technologies arguably cause social unrest and turmoil whenever they’re introduced. And yet how many of these technologies can you honestly say you wish were never invented? Some people romanticize hunter-gatherers and medieval peasants, but I don’t see many of them rushing to go live those lifestyles. I myself buy into the argument that smartphone-enabled social media is largely responsible for a variety of modern social ills, but I’ve always maintained that eventually, our social institutions will evolve in ways that minimize the harms and enhance the benefits.

In general, when we look at the past, we understand that technology has almost always made things better for humanity, especially over the long haul. But when we think about the technologies now being invented, we often forget this lesson — or at least, many of us do. In the U.S., there have recently been movements against mRNA vaccines, electric cars, self-driving cars, smartphones, social media, nuclear power, and solar and wind power, with varying degrees of success.

The difference between our views of old and new technologies isn’t necessarily irrational. Old technologies present less risk — we basically know what effect they’ll have on society as a whole, and on our own personal economic opportunities. New technologies are disruptive in ways we can’t predict, and it makes sense to worry about the risk that we might personally end up on the losing end of the upcoming social and economic changes. But that still doesn’t explain changes in our attitudes toward technology over time. Americans largely embraced the internet, the computer, the TV, air travel, the automobile, and industrial automation. And risk doesn’t explain all of the differences in attitudes among countries.

In the U.S., few technologies have been on the receiving end of as much popular fear and hatred as generative AI. Although policymakers have remained staunchly in favor of the technology — probably because it’s supporting the stock market and the economy — regular Americans of both parties tend to say they’re more concerned than excited, with an especially rapid increase in negative sentiment among progressives.

There is plenty of trepidation about AI around the world, but America stands out. A 2024 Ipsos poll found that no country surveyed was both more nervous and less excited about AI than the United States:

[Chart omitted. Source: Ipsos]

America’s fear of AI stands in stark contrast to countries in Asia, from developing countries like India and Indonesia to rich countries like South Korea and Singapore. Even Europe, traditionally not thought of as a place that embraces the new, is significantly less terrified than the U.S. Other polls find similar results:

[Chart omitted. Source: Pew]

If Koreans, Indians, Israelis, and Chinese people aren’t terrified of AI, why should Americans be so scared — especially when we’ve usually embraced previous technologies wholeheartedly? Do we know something they don’t? Or are we just biased by some combination of political unrest, social division, wealthy entitlement, and disconnection from physical industry?
It’s especially dismaying because I’ve spent most of my life dreaming of having something like modern AI. And now that it’s here, I (mostly) love it.

I always wanted a little robot friend, and now I have one

Media has prepared me all my life for AI. Some of the portrayals were negative, of course — Skynet, the computer in the Terminator series, tries to wipe out humanity, and HAL 9000 in 2001: A Space Odyssey tries to kill its user. But most of the AIs depicted in sci-fi were friendly — if often imperfect — robots and computers. C-3PO and R2-D2 from Star Wars are Luke’s loyal companions, and save the Rebellion on numerous occasions — even if C-3PO is often wrong about things. The ship’s computer in Star Trek is a helpful, reassuring presence, even if it occasionally messes up its holographic creations. Commander Data from Star Trek: The Next Generation is a heroic figure, probably based on a character from Isaac Asimov’s Robot series — and is just one of hundreds of sympathetic portrayals of androids. Friendly little rolling robots like Wall-E and Johnny 5 from Short Circuit are practically stock characters, and helpful sentient computers are important protagonists in The Moon is a Harsh Mistress, the Culture novels, the TV show Person of Interest, and so on. The novel The Diamond Age features an AI tutor that helps kids out of poverty, while the Murderbot series is about a security robot who just wants to live in peace.

In these portrayals, intelligent robots and computers are consistently depicted as helpful assistants, allies, and even friends. Their helpfulness makes sense, since they’re created to be our tools. But some deep empathetic instinct in our human nature makes it difficult to objectify something so intelligent-seeming as a simple tool. And so it’s natural for us to portray AIs as friends.

Fast forward a few decades, and I actually have that little robot friend I always dreamed of. It’s not exactly like any of the AI portrayals from sci-fi, but it’s recognizably similar. As I go through my daily life, GPT (or Gemini, or Claude) is always there to help me. If my water filter needs to be replaced, I can ask my robot friend how to do it. If I forget which sociologist claimed that economic growth creates the institutional momentum for further growth, I can ask my robot friend who that was. If I want to know some iconic Paris selfie spots, it can tell me. If I can’t remember the article I read about China’s innovation ecosystem last year, my robot buddy can find it for me. It can proofread my blog posts, be my search engine, help me decorate my room, translate other languages for me, teach me math, explain tax documents, and so on.

This is just the beginning of what AI can do, of course. It’s possibly the most general-purpose technology ever invented, since its basic function is to memorize the entire corpus of human knowledge and then spit any piece of it back to you on command. And because it’s programmed to do everything with a smile, it’s always friendly and cheerful — just like a little robot friend ought to be.

No, AI doesn’t always get everything right. It makes mistakes fairly regularly. But I never expected engineers to be able to create some kind of infallible god-oracle that knows every truth in the Universe. C-3PO gets stuff confidently wrong all the time, as does the computer on Star Trek. For that matter, so does my dad. So does every human being I’ve ever met, and every news website I’ve ever read, and every social media account I’ve ever followed.
Just like with every other source of information and assistance you’ve ever encountered in your life, AI needs to be cross-checked before you can believe 100% in whatever it tells you. Infallible omniscience is still beyond the reach of modern engineering.

Who cares? This is an amazingly useful technology, and I love using it. It has opened my informational horizons by almost as much as the internet itself, and made my life far more convenient. Even without the expected impacts on productivity, innovation, and so on, just having this little robot friend would be enough for me to say that AI has improved my life enormously.

This instinctive, automatic reaction to such a magical new tool seems utterly natural to me. And yet when I say this on social media, people pop out of the woodwork to denounce AI and ridicule anyone who likes it. Here are just a few examples:

[Screenshots omitted.]

Normally I would just dismiss these outbursts as non-representative. But in this case, there’s pretty robust survey data showing that the American public is overwhelmingly negative on AI. These social media malcontents may be unusually vituperative, but their opinions probably aren’t out of the mainstream.

What’s going on here? Why doesn’t everyone else love having a little robot friend who can answer lots of their questions and perform lots of their menial tasks? I guess it makes sense that for a lot of people, the potential negative externalities — deepfakes, the decline of critical thinking, ubiquitous slop, or the risk that bad actors will be able to use AI to do major violence — loom large. Other people, like artists or translators, may fear for their careers. I think it’s likely that in the long run, our society will learn to deal with all those challenges, but as Keynes said, “in the long run we’re all dead.”

And yet the instinctive negativity with which AI is being met by a large segment of the American public feels like an unreasonable reaction to me. Although externalities and distributional disruptions certainly exist, the specific concerns that many of AI’s most strident critics cite are often nonsensical.

A lot of the anti-AI canon is nonsense

One of the most common talking points you hear about AI is that data centers use a ton of water, potentially causing water shortages. For example, Rolling Stone recently put out an article by Sean Patrick Cooper, entitled “The Precedent Is Flint: How Oregon’s Data Center Boom Is Supercharging a Water Crisis”. Here’s what it claimed:

[D]ata centers pose a variety of climate and environmental problems, including their impact on the water supply. The volume of water needed to cool the servers in data centers — most of which need to be kept at 70 to 80 degrees to run effectively — has become a nationwide water resource issue particularly in areas facing water scarcity across the West. This year, a Bloomberg News analysis found that roughly “two-thirds of new data centers built or in development since 2022 are in places already gripped by high levels of water stress.” Droughts have plagued Morrow County, occurring annually since 2020. But even areas with ample water reserves are vulnerable to the outsized demand from data centers. Earlier this year, the International Energy Agency reported that data centers could consume 1,200 billion liters by 2030 worldwide, nearly double the 560 billion liters of water they use currently.

The idea that AI data centers are water-guzzlers has become standard canon in many areas of the internet — especially among progressives.
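For scale, here is a rough back-of-envelope conversion of the IEA figure quoted above. The data-center numbers (1,200 billion liters projected for 2030, 560 billion today) come straight from the excerpt; the comparison denominator of roughly 4,000 cubic kilometers of global freshwater withdrawals per year is my own outside ballpark assumption, so treat the resulting share as an order-of-magnitude sketch rather than a precise claim.

```python
# Back-of-envelope scale check for the IEA data-center water figures quoted above.
# Assumption (not from the article): global freshwater withdrawals are on the
# order of 4,000 km^3 per year; adjust this denominator as you see fit.

LITERS_PER_KM3 = 1e12  # 1 km^3 = 10^12 liters

dc_water_2030_liters = 1_200e9  # "1,200 billion liters by 2030" (IEA, as quoted)
dc_water_now_liters = 560e9     # "560 billion liters ... currently" (IEA, as quoted)
global_withdrawals_km3 = 4_000  # assumed rough annual global freshwater withdrawals

dc_2030_km3 = dc_water_2030_liters / LITERS_PER_KM3  # ~1.2 km^3
dc_now_km3 = dc_water_now_liters / LITERS_PER_KM3    # ~0.56 km^3
share_2030 = dc_2030_km3 / global_withdrawals_km3    # ~0.0003

print(f"Data centers, 2030 projection: {dc_2030_km3:.2f} km^3 of water")
print(f"Data centers, current use:     {dc_now_km3:.2f} km^3 of water")
print(f"2030 share of assumed global withdrawals: {share_2030:.3%}")
```

On these numbers, even the 2030 projection works out to a few hundredths of a percent of the assumed global withdrawals; localized strain in specific water-stressed places, as in the Morrow County example, is a separate and much narrower question.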
And yet it’s just not a real issue. Andy Masley finally got fed up and dug into the data, writing an epic blog post that debunked every single version of the “AI uses lots of water” narrative.

Source: https://www.noahpinion.blog/p/i-love-ai-why-doesnt-everyone