Jeff Hewitt is a science fiction writer in Los Angeles.

“We’re cooked.” When it comes to artificial intelligence — and the changes it will soon unleash — the sentiment is everywhere. Veo, Google’s generative video tool? According to those using it: We’re cooked. ChatGPT Agents? Comments say: We’re cooked. Grok Companions? Oh, we’re definitely cooked.

These services — capable of creating realistic videos, completing tasks online, and flirting with their users — all promise to automate what were once human occupations. Soon, AI-generated media will be indistinguishable from reality. “Agentic” bots will draft documents formerly reserved for white-collar desk jockeys. And lonely people will rely on robot avatars for artificial friendship.

As little as 10 years ago, these developments seemed like science fiction. Now the future seems to be sprinting toward us, threatening to undermine not only the way we live but how we value ourselves. And yet, we’ve been here before.

Fifty-five years ago this summer, Alvin Toffler’s “Future Shock” was released. In the book, the self-trained social scientist argued that rapid technological change was pushing our culture in strange new directions. Jobs would be eliminated. Societies would be reorganized. Identities would be tested and reshaped. Although he couldn’t foresee modern AI, Toffler’s work remains prophetic.

But “Future Shock” is more than just prescient. It’s also an instruction manual for managing change. Toffler realized that scientists and engineers could rarely predict the consequences of their own inventions. If humanity was to adapt to accelerating change, it would need artists and scholars to ask “why” as technologists worked out the “how.” Together, they could provide bold visions of what was to come. Unfortunately, a half-century later, such a collective effort to design the future of our dreams still seems like science fiction.
In 1965, Toffler wrote an essay titled “The Future as a Way of Life,” in which he claimed that the rate of technological progress was leading to a kind of social illness. He dubbed this illness “future shock” — a reference to the anxiety and perceived helplessness of the “culture shock” that tourists sometimes experience overseas, but brought on by “the premature arrival of the future.” Unlike the wandering tourist, however, the sufferer of future shock could never go home. Like it or not, the future was coming — and faster than people realized.

To illustrate how, Toffler divided the previous 50,000 years of human history into 62-year lifetimes, placing himself in humanity’s “800th lifetime.” We’re now on the cusp of the 801st. It’s a simple heuristic, but it proves eye-opening. More than 700 lifetimes passed before humanity could write. Automobiles appeared only in the last three. And, as when Toffler was writing, many of the material goods we rely on were invented within our current lifetime. Personal computers, wireless networks, disposable electronics — these simply didn’t exist a short lifetime ago. Humanity, in other words, had reached a stage of exponential growth. What seemed state-of-the-art yesterday would be obsolete by tomorrow.

Today, exponential growth is the animating principle of Silicon Valley. Many in the tech scene cite it as they unironically discuss downloading human consciousness, or achieving immortality, or the motives of god-like “superintelligences.” Usually, these are wildly optimistic visions of the future, where founders’ businesses create an explosion of wealth and well-being.

But for Toffler, exponential growth had a dark side. When new technologies disrupt old ways, the future becomes uncertain. People must constantly adapt, even as each adaptation carries the promise of more uncertainty. And as these changes accelerate, people must process more and more information to make effective decisions.
But where would they get this information? And how would they deal with its increasing ambiguity? Toffler wasn’t sure they could, without help. He coined the term “information overload” to describe the “choking sense of complexity” that came with the pressure to adapt. Pushed to their adaptive limits, people experiencing future shock would display “erratic swings” in mood and interest. Overwhelmed, many would turn to “programmed decisions” that were “routine, repetitive, and easy to make.”

Toffler thought these programmed decisions might even extend to a person’s sense of self. Identity and nostalgia would become coping mechanisms for those shocked by accelerating change. By aligning themselves with rigid cultural narratives, people could repeat simple explanations rather than ask themselves difficult questions.

Over time, Toffler warned, this desire for simplicity might prove all-consuming. Scientific disillusionment would spread as people turned on their intellectual elites. Faced with a world they didn’t understand, citizens would ignore expert opinions, preferring answers that confirmed how they already felt. If this sounds familiar, it should. There are stretches where “Future Shock” reads as if it’s addressing our current era rather than the late 1960s.

But it’s Toffler’s concern about automation that most echoes modern anxieties. Toffler recognized that automation would demand “drastic changes in the types of skills” required in life. He even speculated that people might need “new technological aids” to manage these changes. Today, we’re surrounded by such aids. Phones, watches, and home speakers promise to make us happier, healthier, and more productive. In fact, many developments since the turn of the century have sought to accommodate accelerating change: Computerized recordkeeping emerged to handle vast, disparate workforces. New communications pipelines were laid to deal with global supply chains.
Social media helped isolated people form connections online.

Yet many of us still feel future-shocked. As AI automates tasks that once felt like the pinnacle of humanity — the visual arts, music, and even scientific discovery — some are left wondering where they’ll fit in. With AI-fueled layoffs already making headlines, each new model seems to usher in a more uncertain world. And these models may worsen another symptom of future shock: the “blurring of the line between illusion and reality.”

Not long ago, Toffler’s ideas fell out of fashion. The world, it seemed, had adapted just fine. In 2010, on the 40th anniversary of “Future Shock,” futurist Stuart Candy told NPR that accelerating change didn’t seem to be “driving people crazy.” A piece in Forbes even claimed that the internet had made us happier and more connected. Personal filters could manage our information diets. If people felt overloaded, most were “still wise enough to use the power of the ‘off’ button to gain some peace.”

In hindsight, it’s clear that no one was turning off. Online communities became echo chambers, and those personal filters contributed to the very “distortions of reality” that Toffler feared. He couldn’t predict the details, but his assessment was still correct: Technology would “widen the gap between what we believe and what really is.”

Today, disinformation is rife on social media. Confirmation bias has become a daily affirmation, our feeds dominated by the kind of talking heads that Toffler termed “vicarious people.” And as online spaces slip into a slopscape of AI content, these shortcomings will only become more apparent. Virtual influencers already attract millions of followers. Bot farms amplify hot-button issues, stoking controversy and engagement. Deepfakes and cheaply produced AI videos may eventually erode our trust in imagery itself.
Once we’re aware of how technology — and AI in particular — can alter our perceptions of reality, how can we keep ourselves anchored? How can we imagine a better future when the advancements that might bring it come with so many downsides?

Alvin Toffler wasn’t a Luddite. He knew change couldn’t be stopped. Technological progress had brought immense prosperity, even as it pushed our limits. But society would need tools to manage its psychological effects. To start, people should be braced with “common skills needed for human communication and social integration.” Personalized support networks — spanning families, professional organizations, and vocational schools — could guide people through disruption. Learning would need to become a lifelong pastime. But rather than emphasizing technical knowledge, education should prioritize insight and imagination: the kind of skills that would remain useful even as science advanced.

Unsurprisingly, Toffler didn’t see the sciences as a way to relieve future shock. Instead, he called for greater attention to “philosophy and logic,” and advocated for a national program of “social futurism.” Filmmakers, novelists, and musicians would work side by side with engineers, chemists, and physicists to illustrate potential futures for the public. Ultimately, people would need “sweeping, visionary ideas” about society itself. And who was qualified to provide those visions? For Toffler, there was an obvious (if unorthodox) answer: science fiction writers.

Toffler was skeptical of his own predictive power. He wrote that “no serious futurist deals in ‘predictions.’” For him, that was the remit of “television oracles and newspaper astrologers.” It might surprise you to learn that the luminaries of science fiction have also scoffed at their ability to predict the future.
William Gibson has said that science fiction’s forecasts are “almost always wrong.” Ray Bradbury was less interested in guessing at the future than in “preventing” it. And Ursula K. Le Guin even said that sci-fi “wasn’t about the future.” Instead, it was a “thought experiment.”

It’s often said that good science fiction holds a lens up to the present and magnifies it. But, despite Le Guin’s claim, the best science fiction is also about the future. It’s not just about where we’re going, but about where we could go if we had the means and the will. And this latter category serves as a guide for the next generation — not so much as prediction but as inspiration.

Pick a random tech founder and you’ll find that they’ve referenced science fiction at some point. Elon Musk names landing pads after ships from Iain M. Banks’s “Culture” series. Mark Zuckerberg’s “metaverse” was pulled straight from Neal Stephenson’s “Snow Crash.” Sam Altman is reportedly “obsessed” with the movie “Her,” directed by Spike Jonze.

But these entrepreneurs gloss over their sci-fi mythologies. They’re interested in the technologies in these stories rather than the humans (or robots, or aliens) who face the resulting challenges. Societal consequences take a back seat when growth, innovation, and the next funding round are at the wheel.

It’s one of life’s great ironies that science fiction authors have inspired so many tech executives, because there’s a fundamental difference in their intentions. Whereas founders aspire to power and wealth, authors aspire to truth, holding out hope that some wealth might come along with it.

In the years Alvin Toffler spent writing “Future Shock,” Le Guin penned “The Lathe of Heaven.” Frank Herbert released “Dune.” Philip K. Dick asked, “Do Androids Dream of Electric Sheep?” And Kurt Vonnegut published “Slaughterhouse-Five.” All of these novels explore our perceptions of time, reality, and identity, and how the three intersect.
And while “Future Shock” is largely forgotten, these stories remain relevant. If Toffler were alive today, he would probably be fine with being a footnote next to these fictional works. To cope with future shock, he knew that we would need to “meet invention with invention.” Who better to do that than those who invent galaxies for a living?

The best science fiction books immerse their readers in both the mundane and the sublime, enlisting us as collaborators in imagining how our societies may change, for better and for worse. As the authors guide us through their dreamed-up futures, they prepare us to accept what once seemed impossible — even when nostalgia feels more reassuring.

Still, we’re a far cry from the creative “utopia factories” that Toffler imagined. But some writers are picking up the slack. Authors like Kim Stanley Robinson offer hopeful — if difficult — visions of the future. His characters struggle against immense environmental and technological problems, and succeed only by marshaling both in the service of humanity. In fact, his recent “The Ministry for the Future” is reminiscent of Toffler’s goals. Mary Murphy, the head of the novel’s titular ministry, leads an international effort to mitigate climate change, forcing the world to confront the long-term effects of its technologies.

We haven’t found our Mary Murphy. And we may never find real-life counterparts for the engineers, artists, and revolutionaries who fill the pages of our science fiction. But that shouldn’t stop us from finding inspiration in their creators’ flawed predictions. Toffler knew that a prediction didn’t need to be exact to be useful. When it comes to imagining the future, we should be daring.

Above all, Toffler wanted us to consider the consequences of our inventions before it was too late. And science fiction’s greatest goal, like his, is to save us from our own devices. Those of us in the sci-fi community should all hope we’re up to the task. Because if we’re not, we’re cooked.
Source: https://www.bostonglobe.com/2025/10/26/opinion/alvin-toffler-future-shock-ai/