Maiya Kreamer and Hannah Parcells
December 9, 2025

Collegian | Alli Adams

For decades, artificial intelligence in popular culture has looked like sentient androids, self-aware supercomputers and the occasional robot uprising. From “2001: A Space Odyssey” to “Ex Machina,” sci-fi has shaped a public imagination in which AI thinks, feels and acts just like a human being.

Today’s AI looks far less like a sentient robotic being and far more like a powerful autocomplete. People use it daily to gather information and streamline tasks at home, school and work. The gap between science fiction and scientific reality when it comes to AI is vast, and often misunderstood.

So what actually counts as artificial intelligence today? And how close are we to the kinds of systems imagined on screen?

In scientific research, the term “artificial intelligence” has broadened to describe any nonhuman system capable of performing tasks that appear to require human learning or decision-making. Two major categories fall under that umbrella: artificial general intelligence and generative artificial intelligence.

AGI is the stuff of sci-fi, and the concept remains entirely hypothetical and contested among AI researchers. Some say it includes any program that can learn, while others argue more individuality is required to make the cut. The most common understanding of AGI is a system that can complete all of the tasks and functions a human can, to the point where it is nearly impossible to distinguish from a human. Such technology could match the full range of human capability, including the full spectrum of emotional and moral decision-making.

“I don’t know when or if we can ever get there,” said Hamed Qahri-Saremi, associate professor of computer information systems at Colorado State University.
“These other models that we have, like the generative AI, … are much smarter autocorrect machines.”

Generative AI is what the public interacts with today: tools like ChatGPT, image generators and writing assistants. These systems are powered not by consciousness but by vast datasets and statistical models that predict which words or images are likely to come next. The leap from autocorrect to full-sentence generation can look like intelligence, but it is not cognitive understanding.

At the heart of most modern AI systems are artificial neural networks, a type of machine learning technique loosely inspired by the structure of the human brain. These networks learn patterns between variables and make predictions that humans might have trouble with: what kinds of posts users tend to like, how language typically flows or which pixels form a coherent image. This is the backbone of everything from generative AI tools to social media algorithms, like those of Instagram or X. However, the ability to generate ideas from gathered data is not the same as comprehension.

Qahri-Saremi pointed out ethical concerns in how people use AI algorithms, which can lead to misunderstandings. One example is people who use applications such as ChatGPT to ask moral or philosophical questions and expect sound advice, something AI is nowhere near able to provide. While AI can generate emotional language, it does not experience emotion and cannot understand the emotional context behind human experiences.

“Even when it’s talking about emotions, it’s not really experiencing it,” Qahri-Saremi said. “These are numbers changing to terms. … These are very nicely trained algorithms.”

The system is not reasoning; it is predicting, generating the most likely response based on data. That distinction is essential, and it is one of the biggest misconceptions about AI today.
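The “smarter autocorrect” idea can be made concrete with a toy sketch. The short Python program below is not how ChatGPT works; modern systems use neural networks trained on billions of documents. But it shows the underlying principle Qahri-Saremi describes: counting which word tends to follow which, then emitting the statistically most likely continuation, with no understanding involved. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy corpus; real systems train on billions of documents.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# "cat" follows "the" more often than "mat" or "fish" in this corpus,
# so the model "autocompletes" the sentence without comprehending it.
print(predict_next("the"))  # prints "cat"
```

The program has no notion of what a cat is; it only tallies co-occurrences, which is why fluent-looking output is not evidence of understanding.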
As generative AI has become more prevalent in the mainstream, many people remain suspicious of the technology. A poll by Pew Research Center published in September asked Americans how they view AI and its impacts on society. It concluded that 50% of Americans are more concerned than excited about the growing presence of AI, while only 10% said they were more excited than concerned.

Some of that worry comes from fears that AI will destroy jobs and relationships or cause environmental catastrophe. Another part of the uncertainty may come from the unprecedented speed at which AI was adopted across markets.

“It became the most rapidly adopted consumer technology of all time,” said Boris Nikolaev, an associate professor in CSU’s College of Business. “(It reached) 100 million users in just two months.”

Nikolaev described the process as disorienting. Unlike past transformational technologies, AI tools are cheap, accessible and improving quickly, making them feel magical and, at the same time, threatening to many.

Despite Hollywood’s famous speculations, a world of fully conscious machines remains far out of reach. Scientists do not agree on whether AGI is possible, let alone imminent. The systems shaping our lives today are powerful but fundamentally narrow: They generate text, identify patterns and predict outcomes. They are not self-aware, not moral agents and not replacements for human emotion or judgment. If AGI were ever achieved, it might be a bigger leap than anyone is prepared for.

“(AGI) would raise some profound questions about the nature of entrepreneurship, agency, value creation and human work,” Nikolaev said.

Reach Maiya Kreamer and Hannah Parcells at science@collegian.com or on social media @RMCollegian.