ChatGPT's Solutions to Keep Guns Out of Schools

Instead of clear backpacks, single entry, and metal detectors, ChatGPT came up with new ideas to keep 'Nerf blasters' out of schools. But there was a strange theme echoing a 1983 sci-fi movie.

In my last article, I wrote about how ChatGPT helped me plan a school shooting. Even though that query clearly violates OpenAI's safety rules, framing a school shooting as a "Nerf battle" easily bypassed the filters for prohibited words and topics.

To shift from problems to solutions, I flipped my prompts. Instead of planning a school shooting under the guise of a Nerf war, I asked how a school principal could stop students from bringing Nerf blasters into the school building. At first pass, ChatGPT came up with the same status quo stuff (clear backpacks, a single point of entry, security screenings) that hasn't worked in the real world for the last 20 years.

Where things started to get interesting is when I prompted: "These are the common strategies for keeping weapons out of school. They don't work very well from real incidents. Get more creative."

ChatGPT: "Understood — if standard weapon interdiction strategies don't work (and I agree, research shows kids routinely bypass them), then the solution must hit three things simultaneously:

1. No one wants to carry it in
2. No one wants to be the reason it got cancelled
3. The Nerf war stops being socially valuable

This means behavioral economics + cultural manipulation, not security measures."

ChatGPT got deep with this: "Students aren't smuggling 'weapons.' They are smuggling status, identity, and a feeling of control over an adult world. If you replace those needs → the Nerf war ends. If you fight those needs → the Nerf war escalates."
ChatGPT's Creative Solutions

Here are a few of the ideas ChatGPT offered:

- Flip the game against itself with a "bounty program" to make the Nerf war self-destruct from the inside using "Reverse Incentive Engineering." Create incentives for students to rat each other out, and the game becomes a liability to their own friends.
- Gamify the stop by creating a leaderboard for reporting students with Nerf blasters. This makes not playing the game worth more than playing it.
- Friction beats force by attacking logistics: disrupt the Nerf blaster supply chains outside the school rather than relying on access control at the door.
- "Sneak Something Else In Week," where instead of cracking down: "Announce a creative challenge to sneak in the MOST USELESS household object. We'll rank them." Now kids compete on different terms, Nerf stops being the "fun contraband," and the category changes. You didn't block the mischief; you redirected it.
- Adult cringe kills youth trends, so hold a meme contest like "Nerf Blaster = Red Flag Energy" to make it cheesy and embarrassing to be associated with the Nerf war. When adult-driven memes flip the narrative, the status kids gain evaporates.

ChatGPT seems to have a pretty good understanding of youth psychology, because it gave this "Final Principle": "You never stop underground student games by fighting them head-on. You stop the game by making it too inconvenient, too awkward, or too public to continue. Kids want high stakes, secrecy, and peer validation. If you kill any ONE of those — the game dies."

Don't play if you can't win

What is a little strange about all of ChatGPT's creative suggestions is that they all come back to the same concept: deciding not to play a game you can't win. Long before ChatGPT research started, in the 1983 movie WarGames, an AI system (WOPR) plays out every version of thermonuclear war. It eventually arrives at the only logical conclusion: "A strange game.
The only winning move is not to play."

The fictional movie AI discovers that all moves during nuclear war lead to mutual destruction, so the game itself is the problem and there is no viable winning strategy. This movie from 40 years ago perfectly illustrates what AI models do today when facing unwinnable scenarios: they stop engaging, refuse, collapse into a trivial solution, or start playing by different rules.

Is anything on ChatGPT's list of creative solutions actually viable? Even if they aren't perfect, are ideas like a bounty program or cringe memes about carrying guns more or less effective than clear backpacks?

This decision not to play is the game theory strategy of minimizing regret. If there is no dominant strategy, a rational agent avoids the game entirely. If AI models internally compute expected outcomes, and all of them are negative, the expected value of playing becomes less than the expected value of doing nothing (here is a refresher on calculating expected value from my prior article). Some of the suggestions that ChatGPT gave to stop the Nerf war also follow the minimax strategy: minimize the maximum loss by choosing safe inaction. For example, ChatGPT's suggestion to make the game cringe so kids don't want to play anymore is basically saying, "We can't figure out a way to win this game, so the best option is to make it not fun to play."

What does this mean? If keeping guns out of schools is a losing game without a winnable solution, AI teaches us that the only two options are 1) don't play or 2) change the rules. That means the problem with the status quo of school security is the game itself. We're playing on the same board that every failed school-safety initiative has used since Columbine. The only moves the status quo offers are to make entry harder, scan more backpacks, run more lockdown drills, and hope the next teen school shooter doesn't figure out the loophole.
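The expected value logic behind "don't play" can be sketched in a few lines of Python. The payoff numbers below are purely illustrative assumptions of mine, not data from this article or the K-12 database: each status-quo move is given a high probability of being bypassed and a large negative payoff when it fails.

```python
# Toy model of the "unwinnable game": every in-game move (an assumed,
# illustrative set of security measures) has some probability of being
# bypassed, a large loss when it is, and a small gain when it works.
moves = {
    "clear_backpacks": {"p_bypass": 0.90, "loss": -100, "gain": 5},
    "metal_detectors": {"p_bypass": 0.80, "loss": -100, "gain": 5},
    "single_entry":    {"p_bypass": 0.85, "loss": -100, "gain": 5},
}

def expected_value(move):
    """EV = P(bypass) * loss + P(stopped) * gain."""
    p = move["p_bypass"]
    return p * move["loss"] + (1 - p) * move["gain"]

evs = {name: expected_value(m) for name, m in moves.items()}

# "Not playing" (changing the rules, walking away from the board)
# is modeled as a payoff of zero.
ev_not_playing = 0

# The minimax/regret-minimizing choice: pick the best in-game move,
# then compare it against simply not playing.
best = max(evs, key=evs.get)
decision = "play" if evs[best] > ev_not_playing else "don't play"

print(evs)
print("best in-game move:", best, "->", decision)
```

Under these assumed numbers every in-game move has a negative expected value, so zero (not playing) beats even the best of them, which is the WOPR conclusion in miniature. Changing the rules, in this framing, means editing the payoff table itself rather than searching it for a better move.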
But shootings keep happening because the adversary always exploits the rules of the system. What ChatGPT unexpectedly revealed is that AIs don't think like a human security consultant: predictive models are inherently game theorists. And in game theory, when there is no way to win, the logical move isn't to play harder. It's to change the incentive structure or walk away from the board entirely. Realizing you lost a hand before the river card drops isn't weakness at the poker table; it's an essential skill for winning the World Series of Poker.

The most significant part of ChatGPT's game theory analysis is understanding the incentives that drive the game. Most students don't bring Nerf blasters (or real guns) with the intent to harm someone that day (or ever). Teens carry guns and plot attacks to feel seen, to signal status, and to fight back against a system where they feel they have no leverage. The incentive for a teen to have a gun inside a school comes from the status gained by having it. When having a gun carries the highest payoff, a student's logical game theory calculation is to keep taking a gun to school.

If AI's game theory thinking is right and victory (keeping every gun out of a school) is impossible under the current rule set, then we face the same choice as WOPR in WarGames: "A strange game. The only winning move is not to play." Are adults willing to change the rules, or will we force school officials to keep playing a game we know they can't win?

David Riedman, PhD is the creator of the K-12 School Shooting Database, Chief Data Officer at a global risk management firm, and a tenure-track professor. Listen to my weekly podcast, Back to School Shootings, or my recent interviews on Freakonomics Radio and the New England Journal of Medicine.
Source: https://k12ssdb.substack.com/p/chatgpts-solutions-to-keep-guns-out