Tag: AI

  • Movies, Machines, and the Myth of the Button

    This weekend I rewatched WarGames (1983), and then followed it up with Colossus: The Forbin Project (1970).

    Both movies sit in the same uneasy space: the fear that computers might one day take control of nuclear weapons, lock humans out of the process, and decide the fate of the world with cold logic.

    It’s a powerful idea.
    It’s also not how the real world works — and never really has.

    The Movies Get the Premise Wrong, but the Fear Right

    In these films, the danger comes from automation itself.
    The computer becomes agentic. Autonomous. Untouchable.

    In reality, nuclear command-and-control has always been deliberately human-heavy. Painfully so.

    Yes, there is automation — and more of it now than there used to be. Targeting, analysis, routing, correlation of data: those things have increasingly been handed to machines. I’m not sure I love that, to be honest.

    But validation, verification, and execution? Those still sit with people.

    Not one person.
    Many people.

    Processes exist specifically to prevent blind action.

    Movies often get this part closer than people think:

    • Turning keys
    • Pulling triggers
    • Pressing buttons
    • Reading messages back
    • Confirming again

    That is real. Because real systems rely on process, not heroics.

    Submarines, Crews, and the Human Layer

    Submarines are a good example of how misleading the “single button” myth is.

    Nothing important happens because one person decides something on a whim. It happens because a crew agrees that a process has been followed correctly.

    A message arrives.
    It’s evaluated.
    It’s questioned.
    It’s verified.

    And yes — it can be challenged.

    People imagine submarines as disconnected from reality, sealed off from the world. That’s not quite true. Crews receive information constantly — news, summaries, updates — but only what they’re fed.

    That matters.

    If the information stream says the world is unraveling, conflict is escalating, and everything aligns with an order that arrives? It may feel logical not to question it.

    But if nothing suggests global chaos — if the world seems stable — that same order might trigger doubt.

    The decision doesn’t happen in a vacuum.
    Context matters. Humans matter.

    Where AI Actually Is Different — and Why That’s Uncomfortable

    What worries people today isn’t that AI presses a button.

    It’s the idea that systems might begin deciding outcomes without requiring human involvement — not the execution of an order, but the judgment behind it.

    That’s a subtler fear. And a more realistic one.

    We’re constantly told, “That will never happen.”
    But we’re also told that viruses will never escape labs… until they do.

    Tell me AI plus malicious code isn’t a possibility.
    Tell me agentic systems won’t be attempted by someone who wants to use them for harm instead of good.

    They will be.

    That doesn’t mean AI is evil. It means humans are consistent.

    AI as a Tool, Not a Replacement

    Here’s the part that gets lost in the noise.

    AI is not “just Google.”

    Google gives you the answer someone has optimized for you to see.
    AI lets you interrogate information.

    You can:

    • Ask follow-up questions
    • Explore edge cases
    • Challenge assumptions
    • Learn faster and deeper

    That’s a big deal.

    I’ve seen this firsthand. In Six Sigma work, for example, AI integrated with tools like Excel can now run analyses that once required specialized plugins and deep statistical knowledge — and then explain the results clearly to people who never really understood what the charts meant in the first place.

    That’s not dumbing things down.
    That’s lifting people up.
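
    As one illustration of the kind of analysis meant above — the sort of statistic Six Sigma plugins traditionally computed — here’s a minimal sketch of a process-capability (Cp/Cpk) calculation. The measurements and spec limits are made up for the example:

    ```python
    import statistics

    def process_capability(samples, lsl, usl):
        """Estimate Cp and Cpk for a process given lower/upper spec limits.

        Cp  = (USL - LSL) / (6 * sigma)                  # potential capability
        Cpk = min(USL - mean, mean - LSL) / (3 * sigma)  # actual capability
        """
        mean = statistics.mean(samples)
        sigma = statistics.stdev(samples)  # sample standard deviation
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mean, mean - lsl) / (3 * sigma)
        return cp, cpk

    # Hypothetical measurements of a part dimension (mm), spec 9.8–10.2 mm.
    data = [10.01, 9.98, 10.03, 9.97, 10.00, 10.02, 9.99, 10.01]
    cp, cpk = process_capability(data, lsl=9.8, usl=10.2)
    ```

    Ten lines of arithmetic — but the real value is when the tool can also explain what a Cpk of 1.0 versus 2.0 means for your scrap rate.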

    Yes, There’s a Lot of Crap Right Now

    Let’s be honest: a lot of what’s flooding the internet is garbage.

    Twenty versions of the same AI art style.
    Endless cloned aesthetics.
    “Make your profile using this exact look.”

    It’s lazy. And exhausting.

    But that’s a phase — not the destination.

    The same thing happened with websites. With social media. With digital photography.

    Eventually, the novelty fades. What’s left are people who understand the tools and people who don’t.

    And the people who understand them will move faster, think deeper, and build better things.

    The SailorJ Take

    Don’t be afraid of AI.

    Be afraid of not understanding it.

    Used well, it’s not a shortcut — it’s an amplifier.
    It doesn’t replace learning. It accelerates it.

    And unlike the movies, the real danger isn’t a machine deciding our fate.

    It’s humans refusing to stay in the loop.

    That’s where responsibility has always lived.
    That’s where it still belongs.

  • Welcome to the AltaVista Era of AI


    Or: Why Your Chatbot Still Sucks, and That’s Okay

    By Sailor J | SailorJ.com


    The Year Is 1998 (But Make It AI)

    If you’re feeling a weird sense of déjà vu using AI tools lately, it’s because we’ve been here before. Not literally—but metaphorically, spiritually, and, frankly, technologically.

    Remember the early internet search days? Yahoo directories? AltaVista? Lycos? Ask Jeeves in his little butler suit?

    Back then, it was all about crawling everything, indexing everything, and hoping like hell you’d get a relevant result when you typed in “how to unclog toilet using cat litter.”

    That’s where we are now—with AI.


    AI Is Living Its AltaVista Phase

    Everyone and their cousin is making a chatbot.

    • OpenAI’s ChatGPT
    • Google’s Gemini
    • Meta’s LLaMA
    • Elon’s Grok
    • Claude, Perplexity, and some weirdo thing that only runs on a Raspberry Pi at Burning Man

    They’re all big, bloated, and weirdly confident in being wrong.

    Like AltaVista in 1999, they’re impressive at first… until you actually use them for something important and end up in a hallucination rabbit hole quoting fake philosophers and citing articles that never existed.

    These models don’t know what’s real. They just know what sounds real.


    We Haven’t Had the “Google Moment” Yet

    Google didn’t win the search wars because it indexed more stuff. It won because it figured out what mattered.

    Relevance. Authority. Signal over noise.

    The same thing needs to happen with AI. Right now, it’s all noise.

    We don’t need more words—we need better judgment.

    Most current AIs are like overconfident interns with amnesia and no idea what plagiarism is.
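
    For the curious, the “figured out what mattered” step can be sketched. Google’s original authority signal was PageRank; here’s a toy power-iteration version over a hypothetical four-page link graph (not Google’s actual production code, obviously):

    ```python
    def pagerank(links, damping=0.85, iters=50):
        """Toy PageRank: each page splits its rank among its outlinks,
        iterated until the scores settle. `links` maps page -> outlinks."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iters):
            # Everyone gets a small "teleport" share, then link shares.
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            rank = new
        return rank

    # Hypothetical mini-web: everyone links to "home", so it ranks highest.
    graph = {
        "home": ["about"],
        "about": ["home"],
        "blog": ["home", "about"],
        "contact": ["home"],
    }
    ranks = pagerank(graph)
    ```

    The point isn’t the math — it’s that rank comes from who links to you, a judgment signal, not from how many words you indexed. That’s the shift AI is still waiting for.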


    The Dangerous Parallel

    Remember when search went from curated to algorithmic? We stopped seeing the best content—we started seeing what the algorithm decided was best. It’s happening again.

    AI is quietly shifting from being your helpful assistant to being your informational gatekeeper.

    It’s not just answering your questions—it’s deciding what answers are available.

    That should terrify you at least a little.


    So What Now?

    We’re still in the phase where everyone’s building AI like it’s a demo at a tech fair. Bigger, flashier, faster. Nobody’s nailed the holy trinity:

    1. Trustworthy answers
    2. Contextual awareness
    3. Creative thinking instead of remixing Wikipedia and fanfiction

    We’re crawling toward the future, but we haven’t stood up yet. The AI revolution is coming—but what you’re seeing now? This is just the MySpace version.

    And just like Ask Jeeves, most of these tools are going to be ghosts in GitHub repos five years from now.


    The Sniff Test

    We don’t post techno-babble without receipts. Here’s some supporting reading: