I saw Joe Weisenthal’s tweet the other day, the one where he basically says he’s tired of the “learn AI now or get left behind” preaching, because if it’s truly game-changing, there’s not much you can do anyway, and besides, there’s zero skill or learning curve involved. You can just pick it up whenever. It’s a vibe a lot of people are feeling right now: exhaustion with the hype, plus the honest observation that using these tools is stupidly easy.
He’s got a point, at least on the surface. Right now, in early 2026, the entry bar is basically on the floor. Type a sentence into ChatGPT, Claude, Gemini, or whatever, and you get useful output 80% of the time without any special training. No need to learn syntax, install anything, or understand the underlying models. It’s more like asking a really smart friend for help than “learning a skill.” And yeah, if AI ends up being as disruptive as some claim, the idea of proactively upskilling to stay ahead can feel futile, like trying to outrun a tsunami by jogging faster.
But I think the take is a little too fatalistic, and it undersells something important: using AI right now isn’t just about dodging obsolescence; it’s about amplifying what you already do, in ways that feel genuinely rewarding and productive.
I use these tools constantly, not because I’m afraid of being left behind, but because they make my days noticeably better and more creative. They help me brainstorm faster, refine ideas that would otherwise stay stuck in my head, summarize long reads so I can absorb more in less time, draft outlines when my brain is foggy, and even poke at philosophical rabbit holes (like whether pocket AI agents might flicker with some kind of momentary “aliveness”), all without getting bogged down in rote work. It’s not magic, but it’s a multiplier: small inputs yield bigger, cleaner outputs, and that compounds over time.
The fatalism skips over that personal upside. Sure, the tools are easy enough that anyone can jump in later. But the longer you play with them casually, the more you develop an intuitive sense of their strengths, blind spots, and weird emergent behaviors. You start chaining prompts naturally, spotting when an output is hallucinating or biased, and knowing when to push back or iterate. That intuition isn’t a “skill” in the traditional sense (no certification required), but it’s real muscle memory. It turns the tool from a novelty into an extension of how you think.
And if the future does involve more agentic, on-device, or networked AI (which feels increasingly plausible), that early comfort level gives you quiet optionality: customizing how the system nudges you, auditing its suggestions, or even resisting when the patterns it absorbs from everyone else start feeling off. Latecomers might inherit defaults shaped by early tinkerers (or corporations), while those who’ve been messing around get to steer their slice a bit more deliberately.
Joe’s shrug is understandable; AI evangelism can be annoying, and the “doom or mastery” binary is exhausting. But dismissing the whole thing as zero-curve / zero-agency misses the middle ground: using it because it’s fun and useful today, not because you’re racing against some apocalyptic deadline. For a lot of us, that’s reason enough to keep the conversation going, not wait until “later.”