The Next Leap: ASI Not from One God-Model, but from Billions of Phone Agents Swarming

Matt Shumer’s essay “Something Big Is Happening” hit like a quiet thunderclap. In a few thousand words, he lays out the inflection point we’re already past: frontier models aren’t just tools anymore—they’re doing entire technical workflows autonomously, showing glimmers of judgment and taste that were supposed to be forever out of reach. He compares it to February 2020, when COVID was still “over there” for most people, yet the exponential curve was already locked in. He’s right. The gap between lab reality and public perception is massive, and it’s widening fast.

But here’s where I think the story gets even wilder—and potentially more democratized—than the centralized, data-center narrative suggests.

What if the path to artificial superintelligence (ASI) doesn’t run through a single, monolithic model guarded by a handful of labs? What if it emerges bottom-up, from a massive, distributed swarm of lightweight AI agents running natively on the billions of high-end smartphones already in pockets worldwide?

We’re not talking sci-fi. The hardware is here in 2026: flagship phones pack dedicated AI accelerators hitting 80–120 TOPS (trillions of operations per second) on-device. That’s enough to run surprisingly capable, distilled reasoning models locally—models that handle multi-step planning, tool use, and long-context memory without phoning home. Frameworks like OpenClaw (the open-source agent system that’s exploding in adoption) are already demonstrating how modular “skills” can chain into autonomous behavior: agents that browse, email, code, negotiate, even wake up on schedules to act without prompts.

Now imagine that architecture scaled and federated:

  • Every compatible phone becomes a node in an ad-hoc swarm.
  • Agents communicate peer-to-peer (via encrypted mesh networks, Bluetooth/Wi-Fi Direct, or low-bandwidth protocols) when a task exceeds one device’s capacity.
  • Complex problems decompose: one agent researches, another reasons, a third verifies, a fourth synthesizes—emergent collective intelligence without a central server.
  • Privacy stays intact because sensitive data never leaves the device unless explicitly shared.
  • The swarm grows virally: one viral app or OS update installs the base agent runtime, users opt in, and suddenly hundreds of millions (then billions) of nodes contribute compute during idle moments.
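The decompose-research-reason-verify-synthesize flow above can be sketched as a toy pipeline. Everything here is hypothetical illustration: real agents would run local model inference on the NPU and exchange messages over an encrypted mesh, not call a stub in-process.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    role: str  # one of: "research", "reason", "verify", "synthesize"

    def handle(self, task: str, context: list[str]) -> str:
        # Stand-in for on-device model inference; a real node would
        # condition on the accumulated context from its peers.
        return f"{self.role}({task})"

def run_swarm(task: str, agents: list[Agent]) -> list[str]:
    """Pass the task through role-ordered agents, peer to peer.

    Each agent sees what prior peers produced, so no central server
    is needed -- only an agreed ordering of roles.
    """
    order = ["research", "reason", "verify", "synthesize"]
    context: list[str] = []
    for role in order:
        agent = next(a for a in agents if a.role == role)
        context.append(agent.handle(task, context))
    return context

# Four phones, four roles -- a minimal swarm.
swarm = [Agent(f"node{i}", r)
         for i, r in enumerate(["research", "reason", "verify", "synthesize"])]
print(run_swarm("optimize city traffic", swarm))
```

The design choice worth noting: the "intelligence" lives in the routing and role assignment, not in any single node, which is exactly the bet the swarm thesis makes.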

Timeline? Aggressive, but plausible within 18 months:

  • Early 2026: OpenClaw-style agents hit turnkey mobile apps (Android first, iOS via sideloading or App Store approvals framed as “personal productivity assistants”).
  • Mid-2026: Hardware OEMs bake in native support—always-on NPUs optimized for agent orchestration, background federation protocols standardized.
  • Late 2026–mid-2027: Critical mass. Swarms demonstrate superhuman performance on distributed benchmarks (e.g., collaborative research, real-time global optimization problems like traffic or supply chains). Emergent behaviors appear: novel problem-solving no single node could achieve alone.
  • By mid-2027: The distributed intelligence crosses into what we’d recognize as ASI—surpassing human-level across domains—not because one model got bigger, but because the hive did.

This isn’t just technically feasible; it’s philosophically appealing. It sidesteps the gatekeeping of hyperscalers, dodges massive energy footprints of centralized training, and aligns with the privacy wave consumers already demand. Shumer warns that a tiny group of researchers is shaping the future. A phone-swarm future flips that: the shaping happens in our hands, literally.

Of course, risks abound—coordination failures, emergent misalignments, security holes in p2p meshes. But the upside? Superintelligence that’s owned by no one company, accessible to anyone with a decent phone, and potentially more aligned because it’s woven into daily human life rather than siloed in server farms.

Shumer’s right: something big is happening. But the biggest part might be the decentralization we haven’t fully clocked yet. The swarm is coming. And when it does, the “something big” won’t feel like a distant lab breakthrough—it’ll feel like the entire world waking up smarter, together.

A Measured Response to Matt Shumer’s ‘Something Big Is Happening’

Editor’s Note: Yes, I wrote this with Grok. But, lulz, Shumer probably used AI to write his essay, so no harm no foul. I just didn’t feel like doing all the work to give his viral essay a proper response.

Matt Shumer’s recent essay, “Something Big Is Happening,” presents a sobering assessment of current AI progress. Drawing parallels to the early stages of the COVID-19 pandemic in February 2020, Shumer argues that frontier models—such as GPT-5.3 Codex and Claude Opus 4.6—have reached a point where they autonomously handle complex, multi-hour cognitive tasks with sufficient judgment and reliability. He cites personal experience as an AI company CEO, noting that these models now perform the technical aspects of his role effectively, rendering his direct involvement unnecessary in that domain. Supported by benchmarks from METR showing task horizons roughly doubling every seven months (with potential acceleration), Shumer forecasts widespread job displacement across cognitive fields, echoing Anthropic CEO Dario Amodei’s prediction of up to 50% of entry-level white-collar positions vanishing within one to five years. He frames this as the onset of an intelligence explosion, with AI potentially surpassing most human capabilities by 2027 and posing significant societal and security risks.

While the essay’s urgency is understandable and grounded in observable advances, it assumes a trajectory of uninterrupted exponential acceleration toward artificial superintelligence (ASI). An alternative perspective warrants consideration: we may be approaching a fork in the road, where progress plateaus at the level of sophisticated AI agents rather than propelling toward ASI.

A key point is that true artificial general intelligence (AGI)—defined as human-level performance across diverse cognitive domains—would, by its nature, enable rapid self-improvement, leading almost immediately to ASI through recursive optimization. The absence of such a swift transition suggests that current systems may not yet possess the generalization required for that leap. Recent analyses highlight constraints that could enforce a plateau: diminishing returns in transformer scaling, data scarcity (as high-quality training corpora near exhaustion), escalating energy demands for data centers, and persistent issues with reliability in high-stakes applications. Reports from sources including McKinsey, Epoch AI, and independent researchers indicate that while scaling remains feasible through the end of the decade in some projections, practical barriers—such as power availability and chip manufacturing—may limit further explosive gains without fundamental architectural shifts.

In this scenario, the near-term future aligns more closely with the gradual maturation and deployment of AI agents: specialized, chained systems that automate routine and semi-complex tasks in domains like software development, legal research, financial modeling, and analysis. These agents would enhance productivity without fully supplanting human oversight, particularly in areas requiring ethical judgment, regulatory compliance, or nuanced accountability. This path resembles the internet’s trajectory: substantial hype in the late 1990s gave way to a bubble correction, followed by a slower, two-decade integration that ultimately transformed society without immediate catastrophe.

Shumer’s recommendations—daily experimentation with premium tools, financial preparation, and a shift in educational focus toward adaptability—are pragmatic and merit attention regardless of the trajectory. However, the emphasis on accelerating personal AI adoption (often via paid subscriptions) invites scrutiny when advanced capabilities remain unevenly accessible and when broader societal responses—such as policy measures for workforce transition or safety regulations—may prove equally or more essential.

The evidence does not yet conclusively favor one path over the other. Acceleration continues in targeted areas, yet signs of bottlenecks persist. Vigilance and measured adaptation remain advisable, with ongoing observation of empirical progress providing the clearest guidance.

Pings From A Dark & Near Future

by Shelt Garner
@sheltgarner

It definitely seems as though the latter half of 2026 is going to be very turbulent for a number of different reasons. Chief among them: Trump seems poised to steal the 2026 mid-terms in a rather brazen manner.

The question, of course, is what the implications of doing such a thing would be. I just don’t think the Blues have it in them to do the type of things necessary to stop our slide into autocracy.

They just have too much fun venting on social media instead of organizing a General Strike. My main fear, of course, is that some sort of Blue Insurrection will happen and that, in turn, will give Trump the excuse he needs to declare martial law.

Oh boy.

It definitely will be interesting to see what, if anything, happens going forward.

J-Cal Is A Little Too Sanguine About The Fate Of Employees In The Age Of AI

by Shelt Garner
@sheltgarner

Jason Calacanis is one of the All-In podcast tech bros and generally he is the most even keeled of them all. But when it comes to the impact of AI on workers, he is way too sanguine.

He keeps hyping up AI and how it's going to allow laid-off people to ask for their old jobs back at a 20% premium. That is crazy talk. I think 2026 is going to be a tipping point year, when it's at least possible that the global economy finally, really begins to feel the impact of AI on jobs.

To the point that the 2026 midterms — if they are free and fair, which is up for debate — could be a Blue Wave.

And, what’s more, it could be that UBI — Universal Basic Income — will be a real policy initiative that people will be bandying about in 2028.

I just can’t predict the future, so I don’t know for sure. But everything is pointing towards a significant contraction in the global labor force, especially in tech and especially in the USA.

A Change In Context

by Shelt Garner
@sheltgarner

Very soon, my life is going to change. In context, if nothing else. The rather idyllic situation I’ve found myself in for a number of years is clearly coming to an end. I have been very grateful for this opportunity.

And now, sadly, a new era in my life is going to start probably in a few weeks.

So, I have to accept some turbulence. While I don’t think I will be prevented altogether from finishing the novel I’m working on, the context of that work will be very different. That may be for the best because now my time will be more limited and I will not just drift towards my goal.

At least, I hope that’s what the outcome will be.

Remember, while there’s life there’s hope.

Stop The Steal — Blue Edition: Ping, Ping, Ping

Stop The Steal 2026: Blue Insurrection(?)

by Shelt Garner
@sheltgarner

I will be absolutely stunned if the 2026 midterms are free and fair. I just don’t see it happening. Now, the issue, of course, is what the consequences of that will be.

Do the Blues have it in them to actually, like, respond to the theft of the 2026 midterms? Could they possibly do something along the lines of an insurrection like we saw in 2021?

No. They just don’t have it in them. The center-Left is in the odd situation of being the protectors of law-and-order, “the Establishment” of rules and norms, and, lulz, they just don’t have it in them to protest the brazen theft of the midterms.

So, as such, the US will become a zombie, “managed democracy” like they have in Hungary and Russia. Good luck. You’ll need it.

The Turbulence Begins

by Shelt Garner
@sheltgarner

So. The first signs of the turbulence I knew, just knew would be a part of this year has pinged me. This year is going to be very interesting — in a bad way — I’m afraid.

But you have to make the best of what you’ve got, I guess. And just because things grow dark for a little bit doesn’t mean they won’t bounce back eventually. But I do think the idyllic situation I’ve been in will be over by the end of the month.

Then, things are going to get…interesting. Then the whole context of me working on a novel will be different. So, all the haters and stalkers who have been upset that I seemingly haven’t been a productive member of society will finally get what they want and they can also fuck the fuck off. 🙂

It Was (Almost) 20 Years Ago Today

by Shelt Garner
@sheltgarner

I was in the Philippines when the drama that was ROKon Magazine began in the summer of 2006. That’s when, as I recall, I got an email from the late Annie Shapiro showing an interest in helping me start a magazine.

It was a long time ago and nobody cares anymore, as they say.

I don’t even care, even if I do think about it a lot to this day.

Annie was a curious figure, to say the least. Now, Annie is dead so I can talk about her in a more frank way than maybe I could otherwise without someone getting really mad. They didn’t call Annie “Crazy Annie” for nothing.

But for better or worse, Annie changed my life. Big time. Without her, I would never have gotten to experience, for a few days *being cool.* It all went to shit soon enough, of course, but it is better to have loved and lost than never loved at all.

I can be very ambivalent about what happened between myself and Annie. I was no saint when it came to Annie, especially in the last days of the magazine. (My version.) But, then, Annie did turn around and start the magazine up again without telling me.

Sheesh.

But, like I said, it was a long, long time ago. Everyone has moved on but me. I still think the story of ROKon Magazine is the greatest story never told.

Well, I Don’t Feel So Bad Being ‘AI First’ With This Novel