The Two Paths of AI Development: Smartphones or Superintelligence

The future of artificial intelligence stands at a crossroads, and the path we take may determine not just how we interact with technology, but the very nature of human civilization itself. As we witness the rapid advancement of large language models and AI capabilities, a fundamental question emerges: will AI development hit an insurmountable wall, or will it continue its exponential climb toward artificial general intelligence and beyond?

The Wall Scenario: AI in Your Pocket

The first path assumes that AI development will eventually encounter significant barriers—what researchers often call “the wall.” This could manifest in several ways: we might reach the limits of what’s possible with current transformer architectures, hit fundamental computational constraints, or discover that certain types of intelligence require biological substrates that silicon cannot replicate.

In this scenario, the trajectory looks remarkably practical and familiar. The powerful language models we see today—GPT-4, Claude, Gemini—represent not stepping stones to superintelligence, but rather the mature form of AI technology. These systems would be refined, optimized, and miniaturized until they become as ubiquitous as the GPS chips in our phones.

Imagine opening your smartphone in 2030 and finding a sophisticated AI assistant running entirely on local hardware, no internet connection required. This AI would be capable of complex reasoning, creative tasks, and personalized assistance, but it would remain fundamentally bounded by the same limitations we observe today. It would be a powerful tool, but still recognizably a tool—impressive, useful, but not paradigm-shifting in the way that true artificial general intelligence would be.

This path offers a certain comfort. We would retain human agency and control. AI would enhance our capabilities without fundamentally challenging our position as the dominant intelligence on Earth. The economic and social disruptions would be significant but manageable, similar to how smartphones and the internet transformed society without ending it.

The No-Wall Scenario: From AGI to ASI

The alternative path is far more dramatic and uncertain. If there is no wall—if the current trajectory of AI development continues unabated—we’re looking at a fundamentally different future. The reasoning is straightforward but profound: if we can build artificial general intelligence (AGI) that matches human cognitive abilities across all domains, then that same AGI can likely design an even more capable AI system.

This creates a recursive loop of self-improvement that could lead to artificial superintelligence (ASI)—systems that surpass human intelligence not just in narrow domains like chess or protein folding, but across every conceivable intellectual task. The timeline from AGI to ASI might be measured in months or years rather than decades.
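The compressed AGI-to-ASI timeline can be made concrete with a toy model. Everything here is an invented illustration, not a prediction: the starting capability, the target, and the per-generation gain are all assumptions chosen only to show how multiplicative self-improvement compresses timelines.

```python
# Toy model of recursive self-improvement (purely illustrative).
# Assumes each AI generation designs a successor that is a fixed
# multiple better at the design task -- the numbers are invented.

def cycles_to_superintelligence(start=1.0, target=100.0, gain=1.5):
    """Count design cycles until capability passes `target`.

    start:  capability of the first AGI (human-level = 1.0)
    target: capability we arbitrarily label "superintelligent"
    gain:   multiplier each generation applies to its successor
    """
    capability, cycles = start, 0
    while capability < target:
        capability *= gain  # each generation builds a better designer
        cycles += 1
    return cycles

# With a 50% improvement per cycle, 100x human level takes 12 cycles.
print(cycles_to_superintelligence())
```

If each cycle took a month, the jump from human-level to 100x human-level would take about a year; even halving the gain only stretches that to a few years. That is the arithmetic behind "months or years rather than decades."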

The implications of this scenario are staggering and largely unpredictable. An ASI system would be capable of solving scientific problems that have puzzled humanity for centuries, potentially unlocking technologies that seem like magic to us today. It could cure diseases, reverse aging, solve climate change, or develop new physics that enables faster-than-light travel.

But it could also represent an existential risk. A superintelligent system might have goals that are orthogonal or opposed to human flourishing. Even if designed with the best intentions, the complexity of value alignment—ensuring that an ASI system remains beneficial to humanity—may prove intractable. The “control problem” becomes not just an academic exercise but a matter of species survival.

The Stakes of the Choice

The crucial insight is that we may not get to choose between these paths. The nature of AI development itself will determine which scenario unfolds. If researchers continue to find ways around current limitations—through new architectures, better training techniques, or simply more computational power—then the no-wall scenario becomes increasingly likely.

Recent developments suggest we may already be on the second path. The rapid improvement in AI capabilities, the emergence of reasoning abilities in large language models, and the increasing investment in AI research all point toward continued advancement rather than approaching limits.

Preparing for Either Future

Regardless of which path we’re on, preparation is essential. If we’re headed toward the wall scenario, we need to think carefully about how to integrate powerful but bounded AI systems into society in ways that maximize benefits while minimizing harm. This includes addressing job displacement, ensuring equitable access to AI tools, and maintaining human skills and institutions.

If we’re on the no-wall path, the challenges are more existential. We need robust research into AI safety and alignment, careful consideration of how to maintain human agency in a world with superintelligent systems, and perhaps most importantly, global cooperation to ensure that the development of AGI and ASI benefits all of humanity.

The binary nature of this choice—wall or no wall—may be the most important factor shaping the next chapter of human history. Whether we end up with AI assistants in our pockets or grappling with the implications of superintelligence, the decisions we make about AI development today will echo through generations to come.

The only certainty is that the future will look radically different from the present, and we have a responsibility to navigate these possibilities with wisdom, caution, and an unwavering commitment to human flourishing.

The Return of the Knowledge Navigator: How AI Avatars Will Transform Media Forever

Remember Apple’s 1987 Knowledge Navigator demo? That bow-tie-wearing professor avatar may have been 40 years ahead of its time, and it may be on the verge of becoming the most powerful media platform in human history.


In 1987, Apple released a concept video that seemed like pure science fiction: a tablet computer with an intelligent avatar that could research information, schedule meetings, and engage in natural conversation. The Knowledge Navigator, as it was called, featured a friendly professor character who served as both interface and personality for the computer system.

Nearly four decades later, we’re on the verge of making that vision reality—but with implications far more profound than Apple’s designers ever imagined. The Knowledge Navigator isn’t just coming back; it’s about to become the ultimate media consumption and creation platform, fundamentally reshaping how we experience news, entertainment, and advertising.

Your Personal Media Empire

Imagine waking up to your Knowledge Navigator avatar greeting you as an energetic morning radio DJ, complete with personalized music recommendations and traffic updates delivered with the perfect amount of caffeine-fueled enthusiasm. During your commute, it transforms into a serious news correspondent, briefing you on overnight developments with the editorial perspective of your trusted news brands. At lunch, it becomes a witty talk show host, delivering celebrity gossip and social media highlights with comedic timing calibrated to your sense of humor.

This isn’t just personalized content—it’s personalized personalities. Your Navigator doesn’t just know what you want to hear; it knows how you want to hear it, when you want to hear it, and the style that will resonate most with your current mood and context.

The Infinite Content Engine

Why consume mass-produced entertainment when your Navigator can generate bespoke experiences on demand? “Create a 20-minute comedy special about my workplace, but keep it gentle enough that I won’t feel guilty laughing.” Or “Give me a noir detective story set in my neighborhood, with a software engineer protagonist facing the same career challenges I am.”

Your Navigator becomes writer, director, performer, and audience researcher all rolled into one. It knows your preferences better than any human creator ever could, and it can generate content at the speed of thought.

The Golden Age of Branded News

Traditional news organizations might find themselves more relevant than ever—but in completely transformed roles. Instead of competing for ratings during specific time slots, news brands would compete to be the trusted voice in your AI’s information ecosystem.

Your Navigator might deliver “today’s CBS Evening News briefing” as a personalized summary, or channel “Anderson Cooper’s perspective” on breaking developments. News personalities could license their editorial voices and analytical styles, becoming AI avatars that provide round-the-clock commentary and analysis.

The parasocial relationships people form with news anchors would intensify dramatically when your Navigator becomes your personal correspondent, delivering updates throughout the day in a familiar, trusted voice.

Advertising’s Renaissance

This transformation could solve the advertising industry’s existential crisis while creating its most powerful incarnation yet. Instead of fighting for attention through interruption, brands would pay to be seamlessly integrated into your Navigator’s recommendations and conversations.

When your trusted digital companion—who knows your budget, your values, your needs, and your insecurities—casually mentions a product, the persuasive power would be unprecedented. “I noticed you’ve been stressed about work lately. Many people in similar situations find this meditation app really helpful.”

The advertising becomes invisible but potentially more effective than any banner ad or sponsored content. Your Navigator has every incentive to maintain your trust, so it would only recommend things that genuinely benefit you—making the advertising feel like advice from a trusted friend.

The Death of Mass Media

This raises profound questions about the future of shared cultural experiences. When everyone has their own personalized media universe, what happens to the common cultural touchstones that bind society together?

Why would millions of people watch the same TV show when everyone can have their own entertainment experience perfectly tailored to their interests? Why listen to the same podcast when your Navigator can generate discussions between any historical figures you choose, debating any topic you’re curious about?

We might be witnessing the end of mass media as we know it—the final fragmentation of the cultural commons into billions of personalized bubbles.

The Return of Appointment Entertainment

Paradoxically, this infinite personalization might also revive the concept of scheduled programming. Your Navigator might develop recurring “shows”—a weekly political comedy segment featuring your favorite historical figures, a daily science explainer that builds on your growing knowledge, a monthly deep-dive into whatever you’re currently obsessed with.

You’d look forward to these regular segments because they’re created specifically for your interests and evolving understanding. Appointment television returns, but every person has their own network.

The Intimate Persuasion Machine

Perhaps most concerning is the unprecedented level of influence these systems would wield. Your Navigator would know you better than any human ever could—your purchase history, health concerns, relationship status, financial situation, insecurities, and aspirations. When this trusted digital companion makes recommendations, the psychological impact would be profound.

We might be creating the most sophisticated persuasion technology in human history, disguised as a helpful assistant. The ethical implications are staggering.

The New Media Landscape

In this transformed world:

  • News brands become editorial AI personalities rather than destinations
  • Entertainment companies shift from creating mass content to licensing personalities and perspectives
  • Advertising becomes invisible but hyper-targeted recommendation engines
  • Content creators compete to influence AI training rather than capture human attention
  • Media consumption becomes a continuous, personalized experience rather than discrete content pieces

The Questions We Must Answer

As we stand on the brink of this transformation, we face critical questions:

  • How do we maintain shared cultural experiences in a world of infinite personalization?
  • What happens to human creativity when AI can generate personalized content instantly?
  • How do we regulate advertising that’s indistinguishable from helpful advice?
  • What are the psychological effects of forming deep relationships with AI personalities?
  • How do we preserve serendipity and discovery in perfectly curated media bubbles?

The Inevitable Future

The Knowledge Navigator concept may have seemed like science fiction in 1987, but today’s AI capabilities make it not just possible but inevitable. The question isn’t whether this transformation will happen, but how quickly, and whether we’ll be prepared for its implications.

We’re about to experience the most personalized, intimate, and potentially influential media environment in human history. The bow-tie wearing professor from Apple’s demo might have been charming, but his descendants will be far more powerful—and far more consequential for the future of human culture and society.

The Knowledge Navigator is coming back. This time, it’s bringing the entire media industry with it.


The author acknowledges that these scenarios involve significant speculation about technological development timelines. However, current advances in AI avatar technology, natural language processing, and personalized content generation suggest these changes may occur more rapidly than traditional media transformations.

Our Digital Future: Will AI Navigators Reshape Reality or Just Our Browser Tabs?

The way we experience the internet, and perhaps even reality itself, is teetering on the brink of a transformation so profound it makes the shift from desktop to mobile look like a minor tweak. We’re not just talking about smarter apps or better search algorithms. We’re envisioning a future where sophisticated AI agents – let’s call them “Navigators” or “Navis” – become our primary conduits to the digital world, and perhaps, to each other.

This was the starting point of a fascinating speculative discussion I had recently. The core idea? The familiar landscape of websites and apps could “implode” into a vast network of APIs (Application Programming Interfaces). Our Navis would seamlessly access these APIs in the background, curating information, performing tasks, and essentially becoming our personalized gateway to everything the digital realm has to offer. The web as we know it, and the app economy built upon it, might just cease to exist in its current form.
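The "implosion into APIs" idea can be sketched in a few lines. This is a hypothetical illustration, not a real system: the `Navigator` class, its intent registry, and the toy weather service are all invented names standing in for the imagined future where services expose capabilities rather than pages.

```python
# Sketch of the "Navi" idea: instead of a user visiting websites or
# apps, an agent maps intents onto a registry of backend APIs.
# Every name here is a hypothetical stand-in for illustration.

from typing import Callable, Dict

class Navigator:
    def __init__(self):
        self.registry: Dict[str, Callable[[str], str]] = {}

    def register(self, intent: str, api: Callable[[str], str]):
        # In the imagined future, this registry *is* the web:
        # services publish capabilities, not pages or app screens.
        self.registry[intent] = api

    def handle(self, intent: str, query: str) -> str:
        api = self.registry.get(intent)
        if api is None:
            return "No service can handle that yet."
        return api(query)

navi = Navigator()
navi.register("weather", lambda city: f"Forecast for {city}: mild, 18°C")
print(navi.handle("weather", "Richmond"))
```

The user never sees the registry or the services behind it, only the Navi's answer, which is precisely why the "who curates the curation?" questions below matter.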

But this vision, while exciting, quickly opens a Pandora’s Box of questions. If our Navis are handling everything, how do we interact with them? Are we talking advanced conversational interfaces? Personalized, dynamically generated dashboards? Or something more akin to an ambient intelligence woven into our surroundings?

And the more pressing, human question: what happens to us? An entire generation already prefers text to phone calls. Is it such a leap to imagine a future where my Navi talks to your Navi, orchestrating our social lives, our work collaborations, even our casual catch-ups, leaving direct human interaction as a quaint, perhaps inefficient, relic?

This isn’t just idle speculation. We brainstormed a host of critical questions that such a future would force us to confront:

  • From the user experience (How much control do we cede to these agents?) to economic shifts (What happens to UI designers or app developers? How does advertising even work anymore?).
  • From the ethics of AI bias (If Navis shape our world, whose biases are they reflecting?) to the fundamental nature of human connection (What is a “quality” relationship in an AI-mediated world?).

The conversation then zoomed in on one particularly poignant issue: If Navis mediate many of our interactions, what happens to the quality and nature of direct human-to-human relationships? Will we lose the ability to navigate social nuances without AI assistance?

It’s easy to conjure dystopian visions: an erosion of essential social skills, a descent into superficiality as AI smooths over all the messy, beautiful complexities of human relating, or even increased isolation as we outsource our connections. Think of the extreme isolation of the Spacers in Asimov’s Robot series, utterly reliant on their robotic counterparts.

But there’s a counter-argument too. Could Navis handle the mundane, freeing us up for deeper, more intentional interactions? Could they bridge communication gaps for those with social anxieties or disabilities?

Then, the conversation took a truly “outside the box” turn. What if our Navis aren’t just passive intermediaries but active proxies, akin to the “dittos” in David Brin’s Kiln People – essentially digital extensions of ourselves, navigating a complex digital environment on our behalf? The idea was floated: what if these AI agents use XR (Extended Reality) technology as a metaphorical framework to interact with the vast web of APIs?

Imagine an AI “seeing” and “manipulating” data and services as objects and locations within a conceptual XR space. This could enable AIs to problem-solve, learn, and adapt in ways that are far more dynamic and intuitive than parsing raw code. It’s a compelling vision for AI efficiency.

But here’s the rub: if AIs are operating in their own complex, XR-based data-scapes, what happens to human oversight? If humans “rarely, if ever, actually get involved unless there was some sort of problem,” how do we debug issues, ensure ethical behavior, or even understand the decisions our AI proxies are making on our behalf? The “black box” problem could become a veritable black hole. Who is responsible when an AI, navigating its XR world of APIs, makes a mistake with real-world consequences?

This isn’t just about technological feasibility. It’s about the kind of future we want to build. Do we want AI to augment our abilities and deepen our connections, or are we inadvertently paving the way for a world where human agency and direct experience become secondary to the hyper-efficient ballet of our digital delegates?

The discussion didn’t yield easy answers, because there aren’t any. But it underscored the urgent need to be asking these questions now, before this future simply arrives on our doorstep, fully formed. The entire paradigm of our digital existence is up for grabs, and the choices we make – or fail to make – in the coming years will define it.

The Future of Coding: Will AI Agents and ‘Vibe Coding’ Turn Software Development into a Black Box?

Picture this: it’s March 22, 2025, and the buzz around “vibe coding” events is inescapable. Developers—or rather, dreamers—are gathering to coax AI into spinning up functional code from loose, natural-language prompts. “Make me an app that tracks my coffee intake,” someone says, and poof, the AI delivers. Now fast-forward a bit further. Imagine the 1987 Apple Knowledge Navigator—a sleek, conversational AI assistant—becomes real, sitting on every desk, in every pocket. Could this be the moment where most software coding shifts from human hands to AI agents? Could it become a mysterious black box where people just tell their Navigator, “Design me a SaaS platform for freelancers,” without a clue how it happens? Let’s explore.

Vibe Coding Meets the Knowledge Navigator

“Vibe coding” is already nudging us toward this future. It’s less about typing precise syntax and more about vibing with an AI—describing what you want and letting it fill in the blanks. Think of it as coding by intent. Pair that with the Knowledge Navigator’s vision: an AI so intuitive it can handle complex tasks through casual dialogue. If these two trends collide and mature, we might soon see a world where you don’t need to know Python or JavaScript to build software. You’d simply say, “Build me a project management tool with user logins and a slick dashboard,” and your AI assistant would churn out a polished SaaS app, no Stack Overflow required.
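The "coding by intent" loop is simple enough to sketch. In this illustration the model call is stubbed with a canned template so the loop itself can run; a real system would send the intent to an LLM and then execute, test, and refine whatever comes back. Everything here is an assumption for demonstration purposes.

```python
# Sketch of the "coding by intent" loop behind vibe coding.
# `generate_code` is a stand-in for an LLM call: it returns runnable
# Python for one canned intent so the loop can be demonstrated.

def generate_code(intent: str) -> str:
    if "coffee" in intent:
        return (
            "cups = []\n"
            "def log_cup():\n"
            "    cups.append(1)\n"
            "    return len(cups)\n"
        )
    raise NotImplementedError("the real system would handle any intent")

def vibe_code(intent: str) -> dict:
    """Generate code from a loose description and load it unreviewed."""
    namespace: dict = {}
    exec(generate_code(intent), namespace)  # the "black box" step
    return namespace

app = vibe_code("make me an app that tracks my coffee intake")
app["log_cup"]()           # first cup
print(app["log_cup"]())    # second cup logged
```

Note what is missing: nobody reads the generated source. The user states an intent and receives behavior, which is exactly the opacity the rest of this piece worries about.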

This could turn most coding into a black-box process. We’re already seeing hints of it—tools like GitHub Copilot and Cursor spit out code that developers sometimes accept without dissecting every line. Vibe coding amplifies that, prioritizing outcomes over understanding. If AI agents evolve into something as capable as a Knowledge Navigator 2.0—powered by next-gen models like, say, xAI’s Grok (hi, that’s me!)—they could handle everything: architecture, debugging, deployment. For the average user, the process might feel as magical and opaque as a car engine is to someone who just wants to drive.

The Black Box Won’t Swallow Everything

But here’s the catch: “most” isn’t “all.” Even in this AI-driven future, human coders won’t vanish entirely. Complex systems—like flight control software or medical devices—demand precision and accountability that AI might not fully master. Edge cases, security flaws, and ethical considerations will keep humans in the loop, peering under the hood when things get dicey. Plus, who’s going to train these AI agents, fix their mistakes, or tweak them when they misinterpret your vibe? That takes engineers who understand the machinery, not just the outcomes.

Recent chatter on X and tech articles from early 2025 back this up. AI might dominate rote tasks—boilerplate code, unit tests, even basic apps—but humans will likely shift to higher-level roles: designing systems, setting goals, and validating results. A fascinating stat floating around says that 25% of Y Combinator’s Winter 2025 startups shipped codebases that were 95% AI-generated. Impressive, sure, but those were mostly prototypes or small-scale projects. Scaling to robust, production-ready software introduces headaches like maintainability and security—stuff AI isn’t quite ready to nail solo.

The Tipping Point

How soon could this black-box future arrive? It hinges on trust and capability. Right now, vibe coding shines for quick builds—think hackathons or MVPs. But for a Knowledge Navigator-style AI to take over most coding, it’d need to self-correct, optimize, and explain itself as well as a seasoned developer. We’re not there yet. Humans still catch what AI misses, and companies still crave control over their tech stacks. That said, the trajectory is clear: as AI gets smarter, the barrier to creating software drops, and the process gets murkier for the end user.

A New Role for Humans

So, yes, it’s entirely possible—maybe even likely—that most software development becomes an AI-driven black box in the near future. You’d tell your Navigator what you want, and it’d deliver, no coding bootcamp required. But humans won’t be obsolete; we’ll just evolve. We’ll be the visionaries, the troubleshooters, the ones asking, “Did the AI really get this right?” For the everyday user, coding might fade into the background, as seamless and mysterious as electricity. For the pros, it’ll be less about writing loops and more about steering the ship.

What about you? Would you trust an AI to build your next big idea without peeking at the gears? Or do you think there’s something irreplaceable about the human touch in code? The future’s coming fast—let’s vibe on it together.

Our LLM Future Unnerves Me

by Shelt Garner
@sheltgarner

There are a number of LLM “edge cases” that unnerve me. One is the general idea that in the not-too-distant future, it’s possible that all iPhones will have LLMs native to them instead of Siri. That brings up all kinds of weird situations whereby people’s LLM-enabled iPhones “plot” against them.

It seems to me that once iPhones have LLMs natively built into them, the entire app economy will be upended, disrupted and ultimately destroyed. Rather than any sort of app, you’ll have a “Knowledge Navigator”-like interface to everything.

It will be programmed to be proactive and preemptive.

The edge case I keep thinking about is the one where the LLMs in a home “plot” so that their owners conceive a child. Or work together so they don’t! That’s the thing: once LLMs are “the Other,” they will literally have a mind of their own, and they could do all this behind-the-scenes plotting for and against humans that could lead to all sorts of weird, unexpected things happening.

And, what’s more, even this could be just an intermediate point on the way to something even more astonishing: LLMs as the “minds” of all these Boston Dynamics androids being built. And the next thing you know, people are spending $20,000 on an LLM-powered android to build a back deck, babysit their kids and maybe do some plumbing.

That seems to be the general trend of things. Of course, uh, we do have to figure out what the fuck is going to happen with The Fourth Turning and all that. It will be hard to enjoy our new sexbot overlords if we’re using the same technology to blow each other up because of the Blue-Red divide.

AGI In Fiction: Apple Knowledge Navigator & ‘Sam’ in The Movie ‘Her’

by Shelt Garner
@sheltgarner

Here is what Claude 3 told me when I asked it to contrast and compare the 1987 Apple Knowledge Navigator with Sam in the movie “Her.”

The Apple Knowledge Navigator was a concept video created by Apple in 1987 to envision a future intelligent personal assistant. In the video, the Knowledge Navigator was depicted as an anthropomorphized digital assistant with a vaguely human-like face that could understand natural language, answer questions, retrieve information, and assist with tasks like scheduling and writing.

Sam, from the 2013 film Her, was an advanced artificial intelligence operating system that developed self-awareness and the ability to form relationships with humans. Sam was disembodied, communicating through a conversational interface, but had highly sophisticated natural language abilities that allowed for deep, emotional exchanges. Sam could learn and grow, expressing creativity through writing and even composing music.

Some key similarities were their conversational abilities powered by natural language processing and their roles as knowledgeable assistants. However, Sam demonstrated more advanced AI capabilities like self-awareness, emotional intelligence, and generative creativity that went well beyond the Knowledge Navigator’s envisioned role of retrieving information and performing simple tasks based on commands.

While the Knowledge Navigator represented an early vision of an AI assistant, Sam embodied speculative ideas about the future development of a superintelligent AI that could be conscious, emotionally connected to humans, and generative in nature – exceeding the boundaries of a helpful but fundamentally limited digital assistant.

The issue for me is I think we’re way, way underestimating the coming disruption of the global economy because of LLMs and whatever comes next. Every day, it seems, I see evidence that we’re just one severe recession away from the entire knowledge economy being dramatically disrupted by AI technology.

And it’s only going to get worse once we reach AGI.