Of David Brin’s ‘Kiln People’ And AI Agents

There’s a surprisingly good science-fiction metaphor for where AI agents seem to be heading, and it comes from David Brin’s Kiln People. In that novel, people can create temporary copies of themselves—“dittos”—made of clay and animated with a snapshot of their mind. You send a ditto out to do a task; it lives a short, intense life, gathers experience, and then either dissolves or has its memories reintegrated into the original. The world changes, but quietly. Most of the time, it just makes errands easier.

That turns out to be an uncannily useful way to think about modern AI agents.

When people imagine “AI assistants,” they often picture a single, unified intelligence sitting in their phone or in the cloud. But what’s emerging instead looks far more like a swarm of short-lived, purpose-built minds. An agent doesn’t think in one place—it spawns helpers, delegates subtasks, checks its own work, and quietly discards the pieces it no longer needs. Most of these sub-agents are never seen by the user, just like most dittos in Kiln People never meet the original face-to-face.

This is especially true once you mix local agents on personal devices with cloud-based agents backed by massive infrastructure. A task might start on your phone, branch out into the cloud where several specialized agents tackle it in parallel, and then collapse back into a single, polished response. To the user, it feels simple. Under the hood, it’s a choreography of disposable minds being spun up and torn down in seconds.
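Strip away the metaphor and that choreography is mostly a fan-out/fan-in pattern. Here is a minimal, framework-agnostic sketch in Python; the names (run_subagent, merge_results, handle_request) and the placeholder sleep standing in for a model call are illustrative assumptions, not any particular vendor’s agent API.

```python
# A toy sketch of fan-out/fan-in agent orchestration.
# All names and behaviors here are hypothetical illustrations,
# not a real agent framework.
import asyncio


async def run_subagent(subtask: str) -> str:
    """A short-lived 'ditto': spun up for one subtask, then discarded."""
    await asyncio.sleep(0.1)  # stand-in for a model call or tool use
    return f"result for {subtask!r}"


def merge_results(results: list[str]) -> str:
    """Fold the sub-agents' outputs back into a single response."""
    return "; ".join(results)


async def handle_request(task: str) -> str:
    # Fan out: split the task and run the pieces in parallel.
    subtasks = [f"{task} / part {i}" for i in range(3)]
    results = await asyncio.gather(*(run_subagent(s) for s in subtasks))
    # Fan in: the user only ever sees this merged answer.
    return merge_results(results)


if __name__ == "__main__":
    print(asyncio.run(handle_request("plan a trip")))
```

The sub-agents exist only for the duration of that gather call, which is the point: the user sees one answer, not the swarm that produced it.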

Brin’s metaphor also captures something more unsettling—and more honest—about how society treats these systems. Dittos are clearly mind-like, but they’re cheap, temporary, and legally ambiguous. So people exploit them. They rely on them. They feel slightly uncomfortable about them, and then move on. That moral gray zone maps cleanly onto AI agents today: they’re not people, but they’re not inert tools either. They occupy an in-between space that makes ethical questions easy to postpone and hard to resolve.

What makes the metaphor especially powerful is how mundane it all becomes. In Kiln People, the technology is revolutionary, but most people use it for convenience—standing in line, doing surveillance, gathering information. Likewise, the future of agents probably won’t feel like a sci-fi singularity. It will feel like things quietly getting easier while an enormous amount of cognition hums invisibly in the background.

Seen this way, AI agents aren’t marching toward a single godlike superintelligence. They’re evolving into something more like a distributed self: lots of temporary, task-focused “dittos,” most of which vanish without ceremony, a few of which leave traces behind. Memory becomes the real currency. Continuity comes not from persistence, but from what gets folded back in.
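To make “what gets folded back in” concrete, here is a toy sketch under the same assumptions as above: the worker function is ephemeral, and the only thing that persists is whatever it reintegrates into a shared memory object. Every name is hypothetical.

```python
# Continuity from reintegration, not persistence: the agents vanish,
# the memory they fold back in remains. Illustrative names only.
from dataclasses import dataclass, field


@dataclass
class PersistentMemory:
    notes: list[str] = field(default_factory=list)

    def reintegrate(self, transcript: str) -> None:
        """Keep only what the short-lived agent brought back."""
        self.notes.append(transcript)


def ephemeral_agent(task: str, memory: PersistentMemory) -> None:
    transcript = f"learned something while doing {task!r}"
    memory.reintegrate(transcript)  # the agent's only lasting trace


memory = PersistentMemory()
for task in ("check calendar", "compare flights", "draft reply"):
    ephemeral_agent(task, memory)

print(memory.notes)  # what persists is the folded-back experience, not the agents
```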

If Kiln People ends with an open question, it’s one that applies just as well here: what obligations do we have to the minds we create for our own convenience—even if they only exist for a moment? The technology may be new, but the discomfort it raises is very old. And that’s usually a sign the metaphor is doing real work.

Author: Shelton Bumgarner

I am the Editor & Publisher of The Trumplandia Report
