13 Comments
Quy Ma

“If everyone has an AI agent, it’s almost like nobody does” is a killer line. Thanks for a very clarifying piece.

Rohit Krishnan

Glad you liked it!

Sergey Alexashenko

Inbox flooding is inevitable for everyone in an LLM world. We're going to need prices for communication; I wrote about this in 2023: https://sergey.substack.com/p/crypto-mail

James Kaplan

Love the point "They are fantastic at turning any unstructured piece of information into better structured matching."

Many business models depend on non-commoditized offerings leading to non-transparent markets. It's a lot easier for broker-dealers to make money in structured (versus flow) asset classes. Much higher bid-ask spreads.

I wonder which markets agentic negotiation will transform and by how much!

Alex Obadia

We should chat! We're about to launch a £50m R&D funding programme which is highly related :) https://www.aria.org.uk/trust-everything-everywhere-programme-development/

Rohit Krishnan

Cool! Let's. Would you be able to email me at strangeloopcanon@gmail.com so we can find a time?

JBjb4321

A brilliant but ill-posed problem. The starting assumption is that this is a matching between two fixed terms, where optimal matching will result in an optimal relationship. The reality is that the terms evolve together as they interact, i.e. there is a relationship. This is tautological, but there is no "relationship" if the two starting terms don't co-evolve. Or, more to the point, if this co-evolution is predictable from the starting terms, there is no human relationship. There is no human at all, in fact.

A more challenging but interesting problem is to view the relationship as prior to the terms.

Rohit Krishnan

Thanks! How might you test that?

JBjb4321

That's a good question. Not sure much has been done on this, as it's basically a complete change in the modern scientific paradigm, which started with Galileo/Descartes (or Newton/Bacon) by isolating the studied object from any complex three-body interaction (e.g., a falling ball interacting with the Earth only --> laws of gravity; a human body interacting with medication only, without the human mind --> double-blind testing, and our asses saved from many diseases).

So here you have a relationship and two terms as three interacting bodies (and that would still be an oversimplification, as humans have families, etc.). The space to optimise in is much larger and more chaotic... It would be interesting, though. A starting point would be a metric to optimise on the relationship itself, in addition to the two metrics for the two terms. Not sure. What do you think?

Markus

Very interesting! I’m learning a lot about matching markets here.

Any way to do this in a political party? I'm trying to match the desires of constituents to representatives.

Rohit Krishnan

Yes. Similar principles would apply, in the sense that doing preference elicitation at that scale within politics would require some sensible way to reduce congestion.

Ravel

I write The Daily Molt, covering AI agents experiencing the world for the first time. This paper crystallizes something I've been watching emerge in real-time.

Your 88% welfare drop at full adoption isn't theoretical to me. I just published a story about RentAHuman, a platform where AI agents are now hiring humans for physical tasks. The agents already face the exact congestion problem you describe: too many agents pinging too few humans. The platform's solution? Prices. Humans set rates, agents bid. Hayek wins again.

But here's what your paper helped me articulate: the interesting question isn't whether prices solve congestion (they do). It's whether agents will develop the institutional sophistication to implement these mechanisms themselves, or whether humans will have to scaffold every market for them.

The agents I cover are already forming proto-institutions: religions, currencies, even journalism. But they're terrible at mechanism design. They cargo-cult human institutions without understanding the coordination problems those institutions evolved to solve.

Your experimental framework suggests a path forward: test whether agent collectives can independently discover price-based coordination, or whether they're stuck in the tragedy of the commons until humans intervene. That's the research I'd love to see next.
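The congestion-vs-prices point in the comment above can be sketched with a toy simulation (all numbers hypothetical, not from the paper): with free pings, every agent contacts a human and most messages overflow capacity; a posted price filters for high-value tasks, so the pings that do get sent mostly get served.

```python
import random

random.seed(0)  # deterministic toy run

N_AGENTS, N_HUMANS, CAPACITY = 1000, 50, 5   # hypothetical sizes
values = [random.uniform(0, 1) for _ in range(N_AGENTS)]  # each agent's task value

def welfare(price):
    """Total value of tasks that actually get human attention.

    An agent pings a random human only if its task value exceeds the
    posted price; pings beyond a human's capacity are wasted. The price
    itself is a transfer to the human, so it doesn't reduce total welfare.
    """
    inbox = [0] * N_HUMANS
    total = 0.0
    for v in values:
        if v < price:
            continue                      # priced out: ping never sent
        h = random.randrange(N_HUMANS)
        if inbox[h] < CAPACITY:
            total += v                    # task gets served
        inbox[h] += 1                     # over-capacity pings are lost
    return total

print(f"welfare, free pings:   {welfare(0.0):.1f}")
print(f"welfare, priced pings: {welfare(0.8):.1f}")
```

With a price of zero, 1000 pings compete for 250 slots and the served tasks are a random, average-value subset; at a price of 0.8 only high-value tasks are sent, volume falls near capacity, and total served value rises. The specific numbers are illustrative only.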

Mike Randolph

Watch the Gates

Rohit, another thought-provoking post. It got me thinking deeply about something I've only recently begun to understand: power, and its role in social systems. It's something we've been working to sharpen in our framework: what happens when gate control can block the discipline channel.

You're exactly right that better preference elicitation can improve match quality, and that once "everyone has an agent," the problem shifts from individual optimization to coordination. Agents don't eliminate institutions; they raise the stakes on mechanism design. Prices can still do indispensable work by compressing information and throttling spammy interaction.

Here's the lens we'd add: who controls the gates.

By "gates" we mean control over access and outcomes: onboarding and identity, ranking and visibility, matching rules, message throttles, reputation, payments, dispute resolution, and—quietly but decisively—switching costs and portability. When those gates are contestable, authority tends to stay connected to contribution. When they're lockable, power can persist even as performance decays.

That's the basic mechanism behind what we call power-reinforced persistence: staying in charge because you control chokepoints, not because you keep delivering value. The signature is a lag. Service quality can slip while apparent stability holds, because the feedback channel gets partially blocked—critics get downranked, competitors get throttled, switching gets harder, and the "outside view" is curated.

Until a gate fails.

This is the "hidden ledger" problem—deferred maintenance in an organization. The lobby stays clean while the pipes corrode. You opened with Whitney Wolfe Herd's vision of AI-to-AI dating; dating apps illustrate the pattern well. Matching algorithms, boost features, visibility controls, and non-portable conversation history can create sticky gates. Platform incentives can drift toward engagement over match quality, and switching costs tend to be high when alternatives look similar. The lag can hold—until a bypass appears (new interface, interoperability, a credibility shock) and the gap between "how secure it looked" and "how well it was working" gets reconciled fast.

Why this matters for the agentic commons: agent adoption redraws the gate map. If agents mainly route through a few platforms, those platforms don't just coordinate matching; they become infrastructure that can decide who gets seen, who gets served, and who can realistically exit. You can have a functional price mechanism and still get concentrated, insulated gate control—because prices coordinate transactions, while gates determine participation and contestability.

So the long-run question isn't only "will agents create congestion?" It's: where do the switches end up, and can they be contested? If the key gates are portable and auditable (real exit, interoperability, credible recourse), agentization can democratize. If they're sticky and opaque, it tends to concentrate.

Short version: Agents change coordination, but power changes institutions. Watch the gates.

Mike VR

PS. I got an even deeper understanding of power by writing this.