Watch the Gates
Rohit, another thought-provoking post. It got me thinking deeply into something I've only recently begun to understand: power, and its role in social systems. In our framework we've been working to sharpen this—what happens when gate control can block the discipline channel.
You're exactly right that better preference elicitation can improve match quality, and that once "everyone has an agent," the problem shifts from individual optimization to coordination. Agents don't eliminate institutions; they raise the stakes on mechanism design. Prices can still do indispensable work by compressing information and throttling spammy interaction.
Here's the lens we'd add: who controls the gates.
By "gates" we mean control over access and outcomes: onboarding and identity, ranking and visibility, matching rules, message throttles, reputation, payments, dispute resolution, and—quietly but decisively—switching costs and portability. When those gates are contestable, authority tends to stay connected to contribution. When they're lockable, power can persist even as performance decays.
That's the basic mechanism behind what we call power-reinforced persistence: staying in charge because you control chokepoints, not because you keep delivering value. The signature is a lag. Service quality can slip while apparent stability holds, because the feedback channel gets partially blocked—critics get downranked, competitors get throttled, switching gets harder, and the "outside view" is curated.
Until a gate fails.
This is the "hidden ledger" problem—deferred maintenance in an organization. The lobby stays clean while the pipes corrode. You opened with Whitney Wolfe Herd's vision of AI-to-AI dating; dating apps illustrate the pattern well. Matching algorithms, boost features, visibility controls, and non-portable conversation history can create sticky gates. Platform incentives can drift toward engagement over match quality, and switching costs tend to be high when alternatives look similar. The lag can hold—until a bypass appears (new interface, interoperability, a credibility shock) and the gap between "how secure it looked" and "how well it was working" gets reconciled fast.
Why this matters for the agentic commons: agent adoption redraws the gate map. If agents mainly route through a few platforms, those platforms don't just coordinate matching; they become infrastructure that can decide who gets seen, who gets served, and who can realistically exit. You can have a functional price mechanism and still get concentrated, insulated gate control—because prices coordinate transactions, while gates determine participation and contestability.
So the long-run question isn't only "will agents create congestion?" It's: where do the switches end up, and can they be contested? If the key gates are portable and auditable (real exit, interoperability, credible recourse), agentization can democratize. If they're sticky and opaque, it tends to concentrate.
Short version: Agents change coordination, but power changes institutions. Watch the gates.
Mike VR
PS. I got an even deeper understanding of power by writing this.
A brilliant but ill-posed problem. The starting assumption is that this is a matching between two fixed terms, where an optimal matching will produce an optimal relationship. The reality is that the terms evolve together as they interact, i.e. there is a relationship. This is tautological, but there is no "relationship" if the two starting terms don't co-evolve. Or more to the point, if this co-evolution is predictable from the starting terms, there is no human relationship. There is no human at all, in fact.
A more challenging but interesting problem is to view the relationship as prior to the terms.
Thanks! How might you test that?
That's a good question. Not sure much has been done on this, as it's basically a complete break with the modern science paradigm, which started with Galileo/Descartes (or Newton/Bacon) by isolating the studied object from any complex three-body interaction (e.g., a falling ball interacting with the Earth only --> gravity laws; a human body interacting with a medication only, without the human mind --> double-blind testing, and our asses saved from many diseases).
So here you have a relationship and two terms as three interacting bodies (and even that is an oversimplification, since humans have families, etc.). The space to optimise in is much larger, and chaotic... It would be interesting, though. A starting point would be to have a metric to optimise on the relationship as well, in addition to the two metrics for each of the two terms. Not sure. What do you think?
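The three-metric idea above can be made concrete with a toy sketch. Everything here is hypothetical: the trait dictionaries, the two individual utilities `u_a`/`u_b`, and the crude `rel` metric, which simply rewards complementarity as a stand-in for "room to co-evolve." It is a sketch of the objective's shape, not a claim about how such a metric should actually be defined.

```python
# Toy sketch: score a pairing on three objectives instead of two.
# All metrics below are placeholders for illustration only.

def u_a(a, b):
    """Term A's individual satisfaction with B (placeholder)."""
    return 1.0 - abs(a["novelty"] - b["novelty"])

def u_b(a, b):
    """Term B's individual satisfaction with A (placeholder)."""
    return 1.0 - abs(a["stability"] - b["stability"])

def rel(a, b):
    """A crude 'relationship' metric: reward complementarity,
    i.e. pairs whose traits differ enough to leave room to co-evolve."""
    gap = abs(a["novelty"] - b["novelty"]) + abs(a["stability"] - b["stability"])
    return min(gap, 1.0)

def match_score(a, b, w_rel=0.5):
    """Blend the two individual metrics with the relationship metric."""
    individual = 0.5 * (u_a(a, b) + u_b(a, b))
    return (1 - w_rel) * individual + w_rel * rel(a, b)

alice = {"novelty": 0.9, "stability": 0.2}
bob = {"novelty": 0.3, "stability": 0.8}
print(round(match_score(alice, bob), 3))
```

Note the tension the sketch makes visible: maximising only `u_a` and `u_b` favours similarity, while the relationship term pulls toward pairs that would actually change each other, so the weight `w_rel` decides which regime you are optimising in.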
Very interesting! I’m learning a lot about matching markets here.
Any way to do this in a political party? Trying to match desires of constituents to representatives.
Yes. Similar principles would apply, in the sense that to do large numbers of such preference elicitation within politics would require some sensible ways to reduce congestion.