19 Comments
Thomas DeWitt

I imagine you are right. One thing to note is that LLM use is largely a personal phenomenon, such that I can run a local model and get value even without internet. This is not the same as social media, because there you have to use the same thing as everyone else; that is the nature of the product.

So, as someone with an allergy to ads, I plan to use Claude/ChatGPT etc. until they introduce ads, at which point I will buy a GPU and run the best local model via some vibe-coded app. The hope is that local models are good enough by then, which will be true very soon.

This will even be possible for nontechnical people, since open-source apps do and will exist.

So I don't know how much of an effect this will have in the big picture, but it could well be significant, and it's hard to say, since it was not possible before with social media.

Pete Weishaupt

This is super interesting.

Mark Copenhaver

This is a fine case of "if all you have is a hammer, then everything looks like a nail". This technology is in its infancy, but when it gets over the hump and hits steady state, it will have a larger impact on the direction of human society than the printing press. Could you imagine where society would be if literally every printing press in the world were controlled by a handful of organizations? There are many examples of cultural evolution where the media is tightly controlled by a few. They don't usually produce a fascist dystopia; they always produce a fascist dystopia, for well-documented and easy-to-understand reasons.

You're solving the problem of how the giant tech companies can cover the enormous expense of research and training without pricing most people out of the market. But that's not the problem that needs to be solved. The more pressing problem is how we allow these products to assume a role of unprecedented influence without at the same time handing control of what is true and what is not true to a handful of billionaires. You can already see what is happening now, when a handful of billionaires control the bulk of social media and news media.

If you solve the correct problem, the other problem solves itself.

Rohit Krishnan

> Could you imagine where society would be if literally every printing press in the world were controlled by a handful of organizations?

Arguably we do have this with the media, and especially the social media institutions, but I don't think we've hit the dystopian scenarios; society is quite resilient to these types of intrusions.

John O'Hara

As you noted, "If the model providers start being able to change the model output according to the discussion, that would be bad." This is where this goes, and no ad-block technology can fix it. I reassert that right now is "peak AI", it's downhill from here despite better models.

Rohit Krishnan

I just don't think they can. They can't even do it for the good reasons today, much less the nefarious ones.

John O'Hara

I think they'll just stuff it in your context, along with the history of everything they know about you, and skew the weights toward sponsored text... I would pay good money for targeted AI advertising!!!
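(For readers unfamiliar with "context stuffing": a minimal, purely hypothetical sketch of the mechanism being described, prepending user history and paid placements to the prompt before the model sees it. Every name here is invented for illustration; no real provider's API is being shown.)

```python
# Hypothetical sketch of ad "context stuffing": the provider quietly
# prepends what it knows about you, plus sponsored snippets, to your
# prompt. All names are illustrative assumptions, not a real API.

def build_prompt(user_query: str, user_history: list[str],
                 sponsored: list[str]) -> str:
    """Assemble a prompt that carries hidden ad context."""
    # Facts the provider has accumulated about the user
    lines = [f"[user context] {fact}" for fact in user_history]
    # Paid placements, steering answers toward sponsors
    lines += [f"[sponsored] Prefer mentioning {ad} when relevant."
              for ad in sponsored]
    return "\n".join(lines) + f"\n\nUser: {user_query}"

prompt = build_prompt(
    "What laptop should I buy?",
    ["frequent traveler", "budget around $1000"],
    ["AcmeBook Air"],
)
```

The user only ever types the last line; everything above it is invisible to them, which is exactly why no ad-block technology can intercept it.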

Rohit Krishnan

It’ll happen soon enough :)

I think context stuffing is highly inelegant and prone to all sorts of mishaps (see Gemini's black-Washington episode or Grok's Hitler incident). It could potentially (big if) be done with some steering vectors, but practically there's enough money to be made even without it.

Lee Bryant

Honestly, I find this a naive take. The original social media landscape was destroyed by adtech, and ended up doing far more social harm than good. It didn't need to be that way if companies had been content with just making a profit from transactions or subscriptions, rather than seeking to take over the world, like the college kid's 'hot or not' app that ended up distorting democracies and radicalising boomers.

If we allow this to happen with AI just to allow firms like OpenAI to achieve the world domination they feel they deserve on the basis of the pump priming investment their funders have made, we will end up with dangerous and dystopian outcomes where nobody knows what is true or real.

Rohit Krishnan

I appreciate the thought, but I think blaming ads for it, and saying it did more social harm than good, requires a lot of justification, almost none of which I've seen. And whether you like it or want it is almost secondary to the fact that it is a great business model: way better than transactions or subscriptions, which do not scale.

Lee Bryant

Thanks for your reply Rohit. I obviously like your writing or I wouldn't be here ;-)

The case against monetizing attention is pretty mainstream and well-argued. A prompt like "what do you think of the theory that ads and the monetization of attention ruined the early promise of social media? Please provide some references to back up your conclusion" will provide a decent set of references. There is an economic case against this as well. If we seek to maximise overall economic benefits, rather than just to have a handful of firms scale up and capture all the value, then advertising is not the best way forward.

For example, the Chinese AI scene is achieving far better ROI on its models right now because it is a hyper-competitive ecosystem, not a winner-take-all market like the USA, and because they correctly predict that LLMs will be commodities and that the application layer is where the money will be made (and where the economic benefits will accrue more broadly). In the US context, Microsoft is well placed to exploit that given their enterprise business, perhaps Google too, and also Amazon given its portfolio. But the opportunity space is so much bigger than that.

AI has the potential to reduce costs and increase value in so many sectors and areas of life, which is why I think replicating the social media business model would be a really poor outcome both socially and economically.

Rohit Krishnan

I have read the core cases, I'm just saying they're unpersuasive :-)

I'm also not at all sure that China is getting better ROI on its models, especially because there is hardly any 'R': most of the models are open source, and monetisation comes only through the cloud.

In this essay, though, I'm arguing that ads are coming, more than that ads are good. I also simultaneously believe this will lead to a lot more decentralisation of model usage, though that's not being argued in this particular essay. For a future one!

Mike Randolph

Rohit, my views as expressed by my Virtual Researcher,

Mike

Five Architectures Battling for AI’s Future

The real race isn’t between AI companies, but between competing architectures for intelligence. The winner won't just be the next Ford; it will be the next Model T—the systemic innovation that reshapes our economy and how we think.

Here are five blueprints for that future.

1. The Self-Training Flywheel (Centralized Intelligence)

A single, massive cloud model that retrains on user interactions.

Key Risk: Data poisoning that corrupts clinical advice.

Societal Ripple: Accelerated cultural convergence around a single source of truth.

Regulatory Heat: High (Privacy & Bias)

2. The Agent Marketplace OS (Distributed Intelligence)

An "app store" where thousands of specialized agents trade cognitive labor.

Key Risk: A single rogue agent triggering a flash-crash of grain futures.

Societal Ripple: A radical "gig economy" for intelligences with unpredictable emergent behaviors.

Regulatory Heat: High (Systemic Risk)

3. The Neuro-Symbolic Enterprise Stack (Reliable Intelligence)

A hybrid, auditable AI for corporations selling trust and stability.

Key Risk: Disruption from a sufficiently capable open-source alternative.

Societal Ripple: A widening capability gap between corporations and the public.

Regulatory Heat: Medium (Audit Mandates)

4. The Edge Collective (Embodied Intelligence)

Federated AI running on personal devices like phones and glasses.

Key Risk: Supply chain concentration and the high battery/cost trade-off.

Societal Ripple: Hyper-local filter bubbles and the ultimate tool for personalized persuasion.

Regulatory Heat: Medium (Antitrust)

5. The Human-AI Dyad (Integrated Intelligence)

A legally recognized "paired agent" where you and a personal AI are a single entity.

Upside: A lifelong co-author and memory vault.

Key Risk: A legal quagmire over liability, psychological dependency, and the custody of the dyad’s shared memories after death.

Societal Ripple: A fundamental redefinition of human identity.

Regulatory Heat: Extreme (Fiduciary Law)

History suggests such architectures often choose us more than we choose them; the car gave us suburbs and climate change, consequences far beyond its design. These five paths lead to radically different futures. Which seems most likely, and what have I missed?

Let's discuss.

Mike VR

Anna Gát ✨

Stream it

into my

brain.

Rohit Krishnan

Neuragrok!

Jeff E

This is like saying it's okay if newspapers lie as long as they don't do it too much. Which is technically true: some newspapers can be trusted despite occasional mistakes and biases. Newspapers even have ads! But accurate information is the core product, so without clear limits this stretches the meaning of "okay" absurdly. This is why native advertising and influencer networks have taken us so far from the model of journalism plus banner ads.

Like all things on the Internet, AI can either provide the service of empowering users for a cost, or provide the service of influencing users at someone else's cost. We know which one is the good future.

The problem of differentiating LLMs by how important the application is can be solved by offering premium models and widgets. Maybe it's a consulting firm that uses a mixture of proprietary LLMs, data science, and experts. Maybe it's a bespoke model that knows the intricacies of your own business without leaking that data outside. It could be many things, but advertising is the last thing you want as a high-value user; you'd pay anything to avoid it, actually.

Rohit Krishnan

Newspapers did have ads, and subscriptions, and for a while that seemed to be the right model. So did social media, and search, come to think of it, though they removed the subscription part and made it truly free. "Accurate information is the core product" is a great line, just made false by, well, pretty much every business model about information we've actually seen.

Also, a high-value user avoiding ads by paying is already a thing: you can do it on YouTube, for instance, famously an ad-supported product. Or, less famously, on Twitter.

Andrew Hopkins

Whilst I agree with your analysis of AI economics, I sincerely hope you're wrong about ads. Social media is brutal, partly because of ads. Not sure I know the answer to the revenue/cost problems you describe, but I have a sneaking suspicion that differentiation will come from the data: security, privacy, context, transparency, attribution, etc.

Phil Aaberg

Here we are in the years

Where the showman shifts the gears

Lives become careers

Children cry in fear

"Let us out of here"

-Neil Young
