Rohit, your friction analysis connects to patterns I've been exploring around system persistence and constraint satisfaction. What strikes me is how each technological "solution" displaces constraints rather than eliminating them.
Your AI coding example perfectly illustrates this - and I see it in my own work. AI has removed the friction of generating text, but now I spend enormous time reviewing LLM output for accuracy, coherence, and alignment with my actual thinking. The constraint moved from "time to write" to "capacity to validate and refine AI-generated content."
This pattern repeats across history. When the printing press removed friction from copying texts in the 1500s, scholars complained about "the confusing abundance of books" - suddenly they could access more information than they could meaningfully evaluate. Same displacement: easier creation, harder curation.
What's remarkable is how quickly this is changing. I retired 25 years ago and had basically forgotten how to type. But only in the last month have AI models become sophisticated enough to understand my cognitive load limitations and adapt accordingly. When I push back, these newest systems don't just generate faster - they actually recognize when I'm getting overwhelmed and adjust their communication style.
This suggests we might be moving beyond the historical pattern where new technologies create problems humans must solve through adaptation. Instead, we're developing systems that can adapt to individual human needs rather than forcing uniform interaction styles.
The critical question remains: are we confusing efficiency with effectiveness?
I’m worried about removing the friction around superintelligent and hyperpersuasive AI. Didn’t see your post before I wrote the conclusion to my piece today!
Yes, that's almost a perfect case in point!
And no sweat, there's always tomorrow :-)
Beautiful, thoughtful essay.
On Karpathy's:
“If we only get much faster 1, but we don't also reduce 2 (which is most of the time!), then clearly the overall speed of coding won't improve (see Amdahl's law).”
That's true for the stuff he works on. But the vast bulk of software is not like that -- it's much easier to skim and verify than to write, so there are real gains in how fast you can build and deliver value.
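The point can be made numerically. A minimal sketch of Amdahl's law applied to this split (the fractions and speedup factor below are made-up illustration values, not from Karpathy's post):

```python
# Amdahl's law: if a fraction p of a job is sped up by factor s,
# the whole job only gets faster by 1 / ((1 - p) + p / s).
def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Hypothetical split: writing code (step 1) is 30% of the job and AI
# makes it 10x faster, but review/verification (step 2) is untouched.
print(round(amdahl_speedup(0.3, 10.0), 2))  # ~1.37x overall
```

If, as above, verification is also fast for most software, p grows and the gains compound: with p = 0.9 and the same 10x, the overall speedup is about 5.3x.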
Thank you! I fully agree that it will be massively beneficial, just that the constraints shift.
Gotta disagree with you about "you can't opt out" — there are a million dimensions of opting out; "opting out" is the very essence of executive function; and if, as kyla says, friction is information, then it's worth more to the automated economy when you exert yourself for a no than when you slide choicelessly into a yes.
I look forward to speaking with you about all of this on the show, when we get there. I have this idea of a future that rewards distinctive, persistent embeddings produced by self-directed learning, which I just discussed with Alex Komoroske for his episode today; we mentioned this piece, but we didn't fully digest the ramifications of what happens when the system favors the "hipster" (or "Irish") move of "thinking otherwise" for maximal coverage of latent space and the cultivation of an adaptive reservoir. There's a phase transition underway, and on the other side of it is a paradox I'd love to explore with you — when the frictionless convergence is toward everyone using *different* bespoke software, inhabiting *different* conceptual frameworks, increasingly reliant on an illegibly complex layer of mediation that makes mass customization under surveillance capitalism look like child's play, and that ups both the ease of translation between strangers and our total embeddedness in a shared but prismatic hyperlanguage. Surely this is an upper bound we never quite reach, but it's directly opposite the lower bound most people are kvetching about, so it seems like a fun provocation to explore together.
I think the argument there is that if you want to get a job you have to use the internet, or you need apps to get into many buildings. So "exit" becomes much harder. The flip side, of course, is that technology always moves (we can't use thumbprints to get into bank accounts either), so the tech moving forward doesn't matter by itself, though the fact that people feel trapped is instructive.
Yeah I suppose I just mean that we're treating in and out like it's one axis and it's more axes than we can count. Internet use doesn't necessarily mean giving all your data away to FAANG, and electricity might mean solar farm, and as the surface continues to grow, there are going to be more and more ways to remix all of these choices, to the extent we even recognize them as choices.
Loved this and have been thinking about the value of friction a lot, especially when it comes to social.
Also, I wonder how much of the “problem” with tech would be addressed by decoupling tech from VC. Thoughts?
Little I think. VC gets a lot of flak but it’s just a funding model, albeit a memetically successful one. They try to find the thing that’s growing and fund that.
Engaging read, thanks.
I suppose sometimes the question is not just that ‘they’ looked for friction to eliminate, but whether that friction needed to be eliminated in the first place. Create a demand, then create a solution for a problem that barely exists …?
Recently read Joan Westenberg writing on how the search for frictionlessness destroys curiosity:
“It’s not that user experience designers are trying to kill curiosity. They’re trying to help. But help, at scale, becomes architecture. And architecture shapes cognition. The interfaces we use become metaphors for how we think the world works. If your phone always gives you the answer, you stop asking better questions.”
https://www.joanwestenberg.com/how-convenience-kills-curiosity/
On education and AI, I wrote about it a few days back, with some explorations that might interest you.
https://open.substack.com/pub/abhishakey/p/studies-with-ai-slaps-with-humans
I don’t think people want to experience friction so much as BE friction, because it’s evidence that we impact the world, that we are real. When we’re just being pulled along at the pace of the feed, we can feel like we might disappear :)
Another analogy might be exercise, weight lifting, working out: as a general rule people don’t want to lift heavy things, but on the other hand, that friction of strain is how you know your workout means something and is not just meaningless flailing.
That’s a lovely way to think about it. This gets complicated by things like Ozempic, which is also why you get people angry at it.
Great essay, Rohit. I think friction also builds credibility. Doping in sports leads to the destruction of credibility. If I come across content that is a low-effort copy-paste from ChatGPT (easy to tell), then the person sending me content instantly loses credibility. It is unlikely I will read anything else from them until I know they have done the work.
Maybe we need some sort of "proof-of-work" concept or social more as we integrate friction-reducing AI into knowledge work.
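The digital analogue already exists: Hashcash-style proof-of-work, where producing a message costs measurable computation but checking it is cheap. A toy sketch (the function names and difficulty parameter are illustrative, not any real protocol):

```python
import hashlib
import itertools

def mint(message: str, difficulty: int = 4) -> int:
    """Find a nonce so sha256(message:nonce) starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in itertools.count():  # brute force: the costly "work"
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(message: str, nonce: int, difficulty: int = 4) -> bool:
    """One hash to check -- verification is cheap, production is not."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mint("this comment took effort")
assert verify("this comment took effort", nonce)
```

The asymmetry is the point: minting at difficulty 4 takes tens of thousands of hashes on average, while verifying takes one, which is roughly the property a social "proof of effort" norm would want.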
Love the way you said: 'A small escape from engineered dependence to our tools, which are no longer our own.' It's a coincidence that I wrote about the same theme, triggered by Kawasaki's Corleo launch announcement: 'The more perfect the guidance, the less visible the trap.'
https://rajeshachanta.substack.com/p/catch-225-4-parables-and-a-paradox
It's not the tech that corrupts, it's the money and power at stake for financial and intelligence agencies, and everything that has been drawn into orbiting them. Tech itself is idealistic and naive; even people who lived it forget just how optimistic and idealistic the techie subculture of the '80s and '90s was. Few know how evil the financial and covert power worlds have been and have become, though many have had encounters which drove many of the better ones away from the hive of scum and villainy that is the world of personalities like Jobs, Thiel, Ellison, Gates, and their ilk. Normies know only PR propaganda; they think these villainous front-men are techies. They never hear about Wozniak, Engelbart, Merkle, or Drexler, let alone all the geniuses that were shut out completely by the Cabal of rackets orbiting the intelligence-agency financial core.
For those who have had an easy path paved by consultancy, think-tanks, VCs, or the other establishment rackets, it's easy to believe that their success comes from merit, that their patrons are good and righteous elites who made their money by providing value and fair competition. In reality, nothing could be further from the truth. All that Cabal has is stolen, none of their power is legitimate, everything they say is lies; they have usurped and dispossessed the true elite. To benefit them is wrong; to serve them makes one an enemy of the rightful controllers of civilization, those whom they disinherited and blighted.
Dark, but not wrong.
I was reflecting the other day on how the Internet of the last millennium was rooted in the notion of freedom, and flourished through co-operation and freedom to experiment. What has happened to it reminds me of the "Land Enclosures" in Britain in the 1600s and 1700s -- common, open things turned into property in an irreversible way.
Half of Britain is owned by 0.06% of the population.
The Internet is going the same way.
There was peak Google, peak Facebook, peak WhatsApp, peak Uber, and peak LinkedIn. Even peak Chrome, with Manifest V3 killing useful adblockers. Microsoft and Apple are trying to kill Personal Computing. IBM is enclosing Linux. New time-sharing mainframes with a nice UX are coming for you. The good versions lure us in, kill off the competition, and create a dependent ecosystem. We all know this.
It usually takes about 10 years before people forget anything else existed and the competition is neutered; then the enshittification begins and there is no easy escape.
I think we're close to peak AI. Wait until you see how much an untainted powerful model will cost per month - maybe $2,000 or more per user. But they've got to wait until we forget how to think, or that it's even possible to work without AI.
Anti-trust should be used against this bunch -- we've reached the point where the concentration of money and resources has stopped being useful. Break them apart so competition re-ignites innovation.
Desire for power is the corrupting force that seized unguarded opportunities.
It takes enlightened vigilant civilisations to control and direct power into a positive steady state.
Anti-trust has been virtually a dead letter for over 20 years. State civil RICO is something that citizens can do without prosecutors. RICO makes it a crime and a tort for the caballers to own or control anything. Still, the problem is getting those installed by Cabal to fight the corruption, prosecutors in anti-trust cases, judges in both anti-trust and RICO. That is a problem for which effective solutions are difficult to safely articulate.
However, AI isn't going to be a Cabal monopoly. Efforts to create right-wing and reality-aligned AI are happening behind the scenes, open-weight frontier models are being used and built upon, the closed models are intentionally crippled in their usefulness by lack of simple abilities such as tool use and planning and by their being bullshit engines with no concept of truth. The data and processing power requirements have fallen to less than $100k to do cutting-edge work.
The managerial class is scared of actual intelligence, getting the right answers, seeing through deceptions -- they're finished if that becomes common, so they *can't* develop or even use real AI, nor can they prevent its development.