8 Comments
Lou D’Alessandro

I enjoyed your essay. I agree with many of the comments and observations you make, especially “societally destabilising”. My perspective is that of someone old enough that when I started engineering school, we were the last group of students to learn how to use a slide rule and the first allowed to use calculators (the TI SR-10 and HP-35), and we wrote programs on punch cards. Anyway, I’m not really retired; I do small IT project management gigs after decades of leading project teams. I’m now re-energized and fascinated by AI. I’m building a very practical advisory app for a very narrow use case, a total opportunity of about 9 million users. It’s stimulating and fun, and I enjoy the learning. But what I’m learning about also concerns me: (1) the impact on society, and (2) the effect on millennials, Gen Z, and beyond. They are not prepared, and as a dad of both millennials and Z’s, I need to help them and their generations become aware of the change and prepare for it.

Keep up the good work, and thanks for sharing. By the way, this is the first time I’ve ever commented on a blog.

Eli Finer

This was one of the best extrapolations of what the post-AI future may look and feel like, and it actually seems prophetic enough to base some long-term decisions on. The details make sense, but more than that, the vibe makes sense. Everything is different, but everything is somehow still the same. It's unrecognizable in the details yet, in some fundamental ways, continuous.

The best part is that we have flying cars but we aren't flying to the moon on a daily basis: some constraints were resolved, others remained, and the entire system stabilized around a new equilibrium.

My personal prediction is that the kind of cowboy analytics the protagonist does will be rare, and that the economy will explode with meaningless paper-pushing driven by bureaucratic and governmental mandates. Those mandates will arise as a response to AI anxiety, but they will remain because they'll become Chesterton's fences that nobody has the guts to knock down. A 70% bullshit-job economy will turn into a 98% bullshit-job economy.

Rohit Krishnan

> This was one of the best extrapolations of what the post-AI future may look and feel like, and it actually seems prophetic enough to base some long-term decisions on. The details make sense, but more than that, the vibe makes sense. Everything is different, but everything is somehow still the same. It's unrecognizable in the details yet, in some fundamental ways, continuous.

Thank you! That's exactly what I could've hoped for!!

Julián

The "every job is the last job" line hits hardest. Knowledge transfer used to scale linearly. You would teach the next person, who would then teach the next person, and so on. Once the machine learns from you, though, the human chain goes extinct. Being the first mover for the AI to learn from becomes a very specific and weird kind of job.

Mike Randolph

Rohit — I’ve been commenting here since the month after ChatGPT launched. My first observation was that memory is the biggest problem with LLMs. Three years later I still think so.

I loved this piece. It took me longer to read than anything I’ve done today, and I’ve been working. You and I have different worldviews — I’m a retired process chemical and IT systems engineer in my early eighties, and I’ve spent thirteen months building a diagnostic framework inside Claude, ChatGPT, and Gemini on three twenty-dollar subscriptions. No API, no programming. Every session resets. Nothing the AI learns persists unless I maintain it by hand in text files loaded into the next session. The capability is real. The memory is human.

Your narrator says “what is done once got done for all time.” That’s the assumption I’d push on. I don’t know where LLMs or AGI are going. Nobody does. But I’m firmly convinced we’re heading into a very different world, and I firmly believe humans will enjoy it. The reason is in that memory gap — the things that matter most still require a human in the loop maintaining them. That’s not a limitation. It’s the architecture.

I recently pointed my framework at Anthropic and OpenAI as a test case. Same classification, very different architectures — the differences show up in boundary control, cost-bearing, and whether their governance actually corrects when something breaks. If you’re interested I’ll send it. It’s short, no jargon, and everything carries a stated falsifier.

— Mike Randolph

Rajesh Achanta

Rohit, this is the best speculative fiction I've read in months. The vibe is right: everything different, everything somehow still the same.

Two details landed hard for me and validated things I've been musing about.

First, "a thin wrapper between my agents and those that want my efforts." I'm publishing a short piece on Monday (Future Tense) that circles the same intuition — work starts to look like Hollywood, agents talking to agents, humans only showing up once terms are right. You've described what it feels like from the inside.

Second, "physical presence" as the last costly signal. When everything can be faked, showing up becomes a scarce commodity. That's the flip side of my piece, teed up for Saturday, on what I'm calling the last-meter economy: the physical world remains undefeated, and new work appears exactly where automation stalls or loops.

Your narrator's exhaustion from "monitoring the drones" is also eerily prophetic. The new jobs won't be easier. They're different kinds of hard.

Also nice to see you back here.

Enon

Good read, but the economic assumptions are all wishful thinking. If robots work well, AI has good world models, and much of the current early-stage AI research pans out, you get super-concentrated wealth: wage labor becomes far too scarce to support the other 99.997% of the population, there is no alternative wage employment, small capital does not earn returns sufficient for subsistence, and so demand collapses. One alternative being planned is universal basic income via programmable digital currencies, which is a worse dystopia than anything in science fiction. The other alternative is to limit the concentration of beneficial ownership of productive capital such as factory machinery, particularly robots, ensuring that the profits of capital are broadly enough enjoyed that false labor "efficiency" does not destroy aggregate demand. The latter is the justification for my "wide-ownership workshop industrial economy" plan: production organized as markets of machinery-use swaps, which back the medium of exchange within local or regional networks of "job-shops" that collectively have the ability to make new machinery (a "seed factory"), machinery that can be bought with loans payable in the machine-use currency. (Invest or dystopia? Choose.)

Your future memoir assumes in many ways that economists _know things_, that they aren't just fluffers for managerialism and all it implies, court flatterers to be rusticated come the revolution. Take the idea that markets are the source of truth, divine oracles whose infinite wisdom will never come close to being equalled even by AIs that can do better than nearly all humans at nearly everything. That's just bizarre market-cultishness that could be cured by exposure to actual markets, in which prices have long been overwhelmingly the result of bot wars. Markets are just about the first thing to be replaced with populations of AI agents, which one hopes will be an improvement over the current dumb bots. But my point is that markets can now be accurately simulated by a single entity with enough computing power to model the human and bot participants, not just in one market but in all the major markets: individual modeling and digital accounting doubles of the big players, plus statistical modeling of groups of smaller traders with common trading styles, each group represented by a small population of typical exemplars distinguishable by trading behavior. With a few tens of thousands of high-resolution agents modeling human decisions many times faster than the actual markets, you can price the things you care about better than the market itself. (Early adopters could make money on the market to some extent, but the principal long-term effect would be to make trading per se unprofitable; markets would return to being real buyers and sellers rather than speculators.)

There's also some stuff about IP in your future memoir: how difficult it will be to exclude others from stealing good ideas, and how even the whiff of a new idea will get well funded. IP is already entirely mythical for all but the very tiny fraction who can afford millions in lawyers' fees. There is a huge backlog of good unused ideas from the past 100+ years, unused at first because they were patented or otherwise "protected" (from being used), and then, ever since the patents expired, unused because they can't be "protected". Thus almost nobody wants to use almost any of the new ideas that have ever been conceived. So another part of my plan for organizing quasi-self-reproducing, self-sufficient manufacturing economies provides an organizational mechanism for distributed, track-record-weighted valuation and compensation of IP, which rolls all IP costs into a fee easily determined at the (automated) design stage, before anything is manufactured. The IP fee is set to be a reasonable fraction of production costs, and the IP valuations and compensations are converted to fractions of the available pool or stream of collected IP fees. The idea is to make reusing IP cheap and predictable in cost, with zero marginal cost to copy information, while ensuring that producing such useful information pays well enough that it keeps being produced in sufficient quality not to retard or smother civilization.