At my old alma mater, there was an internecine fight between the strategy folks and the organisation folks. Strategy folks get asked questions that seem sexy, like should we enter China, or should we start this new unit, or should we buy this company. Organisation folks get asked questions like how we should structure our company so that we can perform better.
The weighty questions that the strategy folks answered usually required a ton of data analysis. What products did we have, what sales did we have, where did we sell what, how has this changed over time, with the hope that through enough hard work staring at and recutting the data you could get to an answer on what the company needed to do.
And as chunks of tasks that could once only be done by arbitrarily talented humans get subsumed by things that can be done by a chatbot, the threads are likely to unravel.
My first project at McKinsey was to help a large airport figure out its parking strategy. Sounds boring, but it turned out parking is how airports make a large chunk of their revenue, and figuring out how to incentivise passengers to drive to the airport and park there mattered a great deal (this was before Uber).
Doing this took three months. It required gathering large numbers of Excel sheets, all from slightly different sources, to collate passenger figures, car usage figures, car usage surveys, various revenue streams and how they changed, pricing changes, macro data and historic travel patterns, all benchmarked against other airports to see what we could expect if we were, you know, really good.
Each of these required multiple rounds of data collection, then cleaning the data into a consistent format, then multiple more rounds to figure out how one variable might relate to another.
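The collate-and-clean step described above can be sketched in a few lines of pandas. Everything here is a hypothetical stand-in, the file contents, column names and renaming map are invented, just to show the shape of the work: normalise differently formatted sources into one schema, then look at how variables relate.

```python
from io import StringIO

import pandas as pd

# Two hypothetical exports of the same underlying data, in slightly
# different formats -- stand-ins for the spreadsheets described above.
source_a = StringIO(
    "Month,Passengers,Cars Parked\n"
    "2023-01,410000,52000\n"
    "2023-02,395000,50100\n"
)
source_b = StringIO("month;pax;parked_cars\n2023-03;420000;53500\n")

df_a = pd.read_csv(source_a)
df_b = pd.read_csv(source_b, sep=";")

# Normalise both sources into one consistent schema before combining.
rename_map = {"Month": "month", "Passengers": "pax", "Cars Parked": "parked_cars"}
df_a = df_a.rename(columns=rename_map)

combined = pd.concat([df_a, df_b], ignore_index=True)

# A first pass at "how does one variable relate to another".
print(combined["pax"].corr(combined["parked_cars"]))
```

In practice most of the three months went into deciding what the consistent schema should be and chasing down why sources disagreed, not into the mechanical merge itself.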
Imagine the ease if you could have just uploaded it all and asked questions. You still couldn’t do the harder part of the job though, which was to convince others of what you thought was sensible, or of what the data actually said, or of which ideas to try.
A large number of white-collar jobs have some commonalities. They require:
Judgement regarding the collection, cleaning and analysis of large chunks of data, and relating it to other collections: in their own heads, in other people’s heads, and sitting in various repositories across a company
Discussing, negotiating and arguing with others who have a similar set of skills about what the right thing to do is at any given time, often acting under extreme uncertainty about both the options and their future outcomes
Dealing with the large numbers of people who all have parts of the answer, and many more opinions, and coordinating them all to get to the answer. This also includes the work needed to compile and explain whatever the decision is to those higher up, lower down, parallel or adjacent in an organisation, and to get the “yes”.
These don’t just exist as vestiges of bad data collection or insufficient analyses. They exist so we have a way to corral our uncertainties.
Each aspect of the work above is another check on the “sense” of taking an action, tested against the data. It is an implicit argument that many eyes make all bugs shallow, or at least that they reduce the risk of taking any action.
This is also true of the last step, where discussion, negotiation and presentation elicit chunks of tacit information that often wouldn’t see the light of day otherwise, since they’re not included in any data collection or analysis.
In fact, imagine if you could have it: a thousand AI agents who do specific tasks at your bidding. They can do all of the above, collecting information and curating reports and cleaning data and helping make decisions. They negotiate with each other and argue their points of view so that we, from the outside, can use their argumentation to get to a semblance of truth.
That’s basically what organisations are.
And parts of the roles that employees play inside organisations will seismically shift, now that analysts are seeing parts of their jobs made redundant.
The difference today is that questions similar to “hey, here is a bunch of oddly formatted data, what should we do about it” are being, in many cases, made obsolete. It’s still baby steps today, and can only really work with small and clean datasets (which don’t really exist in many places in the real world). But it will evolve, and slowly eat away at more problems which still need more context or which need different types of data.
But you know what remains? The necessity of asking good questions, the importance of recognising incorrect answers, and the utility of knowing what to automate, how, and with what. These are the critical questions.
We all have gained the ability to become the Pointy Haired Boss from Dilbert, but the most important character trait of the Pointy Haired Boss isn’t that he is the Boss, but that he is incompetent.
If instead we want to be better, to be productive that is, then we need to know how to use these tools.
This means knowing, at a minimum:
How to search for information and pre-existing analyses, and know when the answers are wrong
How to set up the new tools so that for a minor upfront cost you save 30% of your time going forward
How to use that time, and freed up resources, to create even more new things to satisfy both our commercial needs and human wants
A large part of the work that goes on inside organisations is about stitching together context and extracting insight from the nous in the ether, or figuring out which analyses make sense to do in order to figure out which questions to ask, both of which are moving targets.
Which also means you’d need to know how to coax good analysis out of whatever AI systems you’re using: the art of prompting LLMs, knowing how to use Copilot to code better, using Chat to educate yourself about things both practical and incredible, and understanding how your existing datasets need to be constructed or configured to take advantage of the tools.
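One small, concrete version of "constructing your data to take advantage of the tools" is deciding what to put in the prompt. The sketch below assumes a workflow where you hand a model a compact summary of a dataset rather than raw rows; the function name, the example columns and the prompt wording are all hypothetical, not any particular product's API:

```python
import pandas as pd


def build_analysis_prompt(df: pd.DataFrame, question: str) -> str:
    """Condense a DataFrame into a compact, model-friendly prompt.

    Sending the schema plus summary statistics, rather than every row,
    keeps the prompt short while still giving the model real context.
    """
    schema = ", ".join(f"{col} ({dtype})" for col, dtype in df.dtypes.items())
    stats = df.describe(include="all").to_string()
    return (
        f"You are analysing a dataset with columns: {schema}.\n"
        f"Summary statistics:\n{stats}\n\n"
        f"Question: {question}\n"
        "Describe the analysis you would run and why."
    )


# Hypothetical airport-parking data, echoing the example earlier in the piece.
df = pd.DataFrame(
    {"pax": [410_000, 395_000, 420_000], "parked_cars": [52_000, 50_100, 53_500]}
)
prompt = build_analysis_prompt(df, "Does parking revenue track passenger volume?")
print(prompt)
```

The interesting judgement calls sit outside the code: which columns are worth summarising, which question is worth asking, and whether the answer that comes back is wrong.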
Just like with self-driving cars, it isn’t enough for the automation to be just better than the average human, because we’re looking to remove ourselves from the equation. It has to be much, much better. It has to be collectively better than us at our best.
It is inevitable that jobs as we know them will get replaced or displaced by new tools. The latest crop of AI tools is no different. It is also true that, so far, every wave that destroyed jobs created many more.
Can we “know” that AI will do so too? No. If indeed it doesn’t happen, we would have made a step towards the post-scarcity utopia Star Trek painted, and we would have to change our society to take advantage of that fact.
For anything that is still scarce, the economy continues much as it does today, with the need for talent, the fight to be atop the pile, and the relentless human drive to create and accomplish miracles.
This is the very definition of a good problem to have! We should be so lucky.