On thinkers and doers
Why we might need something like tenure for many more people, to cultivate more thinkers in a doer's world
Science is organized knowledge. Wisdom is organized life.
Scientists investigate that which already exists; Engineers create that which has never existed.
Theodore von Karman
The technologist is different from other people in that he does not accept what is given but wants to find out for himself how things can be done better.
W. Edwards Deming
There are thinkers and there are doers.
Oftentimes the thinkers do things, and doers have to think about what they’re doing.
But the distinction is helpful, as dichotomies so often are, to illustrate something fundamental about the way we approach the world.
One place we see this is in the endless conversation around the difference between science and technology. We draw arbitrary distinctions and causal arrows between them to argue over what creates what. Did the creation of tooling enable the development of science through better measurements? Did the development of the theory of electromagnetism enable the development of communication cables?
We’ve had technology development inside science labs, like Bell or even DARPA, and we’ve had science development inside technology companies, like most of AI.
Qualifying the differences is easy at the extremes - the theory of relativity emerging from a patent office vs PageRank being developed at Stanford - even though the locales are the opposite of what you'd expect. Towards the middle it starts getting muddled - Shannon's information entropy, von Neumann's work on implosion-type nukes, the creation of the internet.
Thomas Kuhn tried to crack the issue by looking primarily at science, delineating between the "big leaps" that create entire new paradigms and the "little leaps" that cumulatively colour in the picture after a big leap is completed.
One difference: scientists tend to care about the impact of their work, whereas engineers tend to care only whether their solution is practical. A great example is the early development of aircraft, where the Wright brothers were clear that they wanted something that could be used for travel, not just for war.
Another key difference seems to be motivation - scientists are motivated by discovery and understanding, whereas engineers are motivated by application and use. Scientists often tend to develop technology that is used by other scientists in their research, whereas engineers develop technology for use by people in the real world. Of course, this isn’t always the case - plenty of scientists are motivated by application and use, and plenty of engineers are motivated by discovery and understanding.
Technologists are always looking for ways to make things better, faster, easier or cheaper. It's what we do.
But there is yet another difference, not exactly clear-cut, but at least somewhat translucent.
For science to progress you need the great imaginative leaps that take you from one paradigm to the next. And whether you believe ideas are finite fruits to be consumed, in the great man/woman theory of idea capture, that genius is a tutelary deity that visits you, or that continuous error correction is our glorious future, you still want the feeling that the theory of relativity, the discovery of DNA or the theory of evolution gave us: of having finally cracked open the door to a new part of our universe.
Scientists tend to develop technology that is unique, and engineers tend to use existing technology in new ways. Scientists invent the wheel as they need it - moving from Aristotelian physics to Einsteinian physics means creating a completely different set of tools for research. Engineers don't always have the time, budget or patience, so they will re-use what already exists where possible, sometimes adapting brilliant hacks from other disciplines to solve their problems - like using search algorithms on huge data sets, or controlling satellites with joysticks designed decades ago for assessing human strength in cars.
The technologist on the other hand looks around and sees what exists around him, and uses that to create the future, taking the tools to their logical conclusion.
This is also Elon Musk’s legacy. He’s a doer.
Elon looks at something he wants to accomplish, and as long as existing knowledge can theoretically create what he wants, he acts as an individual Schelling point for money and talent to coalesce around, and creates it.
When you're looking at this from the outside, it can seem like magic, and he our generational genius. Seen from the inside, I imagine it feels more like taking a bet on doing things others haven't really done, and attempting them so often and so well that eventually you can do them.
Which is why he has undeniably succeeded in doing incredibly hard things, things that many (most) said were well nigh impossible.
Built an EV company, operating at scale
Produced batteries for those cars at scale
Launched brand new self-made rockets to space, at scale, including landing them on a ship when they fell back from the sky!
Bored tunnels cheaply
These are incredibly hard things. And he brought people, talent, smarts, ingenuity, resourcefulness and oodles of capital to make them a reality!
The things he has not done, for which he gets flak, are areas that don't depend purely on "doer" energy - they require thinkers, some sort of step change in our abilities.
Autonomous driving is still wildly incomplete/buggy/murderous, depending on whom you ask
Brain-computer interface to let us think to our laptops barely works
And alas, we know of no way to throw resources at one end and get thinkers at the other end.
The real difference between the two is that iron willpower and huge sums of money could move the first set into reality, because it did not require fundamental paradigm shifts. Whereas the latter requires big leaps to be made, and the leap, being invisible to us beforehand, can’t be created at will with just money and effort.
Note that these are equivalent to the "normal science" model of cumulative progress and conceptual continuation - climbing up the steeper parts of the S-curve rather than creating something wholly new.
A good scientist is a person with original ideas. A good engineer is a person who makes a design that works with as few original ideas as possible. There are no prima donnas in engineering. ~Freeman Dyson
It takes uncommon skill and ability to push that we’ve made to its extremes. The mental clarity to see where we are and how far we can go. The rare courage to actually do what others only talk about.
Elon took risks that no one else was willing to take because they were too afraid, but it's not just that. After all, Blue Origin was founded two years before SpaceX. He actually pushed through with insane speed, bringing capital and talent together to do things.
And sometimes those risks pay off handsomely, as with Tesla and SpaceX. And sometimes they don't so much, as with SolarCity and The Boring Company.
This isn't true just for a singular industrialist; it's a microcosm of what we've been through as a society.
An example where we've collectively put in uncommon levels of effort to make progress is microchips. Consider Moore's law, where the quest for faster and smaller chips led to a near doubling of transistor density every year or two, making possible the laptops we have today. But it was not a paradigm shift - it was making what we had better, faster, cheaper.
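The compounding in that near-doubling is easy to underestimate. A minimal sketch of the arithmetic (the 1971 Intel 4004 starting point, its rough transistor count, and the flat two-year doubling period are illustrative assumptions, not figures from the text):

```python
# Illustrative compounding under a Moore's-law-style doubling.
def moore_multiplier(years: float, doubling_period: float = 2.0) -> float:
    """Density multiplier after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Assumed starting point: Intel 4004 (1971), roughly 2,300 transistors.
multiplier = moore_multiplier(2021 - 1971)   # 2**25 over 50 years
projected = 2_300 * multiplier               # tens of billions of transistors
```

Fifty years of "better, faster, cheaper" multiplies density by about 33 million, which lands in the right order of magnitude for today's largest chips - no new paradigm required.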
The other category is true paradigm shifts - where there has been no way to do something up until now because we lacked the tools, or more often than not, the imagination. These are by far the rarer, though not harder, things to achieve. Whether that's powered flight (the Wright brothers) or the splitting of the atom (Rutherford and his students).
You could quibble about whether these were really technological achievements as much as scientific ones - but I think that misses an important point which is that all of them required technology in order to be achieved.
The height of technologist power is to take the scientific inventions created thus far and squeeze all the juice out of them. An embodiment of what David Deutsch said:
Everything that is not forbidden by laws of nature is achievable, given the right knowledge.
Elon, seen this way, is the Einstein of technologist doers. There are many who do similar things, but he's the one who pushed the boundaries.
What this tells us is that there clearly is a societally acceptable way to go solve problems that everyone thinks are too big. That enough capital, focus and dedicated effort can swing the pendulum.
But solving climate change via geoengineering? Better housing? Creating an alternative to meat? Education at scale? Supersonic jets? These seem amenable to technological solutions without requiring fundamental scientific breakthroughs.
It's possible to do all of these, and be seen as the modern archetypal genius. All it costs is this.
And while we have done pretty well with respect to doers in the last few decades, we’ve done much worse with thinkers. Our world, since the second half of the previous century, has in fact moved to being mostly about doers and not thinkers.
And these are the doers whom we regard as geniuses in our modern era.
This is what Erik Hoel called out as a failing of the education system. It’s also been blamed on our general move towards decadence (Douthat), lack of investment in energy making us optimisers (Josh Hall), needing an anti-PhD program (Dwarkesh), the tenure system, sclerosis of academic institutions, sclerosis of our whole culture, focus on never being wrong and many many others besides.
What's undeniable though is that Science has had this shift as well, moving from thinkers to doers.
If you look at the big discoveries of the past few decades, there's a preponderance of doer energy rather than thinker energy. Examples here, from my friend Sam and others: the development of vaccines, the mapping of the human genome, the discovery of new subatomic particles at the LHC, the Sloan Digital Sky Survey, the Census of Marine Life (both the Sky Survey and the Census funded by the Sloan Foundation), the Framingham Heart Study, LIGO.
These are all examples of bringing together resources like money and talent and willpower to do hard things. They are testaments to the wonders we can accomplish.
These are also, almost categorically, not examples of inspiration striking someone and paradigm shifting scientific discoveries.
These are exhibits of doers making things happen within an existing paradigm. And the reason this is weird is simple: 90% of all scientists who have ever lived are alive today.
At the turn of the previous century there were about 10 physics PhDs awarded per year. This was the time of Einstein, who created his special theory of relativity immediately after submitting his doctoral dissertation, while working (happily) as a patent clerk. By 1930, around the time John von Neumann completed his doctoral dissertation in mathematics, there were around 100 PhDs awarded each year. Today it's around 5,000.
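Those counts imply a steady compounding, which a quick back-of-the-envelope makes concrete (the endpoint years 1900, 1930 and 2020 are my approximations of "turn of the century", "1930" and "today"):

```python
import math

def annual_growth(n0: float, n1: float, years: float) -> float:
    """Compound annual growth rate taking n0 to n1 over `years` years."""
    return (n1 / n0) ** (1 / years) - 1

early = annual_growth(10, 100, 30)       # 1900 -> 1930: roughly 8% per year
overall = annual_growth(10, 5000, 120)   # 1900 -> 2020: roughly 5% per year
doubling_time = math.log(2) / math.log(1 + overall)   # about 13 years per doubling
</imports_placeholder_removed>```

A field whose population doubles every thirteen years or so is, almost by definition, dominated by people trained entirely inside the current paradigm.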
I think it all changed with the big bang event of big science: the Manhattan Project. This was when science was yoked to big expenditure, a clearly articulated goal, and the gathering of the best minds to solve a unique problem. Its success begat a thousand other attempts at the same thing.
On 9 October 1941, President Roosevelt approved the atomic program after he convened a meeting with Vannevar Bush and Vice President Henry A. Wallace. To control the program, he created a Top Policy Group consisting of himself—although he never attended a meeting—Wallace, Bush, Conant, Secretary of War Henry L. Stimson, and the Chief of Staff of the Army, General George C. Marshall. Roosevelt chose the Army to run the project rather than the Navy, because the Army had more experience with management of large-scale construction projects.
Science was no longer passion projects that patent clerks messed about with in their spare time, or interesting offshoots from preternaturally smart aristocrats, or the necessary consequence of tinkerers and engineers trying very hard to build something anew. It was a golden age.
The period 1950-1970 was a true golden age for American science. Young Ph.D's could choose among excellent jobs, and anyone with a decent scientific idea could be sure of getting funds to pursue it. The impressive successes of scientific projects during the Second World War had paved the way for the federal government to assume responsibility for the support of basic research. Moreover, much of the rest of the world was still crippled by the after-effects of the war.
Its outgrowths include the giant research laboratories inside big businesses, the 17 national labs of the US Energy Department, large experimental setups like the LHC, new government agencies like the National Science Foundation to redirect funding, and many many more.
The fundamental conceit was that through careful planning and talent curation we could project-manage and spend our way into the next era of innovation. It turned out that this unleashed a whole different slew of innovation than the one we'd envisioned.
And this was enormously successful! I've consistently maintained that we're dealing with the fruits of that success, not the failures of an attempt. But in addition to the benefits of these investments, including the internet and experimental fusion reactors, we are also faced with exceedingly sclerotic institutions, bureaucracy and the general ossification that comes at the other end of institutionalisation.
And what we’ve lost in the post Manhattan Project world is little science, the actual home of the thinker.
Little science is science as practiced by pretty much every major scientist a child studies in school - Einstein, Newton, Maxwell, even Tesla and Edison. Little science is the science of artisanal tinkering, of interdisciplinary exploration with no agenda, of following your curiosity without needing to get permission, of small experiments done in a garage.
Whether in Feynman's Cargo Cult Science, or in the ideas about creativity in science espoused by everyone from Asimov to Einstein, little experiments are what give rise to the expansion of an entire field. As Hamming said in 1986, at Bell Labs:
When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn't the way things go.
And we aren't encouraging the planting of those little acorns at all. Moreover, we're actively getting in their way.
Those today who sit at the top of the thinker sphere, say Terence Tao or Ed Witten, are decidedly not household names the way their counterparts used to be. I don't know who's doing pioneering work in molecular manufacturing, but chances are someone is, and their theoretical work will create breakthroughs in a decade or five. But we seem to have forgotten how to give props here.
We have systematically discouraged ambition just as we have institutionalised it. We set grand goals and attempt to reach them through institutional heaviness. We try to force work onto problems that seem important rather than working on problems where we have a chance of making a difference.
It's not the consequence that makes a problem important, it is that you have a reasonable attack. That is what makes a problem important. When I say that most scientists don't work on important problems, I mean it in that sense. The average scientist, so far as I can make out, spends almost all his time working on problems which he believes will not be important and he also doesn't believe that they will lead to important problems.
In our hunt to solve the big problems we have created big machinery to attack them. And in doing so we've lost track of the little acorns that might grow into big oak trees in the future.
The answer is not that we don’t need Manhattan style projects anymore. It’s that they should not be the only type of undertaking.
We should absolutely have large scale projects to achieve goals where we have a vector of attack. Like the human genome project, or longevity moonshots, or BARDA. But we also need to reinvigorate the bottom up curiosity seeking innovation that actually helped push science forward.
We've moved past the era of big science, itself emergent from the era of citizen science, and arrived at a time that demands a return to little science. We have spent the best part of half a century shoring up the baseline for science, and have seen remarkable success in doing so, but now we're hitting the limits of that strategy.
Unlike the good old days, when scientists weren't plentiful and following one's curiosity was sufficient to travel to the limits of human knowledge, what is actually needed now?
Well, it's obvious. We should encourage individual curiosity, worry less about individual bad studies, and let people try to accomplish things the way we used to.
Turns out academia already has a solution to this. It's called tenure. And I think we should open it up to many more people.
The whole reason tenure is interesting is that it's one of the only creative professions where job security is assured up front, as a way to encourage higher risk-taking. Does it still work as intended? The jury is most definitely out on that question, but it did work in the past.
One reason it doesn't seem to work any more is the gamesmanship required to get it in the first place, which substantially undermines the point of having gotten it.
Historically, the folks we've talked about got their leisure through a combination of secure social stature and sufficient income to devote to their thoughts.
What we need is to accept that the education and efforts of the past several decades have done their work, stop trying to make everyone conform to a known path, and instead encourage much more "explore" versus the pure old "exploit". We've succeeded beyond our wildest dreams on doing, and it would be helpful to let the pendulum swing a little back towards thinking.
Instead of a lifetime appointment, which might very well be prohibitive, we could have, say, five-year appointments available by the hundreds for smart people to spend their time on. Getting in should be roughly as hard as convincing a group of relatively smart folks to give this person a chance - not through onerous application processes or credentialism, but through an (independent and double-blind) method that has three characteristics:
An idea of what to work on
A reasonable path of attack
A list of folks they think will work with them
Not just for twenty-year-old university students or grizzled academia veterans, but open to everyone. Let the opsimaths bloom just as much as the precocious young polymaths.
The single biggest commonality that every thinker you’d consider as a generational genius has had is room to think. Sometimes enforced, like with Newton and the plague years, sometimes chosen, like Darwin and his extended voyages, and sometimes through creative work life balance, like Einstein during his patent clerk years. Regarding Darwin in particular, looking at his daily habits, we find:
his days don’t seem very busy to us. The times we would classify as “work” consist of three 90-minute periods. If he had been a professor in a university today, he would have been denied tenure. If he’d been working in a company, he would have been fired within a week.
It’s the same with Henri Poincaré, with his 500 papers and books spanning most aspects of mathematics, physics and philosophy.
Toulouse noted that Poincaré kept very regular hours. He did his hardest thinking between 10 a.m. and noon, and again between 5 and 7 in the afternoon. The 19th century’s most towering mathematical genius worked just enough to get his mind around a problem—about four hours a day.
The key here isn’t that these thinkers didn’t work hard. It’s that the work they did was directed solely at what they thought was most valuable.
And it’s the same in other fields where thinking is important as well, whether that’s Charles Dickens, or Emile Zola or Thomas Mann.
What about today's PhDs or early-tenure faculty, and their research agendas? The stories have been well told multiple times, so I'm not going to do another deep dive here; suffice it to say that their lives stand in stark contrast to those of Poincaré and Darwin above. Or even to the two-year shut-in studies of Newton. Today the stories are primarily ones of burnout, the vagaries of peer review and incessant pressure to publish.
The space to explore your original ideas is the most useful gift of all, and the hardest to give. Prizes and grants are helpful, but they are stopgap measures until the next prize or grant, and even then they depend on you knowing how, or wanting, to apply in the first place. I'm not sure that is congruent with what we want to achieve either.
The only way out of this rat race that I can see is to re-create a safe space. When you are no longer consumed by the worry of career sustenance and growth, at least a subset of folks might choose to work on what they think is important. Signalling will never fully die, of course, but perhaps we can dampen it.
It's not purely an argument to reduce the number of hours worked, though that might be a result. It's a more holistic push to reduce the pressure to pursue the short-term distractions that keep the Red Queen race going.
This space has shrunk dramatically in the era of big science, and it is what we ought to bring back. The flip side of having moved decisively to a world where doers are the undisputed rulers is that we forgot how to create the conditions for thinkers to excel - whether it's the pressure to publish or perish, or the increased competition that comes with increased scrutiny, we've stopped creating spaces to explore.
As we have increasingly favoured the doer part of the spectrum over the past half century or more, we’ve pushed ourselves to do things ever bigger and grander. And our estimation of those who manage to do this is also greater, our modern geniuses. But in doing so we forget that the other end of the spectrum doesn’t function under the same pressures. It needs space and leisure to let ideas bloom, and for people to spend time in exploration. We need to find a way to give this space, with enough cachet and prestige that it encourages people to dream. To think!
if you imagine taking a detective from the 19th century, teaming him up with a detective from the late 20th century, and giving them this problem to work on: that a suspect in a crime was seen one day to be walking down the street in the middle of London, and the next day was seen somewhere out in the desert in the middle of New Mexico. Now the 19th century detective will say, "Well, I haven't the faintest idea. I mean it must be some species of magic has happened." And he would have no idea about how to begin to solve what has happened here. For the 20th century detective, now he may never know whether the guy went on British Airways or United or American or where he hired his car from, or all that kind of stuff, he may never find those details, but there won't be any fundamental mystery about what has happened.
Douglas Adams, 2001
The difference between Douglas Adams’ 19th century detective and 20th century detective is that they’re living in different paradigms. The 19th century detective needs to upskill his entire knowledge pool, whereas the 20th century detective needs resources. The former is a question of inspiration and knowledge, the latter is a question of resources to gather information.
Noting here that there is significant debate about AGI development, up to and including foom scenarios, which might come about as a result of moar processing power instead of new paradigms.
Venture dollars keep hunting doers too, by the way, not thinkers. This is as it ever was.
I don't think this is true for free speech, for what it's worth, because the problem can barely be defined. We've been trying for a few centuries.