Every non-trivial program contains at least one bug.
Murphy's computer laws
President Obama said he’d take a scalpel to the budget instead of a machete. So far, though, he appears to be wielding a butter knife.
CBS News, 2011
TL;DR: All orgs start with high motivation, high agency and limited rules. Then they grow. The consequence of scale is chaos. The antidote to this chaos is properly codified rules. The consequence of having a few rules is more loopholes and goodharting, as the intrinsic motivation to be a good citizen drops over time. The antidote to this is better codified and expanded rulesets. The consequence of expansive rulesets is opacity and bureaucracy. The antidote to large expansive rulesets is simpler rules followed by motivated players at their discretion.
I - Laws are complicated, burning witches is easy
When you think of complicated and opaque setups, it's hard to top the legal system. Of course, it didn't use to be this way. In fact, there used to be a fun way to figure out if someone did something naughty in the good old days. The trial by ordeal would ask the accused to pick up a red-hot bar of iron or pluck a stone out of a cauldron of boiling water. If you started to heal in a few days, you knew you had God on your side, so all was well. Otherwise, well, too bad! But things were simple.
Life got more complicated soon after, because clearly the simple method had a couple of flaws. So you had various curia helping make up laws, and magistrates passing judgements. When Henry II came to the throne in 1154, there were a grand total of 12 judges in the country. He chose 5 members of his household “to hear all the complaints of the realm and to do right”, the origin of the Court of Common Pleas. Soon a new system popped up. Some judges would sit in London, making up the common law that would apply across the country, common to all. The rest would travel around the country deciding cases.
Before long there were multiple courts: the King's Council, the Court of Common Pleas and the Court of Exchequer. These continued to evolve over the hundreds of years since, as did the legal system around them.
The common law system was an improvement on what had gone before, but it was still slow, highly technical – making procedural mistakes that could ruin a case all too likely – and vulnerable to corruption, especially when juries were used.
Time was, we used to ask our Kings to make decisions for us. Sometimes it was great, often horrible, but always straightforward.
Fortunately, those who felt they had been failed by the common law system could still petition the King with their grievances.
Even in the more complex cases, we could ask someone else to cut the Gordian knot. We do far better these days. For one thing, there are more than 12 judges; there are around 40,000 in the UK. We also, for better and worse, have reams and reams of well-codified laws and precedents on our side. But we still run into the problem of increasing complexity.
Even in a world where all individual rules are perfectly clear and cost-efficient, knowing how to comply could still be burdensome. An effort burden would be associated with learning all the rules, and an information burden would be associated with compiling the evidence needed to test for and comply with the rules. But beyond that, the system of rules could be difficult to navigate and predict because of the interactions between the multitude of rules and institutions administering them. Complying with one rule could require actions that make complying with another rule more difficult. Similarly, because legal rules often are interrelated through techniques such as cross-referencing and stare decisis, how one rule is interpreted and applied could affect the meaning or operation of other rules.
Sound familiar? If that sounds exactly like the problems of management, you're not alone. It's the reason it takes 7 committees to decide anything in any large organisation.
Every time we add a precedent, add another law, or adjust an existing one to conform better to reality, we're essentially trying to remove a certain part of the system from individual biases and make it fair. We try to solve for the fact that we don't want too much discretion placed in judges and juries by finding algorithmic solutions that supposedly remove bias. And yet the system's lack of auditability makes it susceptible to the same complaint of opacity.
Do you believe the U.S. Tax Code is too complex? If so, you are in good company—most Americans believe the Tax Code is too complex. Many legal scholars believe the Tax Code is too complex. Even the Internal Revenue Service’s own National Taxpayer Advocate Service believes the Tax Code is too complex.
That's the paradox of rules. In the beginning they were essential to basic survival over the chaos of competing demands. And soon there are enough of them that they overwhelm any system they come across.
II - Why we dislike simple rules
Robin Hanson has written about the unreasonable effectiveness of simple rules over allowing discretion in individual cases. He cites multiple cases where we would have done better choosing a rule algorithmically rather than relying on expert human judgement, including in matters of taxation, QALY-based treatment choice in the NHS, hiring of experts like lawyers, and investing in actively managed funds (or, worse, actively managing funds ourselves).
The common pattern here seems to me to be a dislike of clear formal overt rules, mechanisms, and criteria, relative to informal decisions and negotiations. Especially disliked are rules based on explicit metrics that might reject or disapprove people. To the extent that there are rules, there seems to be a preference for authorizing some people to have discretion to make arbitrary choices, regarding which they are not held strongly to account.
There are other articles that say the same thing. One looked at whether statistical prediction rules (SPRs) provide more reliable outcomes than collated expert judgement. Across a wide range of domains, the rules work better. They are better at predicting job performance, criminal recidivism, and academic performance; better than psychologists at diagnosing patients on some conditions; better than bankers at predicting loan and credit risk; and even better at predicting the price of mature Bordeaux red wine.
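What does such a rule actually look like? Often it's nothing more than a fixed weighted sum of a few observable features. Here's a minimal sketch in the spirit of Ashenfelter's famous Bordeaux regression; the weights below are illustrative stand-ins, not his published coefficients.

```python
# A statistical prediction rule reduces to a fixed formula over a handful
# of observable features. Weights here are illustrative, not the actual
# coefficients from Ashenfelter's Bordeaux wine regression.
def predict_vintage_quality(winter_rain_mm, growing_season_temp_c, harvest_rain_mm):
    """Higher is better. Applied mechanically: same inputs, same answer."""
    return (0.001 * winter_rain_mm          # wet winters help
            + 0.6 * growing_season_temp_c   # warm growing seasons help
            - 0.004 * harvest_rain_mm)      # rain at harvest hurts

# A warm year with a dry harvest beats a cool year with a wet one:
good_year = predict_vintage_quality(600, 17.5, 100)
poor_year = predict_vintage_quality(550, 16.0, 180)
print(good_year > poor_year)  # True
```

The point is not the particular weights but that, once fitted, the rule involves no tasting and no expert discretion; it is perfectly consistent, which is exactly what expert judgement is not.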
So why aren't SPRs in use everywhere? Probably, suggest Bishop & Trout, we deny or ignore the success of SPRs because of deep-seated cognitive biases, such as overconfidence in our own judgments.
Robin says something similar, though he has a more sinister explanation.
However, my best guess is that most people mainly favor discretion as a way to promote an informal favoritism from which they expect to benefit. They believe that they are unusually smart, attractive, charismatic, well-connected, and well-liked, just the sort of people who tend to be favored by informal discretion.
The core thesis is that across a large number of areas, preferring rules over individual decision-making discretion means better outcomes. This seems to be true, at least directionally, whether we're talking about medicine or wine tasting.
And yet the argument is that in setting after setting we seem to prefer leaving things up to experts or discretionary decision making, rather than a rule that's shown to have superior predictive capabilities.
It's a puzzle.
III - Finding loopholes
There’s another situation where we have a problem with rules. We’re so often pissed at the unbending over-reliance on specific rulesets. This shows up in frustrations with almost all of bureaucracy, especially anything to do with the government, and most recently manifested in the FDA's absolute debacle with our Covid-related programs. In cases like these we need there to be more discretion within the system, but we don’t allow it, because we don’t trust those who would possess such discretion, and because we punish those who get any negative results.
Nobody can argue that these agencies are not rule-bound. If anything, they have so many rules, applying to so many situations, covering so many products, with so many loopholes, that the sheer effort of following the rules is supposedly one of the major reasons pharmaceutical innovation has stalled. Even though I'm skeptical that this is the reason behind the lack of innovation, the fact that there's enough paperwork to knock over a building stands true.
Whatever your arguments about the FDA, or other three-letter agencies like the FAA, the CDC or the SEC, you can't credibly argue that they are full of cowboys. In a buddy cop movie these organisations would be the equivalent of the old fuddy-duddy captain who never seems to fudge the rules, even to save a life.
They are the paradigmatic examples of the failure modes that come from an over-reliance on rules. They're impersonal and have unintended consequences through good old-fashioned Goodharting. But even with all of that, there are far too many areas where we prefer to retain decision-making flexibility, because relying on rulesets requires us to create rulesets that tell us how and when to change those rulesets, and that just turns out to be pretty hard.
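Goodharting has a simple mechanical core: once the measurable proxy diverges from the true goal, anyone optimising the proxy drifts away from the goal. A toy sketch, with entirely hypothetical payoffs:

```python
# Toy model of Goodharting (all numbers hypothetical). An agent allocates
# units of real effort and units of metric-gaming; the measured proxy
# over-credits gaming even though gaming creates no real value.
def proxy_score(effort, gaming):
    return effort + 2 * gaming  # the metric rewards gaming twice as much

def true_value(effort, gaming):
    return effort               # only effort produces real value

# All (effort, gaming) combinations with 0-3 units of each.
actions = [(e, g) for e in range(4) for g in range(4)]

chosen_by_metric = max(actions, key=lambda a: proxy_score(*a))
chosen_by_goal = max(actions, key=lambda a: true_value(*a))

print(chosen_by_metric)  # (3, 3): a pure metric-chaser maxes out gaming too
print(chosen_by_goal)    # (3, 0): the aligned choice wastes nothing on gaming
```

Both agents put in full effort; the difference is that the rule-scored one also burns three units on gaming the measure, which is precisely the behaviour the ruleset was supposed to prevent.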
We see this today in areas where we use AI tools, where we're using algorithms to help decide everything from criminal sentencing to tumour diagnoses. But even there the key isn't just the fact that we're using an algorithm, it's the fact that we're using an algorithm that's undecipherable.
The more charitable interpretation is that we like the feeling of having discretion in areas that are so complex, with so many moving parts, that human heuristics seem to work better than fixed rules.
We see this repeatedly. In companies trying to craft their strategies, in political manoeuvring, in establishment creation within science and beyond. We see it in legislation creation and jurisprudence.
The key in these systems seems to be that allowing some discretion can mean better outcomes if and only if everyone's aligned and motivated. But we're often unable to presume such motivation from the participants, and so strict rules are the only option. "Computer says no" becomes the default because we don't trust the human to say yes or no.
IV - How rulesets evolve
Our organisations are often in charge of both setting and enforcing rulesets, and the way organisations develop shows us how those rulesets evolve over time.
Early on, intrinsic motivation is key. You see this in startups or new non-profits or new governmental departments just set up. When these organisations are new, you often start by having a tremendous amount of leeway, because you need a ton of self directed motivation amongst the participants to achieve anything. You have barely any rules, just a few goals you're trying to achieve.
Soon you start needing to create and set some rules in place. Why? As organisations increase in size, you have to start standardising your processes to increase efficiency and just plain to get things done. Even if you wouldn't want to, there's no escape from the coordination tax once at scale.
Over time, the rules get complex enough that they start to become boondoggles. Large companies start creating internal bureaucracy. Government organisations start having both formal rules and informal norms that add up to the way things are done. And since no small or straightforward ruleset can fully handle the complexities of the actual world, soon these rules start adding up, creating layers upon layers of overlapping directives.
Patrick (of patio11 fame) has written about his experience as Stripe scaled:
We got an incredibly long way on having a very small group of Stripes care passionately about e.g. product launches, but “the entire team working on this page could split a pizza” doesn’t scale to e.g. needing to localize it for 40 countries, give regulators in many of those countries a heads-up, get formal approval for quotes from enterprises with A Process For That where previously we could have just texted a startup founder, etc.
That’s an example of an organisation needing to start putting rules down.
Rules are needed because they're more accurate than using discretion, and because they're necessary to handle increased coordination challenges.
Even in large companies, when they occasionally break free of their bureaucratic past, it's because the organisation has had a full overhaul and a complete rejigging of the motivation of its employees.
Or you can look beyond companies, to the political world, where the same cycle plays out.
The best recent example of an organisation that moved from the silly quadrant at the bottom right to a more engaged quadrant at the top might be Microsoft. After a decade of well-mocked stultification across its products and reliability, when it missed the boat on the internet and on mobile, Satya Nadella rejuvenated the organisation, both through an actual restructuring of the company to make it meaningfully different, and by ensuring that those who worked there were motivated to actually push ahead towards publicly stated goals.
Self-directed motivation as the antidote to layers of bureaucracy: it seems to be a cycle that turns with frustrating regularity. At McKinsey, our expense policy had a "red face test", basically asking whether you'd feel embarrassed asking for reimbursement for something in front of your manager. There was no explicit rule, because the company could rely on the intrinsic honour of its employees not to break norms. It was easier, because the list of rules needed to enforce the same outcome with untrustworthy participants would’ve filled several books.
And yet, when I look at the rulesets that we have used for error correction over a long enough period of time, centuries, such as our legal systems, we see that at least in human domains error-correction isn't as easy as it seems, and comes with the price of an ossified bureaucracy. The only way out is to rip and replace. Sometimes we have to start new organisations.
We intrinsically expect all organisations to live forever, and therefore get upset that they're full of "crud" as they get older. Unless very tightly bounded, most organisations aren’t going to be immortal. You need to either make the employees and stakeholders sufficiently motivated that they’ll do what it takes to fulfill the mission (at the cost of some efficiency), or make it so strictly ordered that it can break through barriers (at the cost of some flexibility).
Planned senescence is almost seen as anathema, and we’re constantly surprised when an organisation turns out to be stuck in some version of a local optimum, an evolutionary cul-de-sac. We shouldn’t be; occasionally we need a machete, not a scalpel, to effect change.
Appendix 1 - A World Full Of Rules
It's worth wondering what a world designed by strict rules would look like. Luckily, we have one that we use every single day. The world is filled with systems where we have to leave plenty of space for discretion, but there is one world which works exactly according to its laws: software.
Software only does exactly what we tell it to do. Unlike a company or a department or a legislature, a world created with software is entirely made up of rules. There is no discretion, no intrinsic expert motivation, no room for error created through cognitive biases.
After all, if there's no room for discretion, then there's no room for bias, there's no room for corruption, there's just the explicit beauty of a system doing exactly what it was designed for - no more, no less.
Even software, however, once it becomes large enough, can no longer be made easily explicable. For instance Facebook, now that its codebase is easily north of a hundred million lines, requires behavioural analysis to debug. It's too large to go through individual features easily. There are way too many possibilities for how the different parts of the code can interact with each other.
And despite this (or because of it) the world of software bugs is amazing. Murphy's law here tells us that any piece of software has at least one bug, and that the larger the codebase, the larger the set of bugs.
Debugging a system is an extraordinary amount of work, requiring huge teams working closely over long periods of time. And it's not easy. In most instances, you only debug enough to make a piece of software good enough to release. To go much further requires tremendous resources.
The closest we've come to bug-free, supposedly, is the onboard software for the Space Shuttle, which had one bug in half a million lines of code, and was maintained by 260 people, including a large team of verifiers.
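The gulf between ordinary and Shuttle-grade reliability is easy to put in numbers. A back-of-envelope sketch, using the folklore range of roughly 1 to 25 defects per thousand delivered lines for typical commercial code (the exact rates are illustrative):

```python
# Back-of-envelope defect arithmetic. The ~1-25 bugs per 1,000 lines range
# is industry folklore for commercial software; the Shuttle rate follows
# from "one bug in half a million lines", i.e. 0.002 bugs per KLOC.
def expected_bugs(lines_of_code, bugs_per_kloc):
    return (lines_of_code / 1000) * bugs_per_kloc

typical = expected_bugs(500_000, 15)     # mid-range commercial defect density
shuttle = expected_bugs(500_000, 0.002)  # Shuttle-grade: 1 bug per 500 KLOC

print(typical)  # 7500.0
print(shuttle)  # 1.0
```

Same codebase size, a difference of nearly four orders of magnitude in expected bugs, which is what the 260-person verification effort was buying.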
But this level of effort is difficult to muster up for every piece of software. We'd quickly drown in QA engineers.
(This is also the problem, at least as of now, with DAOs. Either they are limited in function, in which case their impact on the world will be limited, or they have unseen bugs. It's the curse of inflexibility, of rules compounded.)
Appendix 2 - Strategies to win
There’s plenty written about how difficult it is to reinvigorate an old company and instill a sense of motivation, at least enough to make up for old-school bureaucracy. Which is why we need new institutions to replace the old guard. This is why, when faced with the monstrosity of a government department, the only way out seems to be to start anew. For instance, in the last decade, you would easily have been in the top decile of investors if you:
Bought Bitcoin or Ethereum
Bought every tech IPO
Invested in every pre-IPO software company
Bought into the "hottest" deals of the year, regardless of price
Each one of these is also a ruleset that breaks the symmetry of risk-reward. This is the counterfactual to what I wrote about in Part I of Strategy Decay, where minor additions to the usual diligence process increase your reward.
This is also insane. Markets are not supposed to work this way. You're not supposed to be able to make 3x, 10x or 100x your money this easily. The market is supposed to be slightly more efficient than that! (I pitched ideas 3 and 4 from the list where I work and was roundly rejected.) I love the fact that every so often there is such an egregious mispricing in the overall market that extremely simple strategies can outperform even the most sophisticated ones.
Is it because those who're doing it are prescient, or because they're happy to take the risk? Part of it might be survivorship bias. For instance, it was probably an even better bet to put money in Dogecoin, though that feels more like a gamble on Elon’s tweets. And following these strategies is not comparable to Roubini's calls for market crashes every year for the past two decades. These aren't fads, like investing in cryptokitties and GME calls, or pure long-term trends, like investing in fintech.
These were specific, actionable rules, well understood as having high expected value, but because people worshipped the boundaries as currently drawn, weren't taken advantage of. They're closer to well informed players disregarding complex rulesets and going back to simpler strategies with some discretion. There's limited analysis or exhaustive research on deciding exact allocations amongst the options.
Even after agreeing in principle to try simple strategies, smarter people prefer complexity. It makes some of us feel smarter for having understood it, for having the ability to navigate the minotaur's maze. Simpler rules won't always work, but they're not always wrong either. In fact, with some discretion in how you choose them, they're kind of magic, whether that's an invalidation of the EMH or just a better way of utilising the inside view.
Simple rules feel like cheating to us. We like to meet the complexity of the world with the complexity that emerges from our minds. Simplicity feels like we're either getting lucky or somehow cheating the system. It's useful to remind ourselves that we're not.