The agent principal problem
"Show me the incentive and I will show you the outcome."
I wrote a book on AI development, exploring how it works, its history, and its future. Order it here. And drop a review if you’d be so kind!
You hire a contractor to fix your house; his incentive is to spend your money freely, while yours is not. You hire a financial advisor; her incentive is to collect more fees. You get an investment advisor; they also want fees, plus a cut of the profits.
This is the principal-agent problem: the agent knows how to do something, often knows more than you, and you have to figure out how to trust them. Often the biggest worry is getting ripped off, because the agents don’t mind if you lose your money or your floorboards; the way they get paid is weighted towards the upside. The agents have a floor on their downside exposure, while principals like you want to be more prudent with the money.
Adam Smith said of it in 1776:
The directors of such [joint-stock] companies, however, being the managers rather of other people’s money than of their own, it cannot well be expected, that they should watch over it with the same anxious vigilance with which the partners in a private company frequently watch over their own. Like the stewards of a rich man, they are apt to consider attention to small matters as not for their master’s honour, and very easily give themselves a dispensation from having it. Negligence and profusion, therefore, must always prevail, more or less, in the management of the affairs of such a company.
But most of the economy is not like this, because agents aren't sole actors. They are often organisations, with big teams and their own internal dynamics. Such as:
People in the organisation want to get promoted and do things aligned with this goal, which might be orthogonal to what everyone else wants.
People in the organisation therefore want to be liked and seen as capable. Maybe even action-oriented.
People have differing points of view on what needs to be done and fall into fads, including internal fights for recognition.
This is a huge dampening factor that stops large organisations from being all that innovative. It pushes instead towards indecision, vacillation, and risk aversion.
What this creates is a dynamic opposite to the usual worry about agents, one we don’t talk about much. One where principals might want more risk to be taken, but the agents don’t.
This is because even though the Agent might be misaligned in one direction from what the Principal wants, the Agents who work below them often push the entire enterprise in the other direction.
I call this the Cautious Steward problem. Or, if you’re ok with a chiasmus, the Agent-Principal problem.
The original thesis is known as the principal agent problem.
The principal–agent problem refers to the conflict in interests and priorities that arises when one person or entity (the "agent") takes actions on behalf of another person or entity (the "principal"). The problem worsens when there is a greater discrepancy of interests and information between the principal and agent, as well as when the principal lacks the means to punish the agent. The deviation from the principal's interest by the agent is called "agency costs".
If you want to understand the modern corporate world, this isn't a bad starting point. Jensen and Meckling’s seminal 1976 analysis of it is a great example. And whether it's investment firms, corporations, or government organisations, the principle applies widely.
Sometimes people with money or people who own stocks in a company want the company to do well. But they don't have time to run it and make it do what they want. So they hire people to go do it.
But once you hire them and they're running things, how do you know they will do what you want? You might want them to grow the company for the long run, and they might want to get paid as much as possible while not doing much.
The thing is, they know what's going on far better than you do, so you're basically at their mercy. “Look, the market tanked, not my fault,” they'd say, or “look, I didn't know the company we acquired had such a spendthrift CEO, he seemed trustworthy”. And you have to believe them, because what else is there to do?
So we found workarounds: employee stock options, measuring agency costs, and much more. These seemed to work, and we got findings like this:
Kahn and Sherer (1990) find that better evaluations of white-collar office workers were achieved by those employees who had a steeper relation between evaluations and pay.
The principal-agent problem in public services can be solved by performance monitoring and targeting, but due to the specific nature of the output of service agencies and the form of remuneration offered to employees, these strategies are unlikely to result in superior outcomes for the principal.
There is very little correlation between performance pay of CEOs and the success of the companies they manage.
It's perhaps best seen in the case of investment firms. The biggest thing Principals worry about is that the Agents will tank their investment. Rightfully so. You want them to treat the money as their own and not take crazy risks just because they have nothing to lose and a lot to gain.
Anyway, there's a whole field of contract design that tries to understand this better and make it work, from optimal ways of paying people to figuring out how much to monitor them at their tasks. Like the old Charlie Munger quote:
Show me the incentive and I will show you the outcome.
There's all sorts of problems with this of course. We get things wrong in setting goals, we get it wrong in monitoring people, we get it wrong in measuring the actual outcomes, and we get it wrong in even knowing if we know any of the above.
That's the principal agent problem. We've been struggling with it since we had principals and agents, which is basically forever. I won't say we figured it out but we've found an equilibrium (or two).
This works as advertised as long as there is significant information asymmetry between the Principal and the Agent. But it also requires something else: the Agent has to be able and willing to do the thing, so the Principal doesn’t cut them off. The Agents not only have the power to mislead the Principal, they also have to be clever about not doing it too obviously.
But the mirror problem is the more interesting one, and insidious in our culture: the agent principal problem. Where the previous problem is how to stop your agent from just screwing you around and taking too much risk, this one's about how to incentivise them to take more risk, because they don't take enough.
The agent principal problem is where the agents actually take no initiative nor actively engage in their work, because the cost of doing so is much higher than any potential reward.
This didn’t happen as much in the past. It’s not that the rewards got smaller, quite the opposite; it’s that the costs got higher.
For instance, try opening a bank account or getting a housing loan when you’re self-employed: banks don’t understand how to deal with it, and mostly you don’t get to talk to anyone who can help you out. The “bank” per se might want to help you, its shareholders surely do, but the “banker” does not.
And why is that? If you were able to get in front of the shareholder who would profit from your decision, you might still get to open your account. But if you’re talking to a computer, or talking to a front office person sitting in the bank branch who’s authorised to do little beyond type things into a computer, there’s no escape.
As organisations got bigger, we had to automate more and hire more people. And that's why we get so many instances of “computer says no”.
So, the two reasons:
You now have an interface between you and those who make the decisions, and that creates a strict barrier, and
Being further away from doing the actual job sometimes means you're divorced from the outcome, and what doing a “good job” means has changed
Which also means that with increasing scale and complexity we have many, many more hard interfaces than soft ones. By putting acceptable risks in a box, the employees are made risk averse. And that makes us miss the opposite problem to the one we most worry about: the one where people become too risk averse once they're hired.
Being a counterintuitive problem, people have found ways to solve it only in very specific settings, and only somewhat successfully. Matt Levine had an article about Sam Bankman-Fried and his time at Jane Street, discussing how Jane Street trained its traders to take more risk than they otherwise would.
Jane Street, after all, is not betting its entire bankroll on any one intern, or trader, or trading team. Jane Street has a diversified portfolio of (it hopes) independent positive-expected-value bets created by different traders. If one trader bets her whole bankroll on a trade that is good in expectation, and it blows up and she loses everything, that’s fine for Jane Street: They have lots more traders doing bets like that, and in the long run the good bets will make more than the bad ones lose.
This is the same point about Venture Capitalists and their portfolios. Each CEO has only one company, so she needs to ensure it performs adequately, and is often highly motivated to avoid crazy risks. From the VC's point of view, they want every company to go as fast as possible and grow as big as it can.
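Levine's portfolio logic can be sketched in a few lines of Python. This is a toy model, not Jane Street's actual book: the 60/40 odds, the all-or-nothing payoff, and the trader count are made-up assumptions chosen to make the arithmetic visible. Any one trader loses everything 40% of the time; the firm, averaging over many independent positive-expected-value bets, reliably earns close to the 1.2x expectation.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def trader_outcome():
    # One positive-expected-value bet per trader: 60% chance to double
    # the $1 stake, 40% chance to lose it all.
    # Expected value per $1 staked = 0.6 * 2.0 + 0.4 * 0.0 = 1.2.
    return 2.0 if random.random() < 0.6 else 0.0

n_traders = 1000
book = [trader_outcome() for _ in range(n_traders)]

# For the firm, the law of large numbers does the work: the average
# return across the book converges on the 1.2 expectation, even though
# hundreds of individual traders blew up completely.
avg_return = sum(book) / n_traders
blowups = sum(1 for r in book if r == 0.0)

print(f"average return per $1 staked: {avg_return:.2f}")
print(f"traders who lost everything: {blowups} of {n_traders}")
```

The asymmetry is the point: the firm is indifferent to any single blowup because it holds the whole portfolio, while each trader, holding only her own track record, has every reason to be far more cautious than the firm would like.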
But these solutions are also rare, precisely because unlike what we expect, the challenge in most normal situations is not to stop the Agents lower down from taking wild risks, it’s to stop the Agents from being so risk averse they end up doing nothing.
Let's take the venture capital example again. Yes, so on the macro scale there's the worry that once the LPs entrust the money to the fund they might invest it willy nilly and waste it all. We’ve had plenty of evidence for this thesis in the past few years.
But what if you're one of the investors in such a fund? You have to justify your investment thesis to the others. And in so justifying will end up being quite a bit more risk averse, because it's not your money but it is your reputation. You don't get the benefits of the portfolio upfront, only eventually if you get there. Meanwhile you're a cautious steward, incapable of taking the risks you were hired for.
Anyone who's been an entrepreneur and tried to raise funds knows what I'm talking about, as does anyone who has worked at a fund. Or at most jobs, for that matter. Or dealt with the many variations of computer says no that we all treat as the modern way of life.
Once you see it, you notice it everywhere. Banks, governments, large companies, school districts, insurance, health provision.
Whereas the agents who have agency seem to abuse the principals through taking crazy risks, the agents who don't have immediate agency go the exact opposite direction.
This explains why you see CEOs taking big risks and pursuing large acquisitions which most likely might fail, like AT&T, but you don't see someone lower down taking initiative to help you out, like AT&T.
When we talk about scientists not embracing risk or taking on more ambitious projects, it's the same phenomenon. They're reliant on grant funding, and Scott Alexander's account of trying to get a small experiment past an IRB gives a glimpse of the hard interface creating a natural barrier.
This explains why Patrick McKenzie writes that banks today are prettily done up sales shops, and don't have the actual decision making authority that banks of yore used to have.
In the old days this was still somewhat solvable, because there weren't many agents. They could rely on the softer interactions of one person talking to another, instead of the harder interfaces that a set of rulebooks or computers are able to give us.
The agents who sit below the Agent don't really get to be part of the decision making, and their incentives are to push risk away rather than take the gamble and embrace it.
As we’ve gotten larger as a society, more organised, we’ve also become more competent and more risk averse. It’s useful to think of it not as individual moral failing, but rather the natural result of a very particular equilibrium we’ve accidentally stumbled onto.
With a lot of work, like with Jane Street for instance, we’ve seen we can climb out of this place. After all, if we don’t even know where we are, we can’t get out!
Thanks for reading Strange Loop Canon! Subscribe for free to receive new posts and support my work.