This is in part the problem that professionalization is supposed to solve. In occupations where workers have a great deal of independence, the actual work requires individual discretion, and outcomes are difficult to measure, the idea is that you can trust people when they have committed to the public practice of a certain kind of work culture. Accountants will do accounting well not just because of within-org incentives, but because they are part of a culture of accountants where inter-accountant competition over the weird obscure things accountants care about matters more than material reward. You can trust accountants, in a certain respect, because you know that when it comes time for the annual dinner celebrating St. Spreadsheet, your personal accountant will want to brag about how good she was at the arcane features of accounting. Same with professors, doctors, coders, etc. Admittedly, this is a very functionalist account. But when professionalization was happening in the late 1800s and early 1900s, people explicitly said "hey, if we make this thing a CULTURE, people will be more likely to trust us when we do it."

The problem is that these cultures themselves have an agent-principal problem. They often end up orienting professional clout not around doing-the-thing-we-ultimately-want-to-do but around doing-the-thing-that-distinguishes-us-in-the-most-distinctive-way-possible. Sometimes this is framed against market success entirely. Thus poets don't write pleasing poetry. They write the kind of poetry that will best impress other poets. REAL poetry exists outside the market, and so in some ways market success is a mark against a poem being good qua poetry. Thus academics don't necessarily seek public approval, or even a 'correct' view of the world, but instead seek to win at internal academic status games.

One of the good things I've seen about business and startup culture is that the internal professional competition does seem to honor risk-taking more than the other professions I've experienced. It may still be less than optimal, but certainly more than poets!


Historically, the principal-agent problem was solved through religion and gossip. Religion in that there were genuinely believed costs to your immortal soul, and gossip because populations were much smaller and less mobile, so word got around about the bad actors.

The agent-principal problem is an interesting inversion. Isn't it more easily solved through performance-based compensation? In sales, what did you sell? In software engineering, what did you code and how fast is it? In teaching, are your kids performing at or above grade level? None of these is perfect, but some number of measurable results along with some common-sense evaluations by managers seems to work well.
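The "measurable results plus manager judgment" idea can be sketched as a simple weighted blend. A minimal sketch, where the metrics, weights, and function name are all illustrative assumptions rather than anything from the thread:

```python
# Illustrative sketch: blend objective metrics with a subjective
# manager evaluation, trusting neither alone.

def comp_score(metrics, manager_eval, metric_weight=0.7):
    """Weighted mix of measurable results (each scored 0-1) and a
    manager's common-sense evaluation (0-1)."""
    objective = sum(metrics) / len(metrics)
    return metric_weight * objective + (1 - metric_weight) * manager_eval

# e.g. a salesperson: quota attainment and client retention,
# plus the manager's overall view.
score = comp_score([0.9, 0.6], manager_eval=0.8)
print(round(score, 3))  # 0.765
```

The weight is the interesting knob: push it to 1.0 and you get pure metric-chasing (Goodhart territory); push it to 0.0 and you are back to pure managerial discretion.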

Nov 10 · Liked by Rohit Krishnan

> You now have an interface between you and those who make the decisions

This is a real and major problem, but I'm actually not sure it's the primary contributor to the agent-principal problem.

This post doesn't quite articulate *why* agents become risk averse. In my experience, it's because blame is immediate but credit is slow. People optimize for not losing their jobs/pensions/etc. rather than doing the thing. Solving this requires delaying/mitigating blame and/or accelerating credit. That's especially hard in long-payoff bets like scientific research.
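The "blame is immediate but credit is slow" asymmetry can be made concrete with a toy expected-value calculation. A minimal sketch, where every number is an illustrative assumption, not something from the post:

```python
# A bet with positive expected value for the principal can still look
# negative to the agent if blame lands now and credit arrives late.

def agent_value(p_success, credit, blame, delay_years, discount_rate):
    """Agent's expected payoff when credit is discounted by its delay
    but blame (e.g. a failed project on your record) is immediate."""
    discounted_credit = credit / (1 + discount_rate) ** delay_years
    return p_success * discounted_credit - (1 - p_success) * blame

# A bet worth +0.5 units in undiscounted expectation...
p, credit, blame = 0.5, 3.0, 2.0
undiscounted = p * credit - (1 - p) * blame
print(round(undiscounted, 2))  # 0.5

# ...turns negative once credit is 10 years out at a 15% personal
# discount rate -- the long-payoff scientific-research case.
delayed = agent_value(p, credit, blame, delay_years=10, discount_rate=0.15)
print(round(delayed, 3))
```

On these assumed numbers the same bet flips sign for the agent, which is exactly why "accelerating credit" (reducing the delay) or "delaying/mitigating blame" (shrinking the immediate penalty) changes behavior.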

Nov 12 · edited Nov 12 · Liked by Rohit Krishnan

Hey Rohit, this is a great post but I'm a bit confused.

You've quoted the P/A problem as being: "... the conflict in interests and priorities that arises when one person or entity (the "agent") takes actions on behalf of another person or entity (the "principal"). The problem worsens when there is a greater discrepancy of interests and information between the principal and agent, as well as when the principal lacks the means to punish the agent. The deviation from the principal's interest by the agent is called "agency costs"."

Nowhere in that definition is there a mention of agents taking high risk vs low risk, or the directionality of the agent vs principal.

I would say that whether the agent is taking excessive risk (which you define as "principal agent") or low risk (which you have defined as "agent principal") - these are both just "discrepancy of interest" based on the original, quoted definition of the PA problem.

So isn't the low risk-taking still basically just the principal-agent problem, because agents are still being incentivised in a way that doesn't align with what the principal wants?

Also what are your thoughts on the principal/agent problem in VC being significantly worse compared to the principal/agent problem in employment in big companies? I give my takes here:




Great post!

Is there a chance that this is cyclical?


This triggered me.


This is very intriguing. I believe that big corporations are the reason most humans (who are mediocre) can climb social ladders, and I think the agent-principal problem comes up as a side effect of that. When you have highly competent people working for you, you should fear the principal-agent problem; with mediocre people, I guess it is the other way around.

(My post on Big Biz and Human progress here: https://carnivas.com/big-biz-and-human-progress-57439d6796a7)


This was a great post. Do you think most organizations should embrace more risk and give agency to more people?


Interesting, thanks Rohit. Shouldn't this lack of decision-making at lower levels be related to the centralisation of power/capital? Ever more concentrated power means ever less decision-making for the agents. Some creative incentive rules may somewhat compensate, but that is like fighting gravity.
