Interesting! Do you know the content of the complaints that people were making about the existing decision-making practices?
In general, some resources related to IIDM that I've grown from reading:
Inadequate Equilibria
How to deal with principal-agent problems, how systems get stuck in inadequate equilibria[1], how to get out of them, when you should/shouldn’t expect to be able to beat the market, and so on.
Paul Graham’s blog
Paul cofounded the startup incubator Y Combinator, so most of his advice is tailored to companies much smaller than 1,000 employees, but not all of it.
In The Power of the Marginal he talks about the importance of leaving people slack in order to inspire the most effective kind of work: the kind that doesn't have to compromise between several criteria.
Do Things That Don’t Scale is mostly applicable to startups, but even medium-sized companies need to do things urgently or imperfectly sometimes.
In How to Lose Time and Money he posits that “fake work” is the main way people lose time and money: we already have hard-wired alarms for luxury expenditures, but none for things that merely look like work (virtuous, legible token-results) without actually achieving anything.
Ben Kuhn’s blog
Ben’s one of the founders of Wave, which is now a ~1000-employee company, and his blog is gold. Alas, I think it mostly focuses on startup-culture norms, but a lot of it is still applicable.
Be impatient and “just deploy”; err on the side of action, because “time kills all deals” and attention is your scarcest resource.
Other essential concepts:
Scott Garrabrant has an excellent taxonomy of various ways a system could be goodharting and therefore failing to achieve its true objectives.
The larger the company grows, the greater the risk that it ends up goodharting on what's legible, i.e. what signals effort or due diligence (a small simulation after the quote below makes this concrete):
“The exertion of effort is deemed morally admirable (Studies 1–6) and is monetarily rewarded (Studies 2–6), even in situations where effort does not directly generate additional product, quality, or economic value.”
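To make that failure mode concrete, here's a minimal sketch of the regressional flavour of Goodhart (plain Python, all numbers invented for illustration): when you rank projects by a legible proxy that only partly tracks true value, the top-ranked ones systematically disappoint.

```python
import random

random.seed(0)

def simulate(n_candidates: int = 10_000, n_selected: int = 100) -> None:
    # Each project has a true value and a "legible" proxy score
    # (e.g. visible effort) that only partially tracks it.
    projects = []
    for _ in range(n_candidates):
        true_value = random.gauss(0, 1)
        proxy = true_value + random.gauss(0, 1)  # noisy, legible signal
        projects.append((proxy, true_value))

    # Select the projects that look best on the legible proxy.
    selected = sorted(projects, reverse=True)[:n_selected]

    mean_proxy = sum(p for p, _ in selected) / n_selected
    mean_true = sum(v for _, v in selected) / n_selected
    print(f"mean proxy of selected:      {mean_proxy:.2f}")
    print(f"mean true value of selected: {mean_true:.2f}")  # ~half the proxy

simulate()
```

The harder you select on the proxy, the larger the gap between how good the winners look and how good they are.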
Parallel vs serial problem analysis. Some problems are inherently serial and cannot be sliced up and distributed to several employees. Effective institutional leadership means recognising which problems are and aren't parallelisable, slicing them up to make them more so, and avoiding deadlock from inefficiently coordinated slices. There's a bunch of computer science on parallel processing that's applicable to thinking about institutional design (not to be applied blindly, of course); see the sketch below.
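For instance, Amdahl's law bounds how much adding workers can help when some fraction of the work is inherently serial. Treating employees as processors is a loose analogy, but the ceiling it predicts is real:

```python
def amdahl_speedup(serial_fraction: float, n_workers: int) -> float:
    """Max speedup when a fraction of the work is inherently serial."""
    return 1 / (serial_fraction + (1 - serial_fraction) / n_workers)

# Even with 90% of a project parallelisable, headcount hits a hard ceiling:
for n in (1, 10, 100, 1000):
    print(n, round(amdahl_speedup(0.1, n), 1))
# -> 1.0, 5.3, 9.2, 9.9  (the ceiling is 1 / 0.1 = 10x, no matter the headcount)
```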
Technical debt, although if you're a software company you're probably well aware already. The point is just that technical debt (or “coordination debt”) is very applicable to institutions as well, because the activation cost of new coordination schemes rapidly increases as the company grows.
This is just a Nash equilibrium where everyone has incentives to perpetuate the status quo. Specifically, it's an equilibrium the system stays stuck in even if everyone learned, with certainty, that another Nash equilibrium existed that's better for everyone: they can't easily switch to it, because switching requires simultaneous coordination.[2]
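A toy stag hunt shows the trap (payoffs invented for illustration): both the status quo and the better scheme are Nash equilibria, but no one benefits from moving first.

```python
# A toy two-player stag hunt. Both "stag" (new scheme) and "hare"
# (status quo) are Nash equilibria, but unilateral deviation from
# "hare" is punished, so the worse equilibrium is self-reinforcing.
PAYOFF = {  # (my move, their move) -> my payoff
    ("stag", "stag"): 4, ("stag", "hare"): 0,
    ("hare", "stag"): 2, ("hare", "hare"): 2,
}

def best_response(their_move: str) -> str:
    return max(("stag", "hare"), key=lambda my: PAYOFF[(my, their_move)])

print(best_response("hare"))  # 'hare': nobody wants to move first
print(best_response("stag"))  # 'stag': better for everyone, if coordinated
```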
Thus we have the concept of “coordination activation energy/thresholds”: a one-time upfront cost you have to pay to reach a higher Nash equilibrium. Perhaps the best and most severely under-utilised tool I know of for overcoming activation thresholds for coordination is the idea of assurance contracts (wikipedia).
An assurance contract is a contract of the form “I commit to X if Y other people do the same”. For example, “I commit to come to a protest if 100K other people make the same commitment”. If fewer than 100K sign this contract, it has no effect. If 100K or more sign it, it goes into effect and everyone who signed it is expected to come to the protest.
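Mechanically it's just a threshold trigger; a minimal sketch of the idea (hypothetical names, plain Python):

```python
class AssuranceContract:
    """'I commit to X if at least `threshold` others do the same.'"""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.pledges: set[str] = set()

    def pledge(self, who: str) -> None:
        self.pledges.add(who)

    def in_effect(self) -> bool:
        # Nobody is bound unless the threshold is reached, so pledging
        # is (nearly) free: it removes the cost of moving first.
        return len(self.pledges) >= self.threshold

contract = AssuranceContract(threshold=3)
for person in ("ana", "bo", "cy"):
    contract.pledge(person)
print(contract.in_effect())  # True: all signers are now expected to follow through
```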
If any of the institutional problems the company faces are the result of an inadequate equilibrium that can plausibly be addressed with an assurance contract, I'd be happy to help put you in touch with people who know more about it and (optionally) share my ideas for how to practically go about it.
Thanks Emrik, I’ll check some of these out.
I'm conscious of it being a mid-sized corporate, so I need to keep it pretty simple. I'm focusing on helping them improve their expected value calculations as they pursue new products and features. They call EV 'predicted ROI', which reminds me of the importance of using their language and avoiding EA/philosophy language.
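For concreteness, a probability-weighted 'predicted ROI' calculation might look like this minimal sketch (all numbers, and the EV-over-investment convention, are invented for illustration; their in-house definition may differ):

```python
# Hypothetical probability-weighted "predicted ROI" for one feature.
# scenarios: (probability, net payoff) -- probabilities must sum to 1.
scenarios = [
    (0.2, 500_000),   # feature is a hit
    (0.5, 100_000),   # modest uptake
    (0.3, -150_000),  # flop (build cost not recouped)
]
assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9

expected_value = sum(p * payoff for p, payoff in scenarios)
investment = 120_000
print(f"EV: {expected_value:,.0f}")                         # EV: 105,000
print(f"predicted ROI: {expected_value / investment:.0%}")  # predicted ROI: 88%
```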
Next steps: I’ll look for an academic partner to help create a robust study.