Vetocracy can be beneficial if a system’s strength depends on it not changing. For example, people invest in Bitcoin precisely because its monetary policy is incredibly difficult to change; Bitcoin doesn’t need to innovate.
But if Ethereum is too vetocratic and fails to innovate, it could be outcompeted by more nimble competitors like Solana or Avalanche.
The current mood in the AI Safety community appears to be pessimistic. For example, Eliezer Yudkowsky bet Bryan Caplan (at 2:1 odds) that humans will be extinct by January 1, 2030.
If you believe that inaction will lead to extinction, reducing vetoes and increasing the variance of outcomes could increase the probability we’ll survive.
Healthy people are fragile: increased variance mostly makes them worse off. Very sick people are antifragile: increased variance mostly makes them better off. So it is reasonable to give a terminal cancer patient an experimental drug: the worst case is that they die, which would have happened anyway, and the best case is that they recover. It’s all upside and no downside.
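The variance argument above can be sketched with a toy simulation. This is a minimal illustration, not from the original: `baseline` (how far a patient sits above or below a survival threshold) and `noise_sd` (the variance an intervention adds) are hypothetical modeling choices.

```python
import random

def survival_prob(baseline, noise_sd, threshold=0.0, trials=100_000, seed=0):
    """Estimate P(outcome > threshold) when outcome = baseline + Gaussian noise."""
    rng = random.Random(seed)
    survived = sum(
        1 for _ in range(trials) if baseline + rng.gauss(0, noise_sd) > threshold
    )
    return survived / trials

# Healthy patient: baseline well above the threshold, so with zero
# variance survival is certain; added variance only creates downside.
print(survival_prob(baseline=2.0, noise_sd=0.0))  # 1.0
print(survival_prob(baseline=2.0, noise_sd=3.0))  # noticeably below 1.0

# Terminal patient: baseline below the threshold, so with zero
# variance death is certain; added variance only creates upside.
print(survival_prob(baseline=-2.0, noise_sd=0.0))  # 0.0
print(survival_prob(baseline=-2.0, noise_sd=3.0))  # noticeably above 0.0
```

The asymmetry comes entirely from where the baseline sits relative to the threshold, which is the fragile/antifragile distinction in miniature.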
The “bureaucrat’s curse” reminds me of Vitalik’s bulldozer vs vetocracy political axis: https://vitalik.ca/general/2021/12/19/bullveto.html
As Scott Alexander says,