Your ability and patience to follow Peters’ arguments are greater than mine, but his insistence that the field of economics is broken because its standard Expected Utility Theory (axioms defined by von Neumann, who might just have thought about ergodicity a little...) neglected ergodic considerations reminds me of this XKCD. Economists don’t actually expect rational decision makers to exhibit zero risk aversion and take the bet, and Peters’ paper acknowledges that the practical implementation of his ideas is the well-known Kelly criterion. And EA utilitarians are unusually fond of betting on stuff they believe is significantly +EV in prediction markets without giving away their entire bankroll each time (whether they use the Kelly criterion or some other bet-sizing rule), so they don’t need a paradigm shift to agree that if asked to bet the future of the human race on a 51/49 doubling/doom game, the winning move is not to play.
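For concreteness, the Kelly arithmetic for that game is simple: an even-money bet won with probability 0.51 has an optimal stake of f* = p − q = 0.02, i.e. 2% of the bankroll, nowhere near all of it. A minimal sketch (the helper name and numbers are mine for illustration, not anything from Peters’ paper):

```python
# Kelly sizing for the 51/49 doubling/doom game discussed above.

def kelly_fraction(p: float, b: float = 1.0) -> float:
    """Optimal fraction of bankroll to stake on a bet paying b-to-1
    with win probability p. f* = p - (1 - p) / b; <= 0 means don't bet."""
    return p - (1.0 - p) / b

# Even-money bet: double the stake with p = 0.51, lose it with 0.49.
print(kelly_fraction(0.51))  # 0.02 -> stake 2% of the bankroll

# Staking everything (f = 1) has expected log-growth of minus infinity,
# since a single loss wipes you out -- hence "the winning move is not to
# play" when the stake is forced to be the entire bankroll.
```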
afaik the only EA to have got “on the train to crazytown” to the point of telling interviewers he’d definitely pick the 51% chance of doubling world happiness at the 49% risk of ending the world is SBF, and that niche approach to risk tolerance isn’t unrelated to his rise and fall. (It’s perhaps a mild indictment of EA philosophers’ tolerance of “the train to crazytown” that he publicly advocated this as the EA perspective on utilitarianism without much pushback, but EA utilitarianism is more often criticised for the exact opposite tendency: longtermism being extremely risk averse. Not knowing what the distribution of future outcomes actually looks like is a much bigger problem than naive maximization.)