I have become increasingly open to incorporating alternative decision theories, as I recognize that I cannot be entirely certain about expected value (EV) approaches, which means that (per expected value!) I probably should not rely solely on one approach. At the same time, I am still not convinced that there is a clear, good alternative, and I also repeatedly find that the arguments against using EV are not compelling (e.g., because they ignore more sophisticated ways of applying EV).
Having grappled with the problem of EV-fanaticism for a long time, in part due to the wild norms of competitive policy debate (e.g., here, here, and here), I’ve thought a lot about this and written many comments on the forum about it. My expectation is that this comment won’t gain enough attention/interest to warrant going through and collecting all of those instances, but my short summary is something like:
Fight EV fire with EV fire: Countervailing outcomes—e.g., the risk that doing X has a negative 999999999… effect—are extremely important when dealing with highly speculative estimates. Sure, someone could argue that if you don’t give $20 to the random guy wearing a tinfoil hat and holding a remote which he will use to destroy 3^3^3 galaxies, there’s at least a 0.000000...00001% chance he’s telling the truth, but there’s also a decent chance that paying him could have the opposite effect through some (perhaps hard-to-identify) alternative mechanism.
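The mugger arithmetic above can be sketched numerically. All the probabilities and payoff magnitudes below are made up for illustration, not taken from any real estimate:

```python
# A toy sketch of the "fight EV fire with EV fire" point, using made-up
# numbers. A one-sided EV calculation says to pay the mugger; allowing a
# comparably speculative countervailing outcome cancels it out.

p_claim_true = 1e-9        # assumed sliver of credence that the threat is real
payoff = 3.0 ** 27         # stand-in magnitude for "3^3^3 galaxies" of value

# Naive one-sided EV of paying the $20: tiny probability times huge payoff.
ev_one_sided = p_claim_true * payoff

# Countervailing outcome: paying could, with a similarly unknowable
# probability, cause comparable harm through some hard-to-identify channel.
p_backfire = 1e-9
ev_two_sided = p_claim_true * payoff - p_backfire * payoff

print(ev_one_sided)   # dwarfs the $20 cost on its own
print(ev_two_sided)   # nets out to zero once the downside is included
```

The point is not the particular numbers but that the conclusion flips entirely depending on which equally speculative terms you choose to include.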
One should probably distinguish between extremely low (e.g., 0.00001%) estimates which are the result of well-understood or “objective”[1] analyses that you expect cannot be improved by further analysis or information collection (e.g., you can directly see/show the probability written in a computer program, or a series of flips with a fair coin), vs. estimates that are highly subjective, which you expect you will likely adjust downwards with further analysis but where you just can’t immediately rule out some sliver of uncertainty.[2]
Often you should recognize that when you get into small probability spaces for “subjective” questions, you are at very high risk of being swayed by random noise or by deliberate bias in argument/information selection. For example, if you’ve never thought about how nanotech could cause extinction and you listen to someone who gives you a sample of arguments/information in favor of the risks, you likely will not immediately know the counterarguments, and you should update downwards based on the expectation that the sample you were exposed to probably exaggerates the underlying evidence.
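The downward update for biased argument selection can be illustrated with a toy Bayesian model. Everything here (the function, the numbers, the independence assumption, the assumption that the advocate hides all unfavorable arguments) is a hypothetical simplification of my own, not anything from the comment:

```python
# Toy model: each of `total` independent arguments favors "risk is real" with
# probability q_high if the risk is real and q_low if it is not. An advocate
# shows you only the `favorable` arguments and hides the rest.
from math import comb

def posterior_risk(favorable, total, prior=0.5, q_high=0.8, q_low=0.4):
    """P(risk is real | `favorable` of `total` arguments favor the risk)."""
    like_high = comb(total, favorable) * q_high**favorable * (1 - q_high)**(total - favorable)
    like_low = comb(total, favorable) * q_low**favorable * (1 - q_low)**(total - favorable)
    return prior * like_high / (prior * like_high + (1 - prior) * like_low)

# Naive listener: treats 5 favorable arguments as if they were the whole sample.
naive = posterior_risk(favorable=5, total=5)

# Savvier listener: suspects the advocate drew them from, say, 12 arguments and
# showed only the favorable ones, so the hidden 7 count as unfavorable evidence.
adjusted = posterior_risk(favorable=5, total=12)

print(naive, adjusted)   # the selection-aware posterior is lower
```

Under these (invented) parameters, accounting for the hidden unfavorable arguments pulls the posterior down sharply, which is the "update downwards" intuition in the paragraph above.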
The cognitive/time costs of doing “subjective” analyses likely impose high opportunity costs (going back to the first point);
When your analysis is not legible to other people, you risk high reputational costs (which, again, goes back to the first point).
Based on the above, I agree that in some cases it may be far more efficient, when making decisions under analytical constraints, to use heuristics like trimming off highly “subjective” risk estimates. However, I make this claim on EV grounds, with the recognition that EV is still the better general-purpose decision-making algorithm; it may just not be optimized for application under realistic constraints (e.g., other people not being familiar with your method of thinking, short amounts of time for discussion or research, error-prone brains which do not reliably handle lots of considerations and small numbers).[3]
I dislike using “objective” and “subjective” to make these distinctions, but for simplicity’s sake / for lack of a better alternative at the moment, I will use them.
For more on this thread, see here: https://forum.effectivealtruism.org/posts/WSqLHsuNGoveGXhgz/disentangling-some-important-forecasting-concepts-terms

I advocate for something like this in competitive policy debate, since “fighting EV fire with EV fire” risks “burning the discussion”—including the educational value, reputation of participants, etc. But most deliberations do not have to be made within the artificial constraints of competitive policy debate.