The section on expected value theory seemed unfairly unsympathetic to TUA proponents
The question of what we should do in Pascal’s mugging-type situations seems like a really hard, under-researched problem for which there are not yet any satisfying solutions.
EA research institutes like GPI have put a hugely disproportionate amount of research into this question, relative to the broader field of decision theory. Proponents of the TUA, like Bostrom, were the first to highlight these problems in the academic literature.
Alternatives to expected value have received far less attention in the literature and also have many problems.
E.g., the solution you propose, of having some probability threshold below which we can ignore more speculative risks, also has many issues. For instance, this would seem to invalidate many arguments for the rationality of voting or of political advocacy, such as canvassing for Corbyn or Sanders: the expected value of such activities is high even though the probability of affecting the outcome is often very low (e.g., <1 in 10 million in most US states). Advocating for degrowth also seems extremely unlikely to succeed, given the aims of governments across the world and the preferences of ordinary voters.
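To make the tension concrete, here is a minimal Python sketch of how a probability threshold can exclude an action whose expected value is nonetheless high. All figures are illustrative assumptions of mine, not numbers from the thread:

```python
# Sketch: a tiny probability can be offset by an enormous payoff, so a
# threshold rule can exclude actions that expected value theory endorses.
# All numbers below are made up for illustration.

def expected_value(probability: float, payoff: float) -> float:
    return probability * payoff

def passes_threshold(probability: float, threshold: float = 1e-6) -> bool:
    """Ignore actions whose success probability falls below the threshold."""
    return probability >= threshold

# Voting in a safe US state: roughly a 1-in-10-million chance of being
# decisive, but a very large (hypothetical) payoff if the vote matters.
p_decisive = 1e-7
payoff = 1e10  # hypothetical social value of swinging the outcome

print(expected_value(p_decisive, payoff))  # ~1000: high expected value
print(passes_threshold(p_decisive))        # False: excluded by the rule
```

The point of the sketch is only that the two decision rules disagree on the same inputs; nothing hangs on the particular threshold chosen.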
So, I think framing it as “here is this gaping hole in this worldview” is a bit unfair. Proponents of TUA pointed out the hole and are the main people trying to resolve the problem, and any alternatives also seem to have dire problems.
E.g., the solution you propose, of having some probability threshold below which we can ignore more speculative risks, also has many issues. For instance, this would seem to invalidate many arguments for the rationality of voting or of political advocacy, such as canvassing for Corbyn or Sanders: the expected value of such activities is high even though the probability of affecting the outcome is often very low (e.g., <1 in 10 million in most US states). Advocating for degrowth also seems extremely unlikely to succeed, given the aims of governments across the world and the preferences of ordinary voters.
You seem to assume that voting and engaging in political advocacy are obviously important things to do, and that any argument which says not to bother doing them falls prey to a reductio ad absurdum, but it’s not clear to me why you think that.
If all of these actions do in fact have such an incredibly low probability of a positive payoff that one feels one is in a Pascal’s Mugging when doing them, then one might rationally decide not to do them.
Or perhaps you are imagining a world in which so many people stop voting that democracy falls apart. At some point in that world, though, I’d imagine voting would stop being a Pascal’s Mugging and would instead be associated with a reasonably high probability of a positive payoff.
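A toy model can illustrate that equilibrium intuition: treat a vote as decisive only when the other voters split exactly evenly, with each of n other voters choosing a given side with probability 0.5. The model and numbers are my own simplifying assumptions, not anything from the thread:

```python
# Toy model: my vote is decisive only if the n other voters tie exactly.
# As turnout (n) falls, the chance of a tie -- and hence of my vote
# mattering -- rises. Purely illustrative assumptions.
from math import comb

def p_decisive(n_other_voters: int) -> float:
    """Probability of an exact tie among n fair, independent voters."""
    n = n_other_voters
    if n % 2 == 1:
        return 0.0  # an odd number of other voters cannot tie exactly
    return comb(n, n // 2) / 2 ** n

print(p_decisive(10_000))  # ~0.008: small when turnout is high
print(p_decisive(100))     # ~0.08: an order of magnitude larger
```

So in the low-turnout world the probability of being decisive climbs well above any plausible Pascal’s Mugging threshold, which is the equilibrium the comment gestures at.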
One reason it might be a reductio ad absurdum is that it suggests that in an election in which supporters of one side were rational (and thus would not vote, since each of their votes would have a minuscule chance of mattering) and the others irrational (and would vote, undeterred by the small chance of their vote mattering), the irrational side would prevail.
If this is the claim that John G. Halstead is referring to, I regard it as a throwaway remark (it’s only one sentence plus a citation):
For instance, a simple threshold or plausibility assessment could protect the field’s resources and attention from being directed towards highly improbable or fictional events.
Which alternatives to EV have what problems for what uses in what contexts?
Why do those problems make them worse than EV, a tool that requires the use of numerical probabilities for poorly-defined events often with no precedent or useful data?
What makes all alternatives to EV less preferable to the way EV is usually used in existential risk scholarship today, where subjectively generated probabilities are asserted by “thought leaders” with no methodology and no justification, about events that are neither rigorously defined nor separable, and are then fed into idealized economic models, policy documents, and press packs?
Why is writing a sequence of snarky rhetorical questions preferable to just making counter-arguments?
The argument is too vague to counter: how do you disprove claims about unspecified problems with unspecified tools in unspecified contexts?
There is no snark in this comment, I am simply stating my views as clearly and unambiguously as possible.
I’d like to add that, as someone whose social circle includes both EAs and non-EAs, I have never witnessed reactions as defensive and fragile as those made by some EAs in response to criticism of orthodox EA views. This kind of behaviour simply isn’t normal.