I do find fanaticism problematic at a theoretical level since it suggests spending all your time and resources on quixotic quests.
To clarify, fanaticism would only suggest pursuing quixotic quests if they had the highest EV, and I think this is very unlikely.
Also, the idea that fanaticism doesn't come up in practice doesn't seem quite right to me. On one level, yeah, I've not been approached by a wizard asking for my wallet and do not expect to be. But I'm also not actually likely to be approached by anyone threatening to money-pump me (and even if I were, I could reject the series of bets), and this is often held as a weakness of EV alternatives or certain sets of beliefs.
Money-pumping is not so intuitively repellent, but rejecting EV maximisation in principle (I am fine with rejecting it in practice for instrumental reasons) really does lead to bad actions. If you reject EV maximisation, you could be forced to counterfactually create arbitrarily large amounts of torture. Consider these actions:
Action A. Prevent N days of torture with probability 100%, i.e. prevent N days of torture in expectation.
Action B. Prevent 2*N/p days of torture with probability p, i.e. prevent 2*N days of torture in expectation.
Fanatical EV maximisation would always support B, thus preventing N (= 2*N - N) more days of torture than A in expectation. I think rejecting fanaticism would imply picking A over B for a sufficiently small p, in which case one could be forced to counterfactually create arbitrarily many days of torture (for an arbitrarily large N).
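For concreteness, here is a minimal sketch of the arithmetic behind the two actions above (N and p are just the placeholder quantities from the example, and the values fed in at the end are hypothetical):

```python
# Minimal sketch of the expected-value comparison between actions A and B above.
# N and p are the placeholder quantities from the example; the numbers below are hypothetical.

def expected_days_prevented_a(n_days: float) -> float:
    """Action A: prevent n_days of torture with probability 1."""
    return 1.0 * n_days

def expected_days_prevented_b(n_days: float, p: float) -> float:
    """Action B: prevent 2*n_days/p days of torture with probability p."""
    return p * (2.0 * n_days / p)  # = 2*n_days, no matter how small p is

n_days = 1_000.0
for p in (1e-3, 1e-9, 1e-30):
    print(p, expected_days_prevented_a(n_days), expected_days_prevented_b(n_days, p))
# B always prevents 2*N days in expectation, i.e. N more than A, however tiny p is,
# which is why fanatical EV maximisation always picks B.
```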
A different approach to resist this conclusion is to assert a kind of claim that you must drop your probability in claims of astronomical value.
I believe this is a very sensible approach. I recently commented that:
[...] I think it is often the case that people in EA circles are sensitive to the possibility of astronomical upside (e.g. 10^70 lives), but not to the astronomically low chance of achieving that upside (e.g. a 10^-60 chance of achieving 0 longterm existential risk). I explain this by a natural human tendency not to attribute super low probabilities to events whose mechanics we do not understand well (e.g. surviving the time of perils), such that e.g. people would attribute similar probabilities to a cosmic endowment of 10^50 and 10^70 lives. However, these may have super different probabilities for some distributions. For example, for a Pareto distribution (a power law), the probability density of a given value is proportional to "value"^-(alpha + 1). So, for a tail index of alpha = 1, a value of 10^70 is 10^-40 (= 10^(-2*(70 - 50))) as likely as a value of 10^50. So intuitions that the probability of 10^50 value is similar to that of 10^70 value would be completely off.
One can counter my particular example above by arguing that a power law is a priori implausible, and that we should use a more uninformative prior like a loguniform distribution. However, I feel like the choice of prior would be somewhat arbitrary. For example, the upper bound of the loguniform prior would be hard to define, and would be the major driver of the overall expected value. I think we should proceed with caution if prioritisation hinges on fairly arbitrary choices informed by almost no empirical evidence.
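As a small illustration of this sensitivity to the prior (alpha = 1 and the 10^50 vs 10^70 values are the figures from the quoted comment; the loguniform bounds are hypothetical and are exactly the arbitrary choice at issue):

```python
import math

# Pareto (power-law) prior with tail index alpha: density(v) proportional to v**-(alpha + 1).
alpha = 1.0
v_small, v_large = 1e50, 1e70
pareto_density_ratio = (v_large / v_small) ** (-(alpha + 1))
print(f"Pareto density ratio p(1e70)/p(1e50): {pareto_density_ratio:.0e}")  # 1e-40

# Loguniform prior over [lo, hi]: density proportional to 1/v, so every order of
# magnitude inside the support carries the same probability mass, and the mean,
# (hi - lo)/ln(hi/lo), is driven almost entirely by the (hard to define) upper bound hi.
lo = 1e10  # hypothetical lower bound, purely for illustration
for hi in (1e60, 1e70, 1e80):
    mean_value = (hi - lo) / (math.log(hi) - math.log(lo))
    print(f"hi = {hi:.0e}: mean value under loguniform prior ~ {mean_value:.1e}")
```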
So I agree fanaticism can be troubling, but only in practice (e.g. due to overly high probabilities of large upside), not in principle.
I don't think this ["All the other problems attributed to expected utility maximisation only show up if one postulates the possibility of unbounded or infinite value, which I do not think makes sense"] is true. As I said in response to Michael St. Jules in the comments, EV maximisation (and EV with rounding down, unless it's modified here too) also argues for a kind of edge-case fanaticism, where, provided a high enough EV if successful, you are obligated to take an action that's 50.000001% positive in expectation even if the downside is similarly massive.
It's really not clear to me that the rational thing to do is to consistently bet on actions that would impact a lot of possible lives but have, say, a ~0.0001% chance of making a difference, are net positive in expectation, and yet have a ~49.999999% chance of causing lots of harm. This seems like a problem even within a finite and bounded utility function for pure EV.
I think these cases are much less problematic than the alternative. In the situations above, one would still be counterfactually producing arbitrarily large amounts of welfare by pursuing EV maximisation. By rejecting it, one could be forced to counterfactually produce arbitrarily large amounts of torture. In any case, I do not think situations like the above are found in practice.
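To make the kind of near-coin-flip bet under discussion concrete (the probabilities are the illustrative figures from the quoted comment; the payoff magnitude is hypothetical):

```python
# Sketch of the near-coin-flip bet described above. The probabilities are the
# illustrative figures from the quoted comment; the payoff magnitude is hypothetical.
p_good = 0.50000001   # chance of a massive positive outcome
p_bad = 0.49999999    # chance of a similarly massive negative outcome
payoff = 1e15         # hypothetical size of the upside/downside (e.g. lives affected)

expected_value = p_good * payoff - p_bad * payoff  # = (p_good - p_bad) * payoff
print(expected_value)  # ~2e7 > 0, so pure EV maximisation takes the bet,
                       # however slim the edge and however massive the symmetric downside
```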
I've not polled internally, but I don't think the non-hedonic benefits issue is a driving force inside RP. Speaking for myself, I do think hedonism makes up more than half of what makes things valuable, at least in part for the reasons outlined in that post.
Thanks for clarifying!
The reasons we work across areas in general are differences in the amount of money in the areas, the number of influenceable actors, the non-fungibility of the resources in the spaces (both money and talent), and moral and decision-theoretic uncertainty.
Nice context! I would be curious to see a quantitative investigation of how much RP should be investing in each area accounting for the factors above, and the fact that the marginal cost-effectiveness of the best animal welfare interventions is arguably much higher than that of the best GHD interventions. Investing in animal welfare work could also lead to more outside investment (of both money and talent) in the area down the line, but I assume you are already trying to account for this in your allocation.
In this particular comparison of GHD and AW, there are hundreds of millions more plausibly influenceable dollars in the GHD space than in the AW space. For example, GiveWell obviously isn't going to shift their resources to animal welfare, but they still move a lot of money and could do so more effectively in certain cases.
I wonder how much of GiveWell's funding is plausibly influenceable. Open Phil has been one of its major funders, is arguably cause neutral, and is open to being influenced by Rethink, having at least partly funded (or was it ~fully funded?) Rethink's moral weight project. From my point of view, if people at Rethink generally believe the best AW interventions increase welfare much more cost-effectively than GiveWell's top charities, I would guess influencing Open Phil to spend less on GHD and more on AW would be a quite cost-effective endeavour.
One important reason I am less enthusiastic about GHD is that I am confused about whether saving/extending lives is beneficial/harmful. I recently commented that:
I think this ["[Rethink's] cost-effectiveness models include only first-order effects of spending on each cause. It's likely that there are interactions between causes and/or positive and negative externalities to spending on each intervention"] is an important point. The meat-eater problem may well imply that life-saving interventions are harmful. I estimated it reduces the cost-effectiveness of GiveWell's top charities by 8.72% based on the suffering linked to the current consumption of poultry in the countries targeted by GiveWell, adjusted upwards to include the suffering caused by other farmed animals. On the one hand, the cost-effectiveness reduction may be lower due to animals in low income countries generally having better lives than broilers in a reformed scenario. On the other, the cost-effectiveness reduction may be higher due to future increases in the consumption of farmed animals in the countries targeted by GiveWell. I estimated the suffering of farmed animals globally is 4.64 times the happiness of humans globally, which suggests saving a random human life leads to a nearterm reduction in welfare.
Has the WIT considered analysing under which conditions saving lives is robustly good after accounting for effects on farmed animals? This would involve forecasting the consumption and conditions of farmed animals (e.g. in the countries targeted by GiveWell). Saving lives would tend to be better in countries where the consumption of factory-farmed crayfish, crabs, lobsters, fish, chicken and shrimp is predicted to peak and then decline sooner, or in countries which are predicted to have good conditions for these animals (which I guess account for most of the suffering of farmed animals).
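A very rough sketch of the kind of condition such an analysis could formalise (every parameter and number below is hypothetical, and would have to come from forecasts of consumption, conditions and welfare ranges in the relevant countries):

```python
def nearterm_welfare_of_saving_a_life(
    human_welfare_per_year: float,     # welfare the saved person gains per year of life
    years_of_life_gained: float,       # e.g. from life tables for the target country
    animal_suffering_per_year: float,  # welfare-adjusted farmed-animal suffering caused
                                       # by the saved person's consumption each year
) -> float:
    """Nearterm net welfare of saving one life, ignoring wild-animal effects."""
    return years_of_life_gained * (human_welfare_per_year - animal_suffering_per_year)

# Saving a life is nearterm net positive exactly when human_welfare_per_year exceeds
# animal_suffering_per_year, so the analysis boils down to forecasting how that
# suffering term evolves (e.g. when consumption of factory-farmed animals peaks and
# declines, and how good or bad their conditions are).
print(nearterm_welfare_of_saving_a_life(1.0, 50.0, 0.5))  # hypothetical numbers: 25.0
```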
Ideally, one would also account for effects on wild animals. I think these may well be the major driver of the changes in welfare caused by GiveWell's top charities, but they are harder to analyse due to the huge uncertainty involved in assessing the welfare of wild animals.
You said:
Additionally, you highlight that AW looks more cost-effective than GHD, but you did not note that AMF looked pretty robustly positive across different decision theories, and this was not true of, say, any of the x-risk interventions we considered in the series or some of the animal interventions. So, one additional reason to do GHD work is the robustness of the value proposition.
For reasons like the ones I described in my comment just above (section 4 of Maximal cluelessness has more), I actually think AW interventions, at least ones which mostly focus on improving the conditions of animals (as opposed to reducing consumption), are more robustly positive than x-risk or GHD interventions.
Ultimately, though, I'm still unsure about what the right overall approach is to these types of trade-offs, and I hope further work from WIT can help clarify how best to make these trade-offs between areas.
Likewise. Looking forward to further work! By the way, is it possible to donate specifically to a single area of Rethink? If so, would the money flow across areas be negligible, such that one would not be donating in practice to Rethink's overall budget?
Thanks for the reply, Marcus!