I do find fanaticism problematic at a theoretical level since it suggests spending all your time and resources on quixotic quests. I would go one further and say that if a set of axioms implies something like fanaticism, this should at least potentially count against that combination of axioms. That said, I definitely think, as Hayden Wilkinson pointed out in his In Defence of Fanaticism paper, there are many weaknesses with alternatives to EV.
Also, the idea that fanaticism doesn’t come up in practice doesn’t seem quite right to me. On one level, yeah, I’ve not been approached by a wizard asking for my wallet and do not expect to be. But I’m also not actually likely to be approached by anyone threatening to money-pump me (and even if I were, I could reject the series of bets), and this is often held up as a weakness of EV alternatives or certain sets of beliefs. On another level, to the extent we can say fanatical claims don’t come up in practice, I think it is in some sense because we’ve already decided they’re not worth pursuing and discount the possibility, including the possibility of going looking for actions that would be fanatical.* Within the logic of EV, even if you were ~99% certain there weren’t any ways to get the fanatical result, it would seem you’d need to be ~100% certain to fully shut the door on at least expending resources to see whether the fanatical option is available. To the extent we don’t go around doing that, I think it’s largely because we are practically rounding down those fanatical possibilities to 0 without consideration (to be clear, I think this is the right approach).
All the other problems attributed to expected utility maximisation only show up if one postulates the possibility of unbounded or infinite value, which I do not think makes sense
I don’t think this is true. As I said in response to Michael St. Jules in the comments, EV maximization (and EV with rounding down, unless it’s modified here too) also argues for a kind of edge-case fanaticism, where, provided the payoff if successful is high enough, you are obligated to take an action with a 50.000001% chance of a hugely positive outcome even if the other 49.999999% carries a similarly massive downside.
It’s really not clear to me that the rational thing to do is to consistently bet on actions that would impact a lot of possible lives but have only, say, a ~0.0001% chance of making a difference, and that are net positive in expectation but have a ~49.999999% chance of causing lots of harm. This seems like a problem even within a finite and bounded utility function for pure EV.
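As a purely illustrative sketch of the kind of near-coin-flip gamble described above (the probabilities are from the example; the payoffs are made-up numbers):

```python
# Toy example only: a near-coin-flip gamble with huge, roughly symmetric stakes.
p_good = 0.50000001            # chance of the hugely positive outcome
value_if_good = 1e12           # hypothetical welfare units if it goes well
value_if_bad = -1e12           # a similarly massive downside if it goes badly

expected_value = p_good * value_if_good + (1 - p_good) * value_if_bad
print(expected_value)          # ~ +20,000: positive in expectation, so pure EV maximization takes the bet
```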
I am confused about why RP is still planning to invest significant resources in global health and development… Maybe a significant fraction of RP’s team believes non-hedonic benefits to be a major factor?
I’ve not polled internally but I don’t think the non-hedonic benefits issue is a driving force inside RP. Speaking for myself, I do think hedonism makes up more than half of what makes things valuable, at least in part for the reasons outlined in that post.
In general, we work across areas because of differences in the amount of money in the areas, the number of influenceable actors, the non-fungibility of the resources in the spaces (both money and talent), and moral and decision-theoretic uncertainty.
In this particular comparison of GHD and AW, there are hundreds of millions more plausibly influenceable dollars in the GHD space than in the AW space. For example, GiveWell obviously isn’t going to shift their resources to animal welfare, but they still move a lot of money and could do so more effectively in certain cases. GiveWell alone likely moves more money than all of the farm animal welfare spending in the world by non-governmental actors combined, and that includes a large number of animal actors I think it’s not plausible to affect with research. Further, I think most people who work in most spaces aren’t “cause neutral” and, for example, the counterfactual for all our GHD researchers isn’t being paid by RP to do AW research that influences even a fraction of the money they could influence in GHD.
Additionally, you highlight that AW looks more cost-effective than GHD, but you did not note that AMF looked pretty robustly positive across different decision theories, and this was not true of, say, any of the x-risk interventions we considered in the series or of some of the animal interventions. So one additional reason to do GHD work is the robustness of the value proposition.
Ultimately, though, I’m still unsure about what the right overall approach is to these types of trade-offs and I hope further work from WIT can help clarify how best to make these tradeoffs between areas.
*A different approach to resisting this conclusion is to assert a kind of claim that you must drop your probability in claims of astronomical value, and that this always balances out increases in claims of value such that it’s never rational within EV to act on these claims. I’m not certain this is wrong but, like with other approaches to this issue, within the logic of EV it seems you need to be at ~100% certainty this is correct to not pursue fanatical claims anyway. You could say in reply that the rules of EV reasoning don’t apply to claims about how you should reason about EV itself, and maybe that’s right and true. But these sure seem like patches on a theory with weaknesses, not clear truths anyone is compelled to accept on pain of being irrational. Kludges and patches on theories are fine enough. It’s just not clear to me this possible move is superior to, say, just accepting that you need to do rounding down to avoid this type of outcome.
I do find fanaticism problematic at a theoretical level since it suggests spending all your time and resources on quixotic quests.
To clarify, fanaticism would only suggest pursuing quixotic quests if they had the highest EV, and I think this is very unlikely.
Also, the idea that fanaticism doesn’t come up in practice doesn’t seem quite right to me. On one level, yeah, I’ve not been approached by a wizard asking for my wallet and do not expect to be. But I’m also not actually likely to be approached by anyone threatening to money-pump me (and even if I were, I could reject the series of bets), and this is often held up as a weakness of EV alternatives or certain sets of beliefs.
Money-pumping is not so intuitively repellent, but rejecting EV maximisation in principle (I am fine with rejecting it in practice for instrumental reasons) really leads to bad actions. If you reject EV maximisation, you could be forced to counterfactually create arbitrarily large amounts of torture. Consider these actions:
Action A. Prevent N days of torture with probability 100 %, i.e. prevent N days of torture in expectation.
Action B. Prevent 2*N/p days of torture with probability p, i.e. prevent 2*N days of torture in expectation.
Fanatical EV maximisation would always support B, thus preventing N (= 2*N − N) more days of torture in expectation relative to A. I think rejecting fanaticism would imply picking A over B for a sufficiently small p, in which case one could be forced to counterfactually create arbitrarily many days of torture (for an arbitrarily large N).
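As a minimal sketch of this comparison, with a made-up N and a few made-up values of p (none of these figures come from the comment itself):

```python
# Illustrative numbers only: expected days of torture prevented by actions A and B.
N = 1000                          # hypothetical scale of action A
for p in (1e-2, 1e-10, 1e-30):
    ev_a = N                      # A: prevent N days with certainty
    ev_b = p * (2 * N / p)        # B: prevent 2*N/p days with probability p
    print(f"p = {p:.0e}: EV(A) = {ev_a}, EV(B) = {ev_b}")
# EV(B) = 2*N for every p (up to floating-point rounding), so fanatical EV maximisation
# always picks B, however small p (and however astronomical 2*N/p) becomes.
```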
A different approach to resisting this conclusion is to assert a kind of claim that you must drop your probability in claims of astronomical value
I believe this is a very sensible approach. I recently commented that:
[...] I think it is often the case that people in EA circles are sensitive to the possibility of astronomical upside (e.g. 10^70 lives), but not to the astronomically low chance of achieving that upside (e.g. a 10^-60 chance of achieving 0 longterm existential risk). I explain this by a natural human tendency not to attribute super low probabilities to events whose mechanics we do not understand well (e.g. surviving the time of perils), such that e.g. people would attribute similar probabilities to a cosmic endowment of 10^50 and 10^70 lives. However, these may have super different probabilities for some distributions. For example, for a Pareto distribution (a power law), the probability density of a given value is proportional to “value”^-(alpha + 1). So, for a tail index of alpha = 1, a value of 10^70 is 10^-40 (= 10^(-2*(70 − 50))) times as likely as a value of 10^50. So intuitions that the probability of 10^50 value is similar to that of 10^70 value would be completely off.
One can counter my particular example above by arguing that a power law is a priori implausible, and that we should use a more uninformative prior like a loguniform distribution. However, I feel like the choice of the prior would be somewhat arbitrary. For example, the upper bound of the prior loguniform distribution would be hard to define, and would be the major driver of the overall expected value. I think we should proceed with caution if prioritisation is hinging on decently arbitrary choices informed by almost no empirical evidence.
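As a small sketch of the arithmetic above, using the illustrative figures from the quote (10^50 vs 10^70 lives, Pareto tail index alpha = 1) alongside a loguniform prior whose lower bound of 1 is my own arbitrary choice, to show how its expected value is driven by the upper bound:

```python
import math

# Pareto tail with index alpha = 1: density proportional to value**-(alpha + 1) = value**-2.
alpha = 1
density_ratio = (1e70 / 1e50) ** (-(alpha + 1))
print(f"Pareto density ratio, 10^70 vs 10^50 lives: {density_ratio:.0e}")   # 1e-40

# Loguniform prior on [1, b]: density proportional to 1/x, so E[X] = (b - 1) / ln(b).
# The expected value is dominated by the choice of upper bound b.
for b in (1e50, 1e60, 1e70):
    expected_value = (b - 1) / math.log(b)
    print(f"upper bound {b:.0e} -> E[X] ~ {expected_value:.1e}")
```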
So I agree fanaticism can be troubling. However, just in practice (e.g. due to overly high probabilities of large upside), not in principle.
I don’t think this [“All the other problems attributed to expected utility maximisation only show up if one postulates the possibility of unbounded or infinite value, which I do not think makes sense”] is true. As I said in response to Michael St. Jules in the comments, EV maximization (and EV with rounding down, unless it’s modified here too) also argues for a kind of edge-case fanaticism, where, provided the payoff if successful is high enough, you are obligated to take an action with a 50.000001% chance of a hugely positive outcome even if the other 49.999999% carries a similarly massive downside.
It’s really not clear to me that the rational thing to do is to consistently bet on actions that would impact a lot of possible lives but have only, say, a ~0.0001% chance of making a difference, and that are net positive in expectation but have a ~49.999999% chance of causing lots of harm. This seems like a problem even within a finite and bounded utility function for pure EV.
I think these cases are much less problematic than the alternative. In the situations above, one would still be counterfactually producing arbitrarily large amounts of welfare by pursuing EV maximisation. By rejecting it, one could be forced to counterfactually produce arbitrarily large amounts of torture. In any case, I do not think situations like the above are found in practice.
I’ve not polled internally but I don’t think the non-hedonic benefits issue is a driving force inside RP. Speaking for myself, I do think hedonism makes up more than half of what makes things valuable, at least in part for the reasons outlined in that post.
Thanks for clarifying!
In general, we work across areas because of differences in the amount of money in the areas, the number of influenceable actors, the non-fungibility of the resources in the spaces (both money and talent), and moral and decision-theoretic uncertainty.
Nice context! I would be curious to see a quantitative investigation of how much RP should be investing in each area accounting for the factors above, and the fact that the marginal cost-effectiveness of the best animal welfare interventions is arguably much higher than that of the best GHD interventions. Investing in animal welfare work could also lead to more outside investment (of both money and talent) in the area down the line, but I assume you are already trying to account for this in your allocation.
In this particular comparison of GHD and AW, there are hundreds of millions more plausibly influenceable dollars in the GHD space than in the AW space. For example, GiveWell obviously isn’t going to shift their resources to animal welfare, but they still move a lot of money and could do so more effectively in certain cases.
I wonder how much of GiveWell’s funding is plausibly influenceable. Open Phil has been one of its major funders, is arguably cause neutral, and is open to being influenced by Rethink, having at least partly funded (or was it ~fully funded?) Rethink’s moral weight project. From my point of view, if people at Rethink generally believe the best AW interventions increase welfare much more cost-effectively than GiveWell’s top charities, I would guess influencing Open Phil to spend less on GHD and more on AW would be a quite cost-effective endeavour.
One important reason I am less enthusiastic about GHD is that I am confused about whether saving/extending lives is beneficial/harmful. I recently commented that:
I think this [“[Rethink’s] cost-effectiveness models include only first-order effects of spending on each cause. It’s likely that there are interactions between causes and/or positive and negative externalities to spending on each intervention”] is an important point. The meat-eater problem may well imply that life-saving interventions are harmful. I estimated it reduces the cost-effectiveness of GiveWell’s top charities by 8.72 %, based on the suffering linked to the current consumption of poultry in the countries targeted by GiveWell, adjusted upwards to include the suffering caused by other farmed animals. On the one hand, the cost-effectiveness reduction may be lower due to animals in low-income countries generally having better lives than broilers in a reformed scenario. On the other, the cost-effectiveness reduction may be higher due to future increases in the consumption of farmed animals in the countries targeted by GiveWell. I estimated the suffering of farmed animals globally is 4.64 times the happiness of humans globally, which suggests saving a random human life leads to a nearterm reduction in welfare.
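As a toy restatement of the two figures quoted above (the 8.72 % reduction and the 4.64 ratio come from the quote; the units are arbitrary and the sketch assumes the global suffering-to-happiness ratio applies to the marginal life saved):

```python
# Toy restatement of the quoted figures; units are arbitrary.
baseline_ce = 1.0                      # cost-effectiveness of a GiveWell top charity
meat_eater_reduction = 0.0872          # 8.72 % reduction from poultry-related suffering
print(baseline_ce * (1 - meat_eater_reduction))    # ~0.913: adjusted cost-effectiveness

# Sign of the nearterm welfare effect of saving a random human life, if farmed animal
# suffering globally is 4.64 times human happiness globally.
human_happiness = 1.0
farmed_animal_suffering = 4.64 * human_happiness
print(human_happiness - farmed_animal_suffering)   # ~ -3.64: a nearterm reduction in welfare
```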
Has the WIT considered analysing under which conditions saving lives is robustly good after accounting for effects on farmed animals? This would involve forecasting the consumption and conditions of farmed animals (e.g. in the countries targeted by GiveWell). Saving lives would tend to be better in countries where the peak and subsequent decline in the consumption of factory-farmed crayfish, crabs, lobsters, fish, chicken and shrimp are predicted to happen sooner, or in countries which are predicted to have good conditions for these animals (which I guess account for most of the suffering of farmed animals).
Ideally, one would also account for effects on wild animals. I think these may well be the major driver of the changes in welfare caused by GiveWell’s top charities, but they are harder to analyse due to the huge uncertainty involved in assessing the welfare of wild animals.
You said:
Additionally, you highlight that AW looks more cost-effective than GHD, but you did not note that AMF looked pretty robustly positive across different decision theories, and this was not true of, say, any of the x-risk interventions we considered in the series or of some of the animal interventions. So one additional reason to do GHD work is the robustness of the value proposition.
For reasons like the ones I described in my comment just above (section 4 of Maximal cluelessness has more), I actually think AW interventions, at least ones which mostly focus on improving the conditions of animals (as opposed to reducing consumption), are more robustly positive than x-risk or GHD interventions.
Ultimately, though, I’m still unsure about what the right overall approach is to these types of trade-offs and I hope further work from WIT can help clarify how best to make these tradeoffs between areas.
Likewise. Looking forward to further work! By the way, is it possible to donate specifically to a single area of Rethink? If so, would the money flow across areas be negligible, such that one would not be donating in practice to Rethink’s overall budget?