How fungible do donations to anti-AI-xrisk charities tend to be with the broader pool of EA money?
For context: I tend to expect anti-AI-xrisk donations to be the highest-value ones I can make. My workplace offers $10,000 per year of donation matching to a limited set of charities. None of those charities, as far as I've been able to find in recent searches, does direct anti-AI-xrisk work. However, one of the charities eligible for matching is Effective Ventures, which sits elsewhere in EA space. This leaves me with a question: for my first $10,000 of donations next year, should I expect better value from a matched donation to Effective Ventures or from a non-matched donation to my anti-AI-xrisk charity of choice?
As far as I can tell, the answer comes down to a sub-question: what fungibility patterns are at play? If I can reasonably expect that money I put into Effective Ventures will cause other donors to put enough of the money they'd otherwise have put there into the anti-AI-xrisk field instead, I should do the matched donation to Effective Ventures; if I can't reasonably expect that, I should just donate to the anti-AI-xrisk field directly.
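To make that concrete, here's a toy version of the comparison; the variables and numbers are mine, introduced purely for illustration, not taken from any actual analysis. Let $D$ be my donation ($10,000), so the match means Effective Ventures receives $2D$. Let $f$ be the fraction of that marginal funding which ends up effectively fungible into anti-AI-xrisk work, and let $r$ be the value I'd place on a non-fungible Effective Ventures dollar relative to an anti-AI-xrisk dollar. Then the matched route is worth roughly $2D\,[f + (1 - f)\,r]$ in anti-AI-xrisk-equivalent dollars, versus $D$ for the direct route, so the match wins when

$$f \;\ge\; \frac{1 - 2r}{2(1 - r)}.$$

At $r = 0$ that's a break-even fungibility of one half; at $r = 0.25$ it's one third. So the practical question isn't whether the money is fungible at all, but whether it's fungible enough to clear a bar somewhere around that range.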
Have the patterns here been studied at all? I know GiveWell has made some attempts to incorporate that sort of fungibility effect into their own models, but haven’t seen it discussed in much detail outside of the GiveWell context.
A worry I have about your model is the conflation of ‘pain’ and ‘negative value of experience’.
On a superficial level: pain asymbolia exists, masochism exists, many people actively enjoy spicy food (and keep very-hot sauce around to put on their own food on a regular basis out of enjoyment, not just as a novelty), et cetera. There are people who don’t experience pain as negatively-valenced, or do so only in limited circumstances, and running those people’s pain together with more direct emotional-state-related concerns such as grieving or depression is going to lead to confused results.
But let’s grant that pain asymbolia and masochism and so forth are unusual and probably not direct factors in most people’s feelings about most pain. Still, their existence points at an important deeper truth: there are cognitive indirection-layers between sensory experiences and the values placed thereupon. And thus there’s room for the logarithm to be re-flattened to some extent—albeit not necessarily all the way—through those indirection-layers. It’s possible for someone to be in ten times as much pain-as-sensory-experience without experiencing ten times as much unpleasantness from it, even if they’re still experiencing more unpleasantness from it. And my default intuition is to expect that abstract structure to be far more widespread than its particularly-extreme instantiations might be.
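To put that in toy-model terms (the functional form here is mine and purely illustrative): suppose the sensory intensity of pain is $p$ and the unpleasantness actually experienced is $u = g(p)$ for some increasing but concave $g$, say $g(p) = \log(1 + p)$. Then $u$ still rises whenever $p$ does, but multiplying $p$ by ten multiplies $u$ by less than ten, and usually by much less once $p$ isn't tiny; the indirection layer re-flattens part of whatever scaling holds at the sensory level.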
(For one thing, the entire concept of the hedonic treadmill rests on that sort of indirection-layer existing—change in valenced response to a stimulus with repetition even as the stimulus holds constant—and as far as I know the hedonic treadmill is somewhere close-ish to a human universal. So it seems very unlikely that there’s any large fraction of the population for which such an indirection-layer doesn’t exist.)
Similar but milder concerns apply to the conflation of ‘pleasure’ and ‘positive value of experience’. I suspect that conflation might cause sexual experiences to be overweighted in your sample: ‘pleasure’ is commonly used to refer not to positively-valenced experience in the abstract but to specific sorts of sensory experience associated with sex, and then the same issue is likely to apply to those cases, namely that there’s an indirection between the sensory experience and its emotional impact in which the logarithmic curve can be re-flattened to some extent.