Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory-farming interventions are more important; but given the choice between a “safe, guaranteed to help” fund and a “moonshot” fund, I would definitely donate to the latter. Dividing up by cause area does not accurately separate donation targets along the lines I am most confident about (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than to a fund for a specific cause area, because I’m likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that’s okay.
Some possible axes:
life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
safe bets vs. moonshots
suffering-focused vs. “classical”
short-term vs. far future
That said, covering all possible combinations along just these axes would require 16 funds (two options on each of four axes), so in practice this won’t work exactly as I’ve described.
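Just to spell out where the 16 comes from: each of the four axes is a binary choice, so the fund count multiplies. A minimal sketch (the short axis labels in the dictionary are placeholders I’ve picked for illustration, nothing official):

```python
from itertools import product

# Each axis from the list above, treated as an independent binary choice.
axes = {
    "who benefits": ["life-improving", "life-saving"],
    "risk profile": ["safe bets", "moonshots"],
    "value theory": ["suffering-focused", "classical"],
    "time horizon": ["short-term", "far future"],
}

# Every combination of positions would need its own fund: 2 * 2 * 2 * 2 = 16.
combinations = list(product(*axes.values()))
print(len(combinations))  # -> 16
for combo in combinations:
    print(" / ".join(combo))
```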
Great idea. This makes sense to me.
Yup! I’ve always seen ‘animals v poverty v xrisk’ not as three random areas, but as three optimal areas given different philosophies:
poverty = only short term
animals = all conscious suffering matters + only short term
xrisk = long term matters
I’d be happy to see other philosophical positions considered.
Mostly agree, but you need a couple more assumptions to make that work.
poverty = person-affecting view of population ethics or pure time discounting + belief that poverty relief is the best way to increase well-being (I’m not sure it is; see my old forum post).
Also, you could split poverty (things like GiveDirectly) from global health (AMF, SCI, etc.). You probably need a person-affecting view or pure time discounting if you support health over x-risk, unless you’re just really sceptical about x-risks.
animals = I think animals are only a priority if you believe in an impersonal population ethic like totalism (maximise happiness over the history of the universe, hence creating happy life is good), and you either do pure time discounting or you’re suffering-focused (i.e. unhappiness counts more than happiness).
If you’re a straightforward presentist (holding a person-affecting population ethic on which only presently existing things count), which is what you might mean by ‘short term’, you probably shouldn’t focus on animals. Why? Animal welfare reforms don’t benefit the presently existing animals, but the next generation of animals, who don’t count on presentism because they don’t presently exist.
Good point on the axes. I think we would, in practice, get fewer than 16 funds for a couple of reasons.
It’s hard to see how some funds would, in practice, differ. For instance, is AI safety a moonshot or a safe bet if we’re thinking about the far future?
The life-saving vs. life-improving point only seems relevant if you’ve already signed up to a person-affecting view. Talking about ‘saving lives’ of people in the far future is a bit strange (although you could distinguish between a far-future fund that tried to reduce x-risk and one that invested in ways to make future people happier, such as genetic engineering).