Thanks for this, Kerry; I very much appreciate the update.
Three funds I’d like to see:
1. A ‘life-improving’ or ‘quality of life’-type fund that tries to find the best way to increase the happiness of people whilst they are alive. My view on morality leads me to think that is what matters most. This is also the area I do my research on, so I’d be very enthusiastic to help whoever the fund manager was.
2. A systemic change fund. Part of the appeal would be reputational (i.e. no one could then complain that EAs don’t take systemic change seriously); another part is that I’d really like to see what the fund manager would choose to give money to if it had to go to systemic change. I feel that would be a valuable learning experience.
3. A ‘moonshots’ fund that supported high-risk, potentially high-reward projects. For reasons similar to 2, I think this would be a really useful way for us to learn.
My general thought is the more funds the better, presuming you can find sufficiently qualified people to run them. It has the positive effect of demonstrating EA’s openness and diversity, which should mollify our critics. As mentioned, it provides chances to learn stuff. And it strikes me as unlikely that new funds would divert much money away from the current options. Suppose we had an EA environmentalism fund. I assume the people who would donate to that wouldn’t have been donating to, say, the health fund already; they’d probably be supporting green charities instead.
Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a “safe, guaranteed to help” fund and a “moonshot” fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than a fund for a specific cause area, because I’m likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that’s okay.
Some possible axes:
life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
safe bets vs. moonshots
suffering-focused vs. “classical”
short-term vs. far future
Having all possible combinations along just these axes would already require 2^4 = 16 funds, though, so in practice this won’t work exactly as I’ve described.
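Purely to make that arithmetic explicit, here is a minimal, illustrative sketch (using the axis labels from the list above, not anything anyone has proposed implementing) that enumerates one option from each of the four binary axes, giving 2 × 2 × 2 × 2 = 16 combinations:

```python
from itertools import product

# The four binary axes proposed above (labels copied from the list).
axes = [
    ("life-improving", "life-saving"),
    ("safe bets", "moonshots"),
    ("suffering-focused", "classical"),
    ("short-term", "far future"),
]

# One option per axis: 2 * 2 * 2 * 2 = 2**4 = 16 hypothetical funds.
combinations = list(product(*axes))
print(len(combinations))  # -> 16

for combo in combinations:
    print(" / ".join(combo))
```

Merging or dropping axes, as discussed below, is what would bring the number of funds down to something practical.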
Great idea. This makes sense to me.
Yup! I’ve always seen ‘animals v poverty v xrisk’ not as three random areas, but as three optimal areas given different philosophies:
poverty = only short term
animals = all conscious suffering matters + only short term
xrisk = long term matters
I’d be happy to see other philosophical positions considered.
Mostly agree, but you need a couple more assumptions to make that work.
poverty = a person-affecting view of population ethics or pure time discounting + the belief that poverty relief is the best way to increase well-being (I’m not sure it is; see my old forum post).
Also, you could split poverty (things like GiveDirectly) from global health (AMF, SCI, etc.). You probably need a person-affecting view or pure time discounting if you support health over x-risk, unless you’re just really sceptical about x-risks.
animals = I think animals are only a priority if you believe in an impersonal population ethic like totalism (maximise happiness over the history of the universe, hence creating happy life is good), and you either do pure time discounting or you’re suffering-focused (i.e. unhappiness counts more than happiness).
If you’re a straightforward presentist (holding the person-affecting population ethic on which only presently existing beings count), which is what you might mean by ‘short term’, you probably shouldn’t focus on animals. Why? Animal welfare reforms don’t benefit the presently existing animals but the next generation of animals, who don’t count on presentism as they don’t presently exist.
Good point on the axes. I think we would, in practice, get fewer than 16 funds, for a couple of reasons:
It’s hard to see how some funds would, in practice, differ. For instance, is AI safety a moonshot or a safe bet if we’re thinking about the future?
The life-saving vs life-improving point only seems relevant if you’ve already signed up to a person-affecting view. Talking about ‘saving lives’ of people in the far future is a bit strange (although you could distinguish between a far future fund that tried to reduce X-risk vs one that invested in ways to make future people happier, such as genetic engineering).
Hey Michael, great ideas. I’d like to see all of these as well. My concern would just be whether there are charities available to fund in these areas. Do you have some potential grant recipients for these funds in mind?
Hello Kerry. Building on what Michael Dickens said, I now think the funds need to be more tightly specified before we can pick the most promising recipients within each. For instance, imagine we have a ‘systemic change’ fund: presumably a totalist systemic change fund would be different from a person-affecting, life-improving one. It’s possible they might consider the same things top targets, but more work would be required to show that.
Narrowing down then:
Suppose we had a life-improving fund using safe bets. I think charities like Strong Minds and Basic Needs (mental health orgs) are good contenders, although I can’t comment on their organisational efficiency.
Suppose we had a life-improving fund doing systemic change. I assume this would be trying to bring about political change via government policies, either at the domestic or international level. I can think of a few areas that look good, such as mental health policy, increasing access to pain relief in developing countries, and international drug policy reform. However, I can’t name and endorse particular orgs, as I haven’t yet narrowed down which sub-causes I think are most promising.
Suppose we had a life-improving moonshots fund. If this is going to be different from the one above, I imagine it would be looking for start-ups, maybe a bit like EA Ventures did. I can’t think of anything relevant to suggest here apart from the start-up I work on (the quality of which I can’t hope to be objective about). Perhaps this fund could also look at starting new charities, rather than only funding existing ones.
I don’t think not knowing who you’d give money to in advance is a reason not to pursue this further. For instance, I would consider donating to some type of moonshots fund precisely because I had no idea where the money would go, and I’d like to see someone (else) try to figure it out. Once they’d made their decisions, we could build on their analysis and learn stuff.