Since it looks like you’re looking for an opinion, here’s mine:
To start, while I deeply respect GiveWell’s work, I find it hard to believe that any GiveWell top charity is worth donating to if you’re pursuing the typical EA project of maximizing the value of your donations in a scope-sensitive and impartial way.
…Additionally, I don’t think other x-risks matter nearly as much as AI risk work (though admittedly a lot of biorisk stuff is now focused on AI-bio intersections).
Instead, I think the main difficult judgement call in EA cause prioritization right now is “neglected animals” (e.g., invertebrates, wild animals) versus AI risk reduction.
AFAICT this also seems to be somewhat close to the overall view of the EA Forum, as you can see in some of the debate weeks (animals smashed humans) and the Donation Election (where neglected animal orgs took all the top spots, followed by PauseAI).
This comparison is made especially difficult because OP funds a lot of AI work but none of the neglected-animal work, which subjects the AI side to significantly diminished marginal returns.
To be clear, AI orgs still do need money. I think there’s a vibe that all the AI organizations that can be funded by OpenPhil are fully funded and thus AI donations are not attractive to individual EA Forum donors. This is not true. I agree that their highest-priority work is fully funded, and thus the marginal cost-effectiveness of donations is reduced. But this marginal cost-effectiveness is not eliminated, and it can still be high. I think there are quite a few AI orgs that are still primarily limited by money and would do great things with more funding. Additionally, it’s not healthy for these orgs to be so heavily reliant on OpenPhil support.
So my overall guess is that if you think AI work is, in the abstract, only 10x (or less) as important as work on neglected animals, you should donate to the neglected animals because of this diminishing-marginal-returns issue (see the toy sketch below).
I currently lean a bit towards AI being >10x neglected animals, and therefore I want to donate to AI stuff, but I really don’t think this is settled; it needs more research, and it’s very reasonable to conclude the other way.
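To make the arithmetic concrete, here’s a minimal toy sketch of that decision rule. All numbers in it (the 10x multiplier, the funding-saturation levels) are made-up assumptions for illustration, not estimates from this comment:

```python
# Toy model only: every number here is a hypothetical assumption.
# It shows how a raw "AI is 10x more important" multiplier can be
# cancelled out by diminishing marginal returns once a large funder
# has already taken the highest-value grants in that space.

def marginal_value(importance, funding_saturation):
    """Marginal value of a donation: abstract importance of the cause,
    discounted by how much of the best work is already funded."""
    return importance * (1 - funding_saturation)

# Hypothetical inputs: AI is 10x as important in the abstract, but 90%
# of the best AI work is already funded, versus almost none of the
# neglected-animal work.
ai = marginal_value(importance=10.0, funding_saturation=0.9)      # 1.0
animals = marginal_value(importance=1.0, funding_saturation=0.0)  # 1.0

print(ai, animals)  # at exactly 10x the options break even on the margin;
                    # below 10x, the neglected-animal donation wins
```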
~
Ok so where to donate? I don’t have a good systematic take in either the animal space or the AI space unfortunately, but here’s a shot:
For starters, in the AI space, a big issue for individual donors is that unfortunately it’s very hard to properly evaluate AI organizations without a large stack of private information that is hard to come by. This private info has greatly changed my view of which organizations are good in the AI space. On the other hand, you can evaluate animal orgs well enough with only public info; private info only improves the evaluation a little.
Moreover, in the neglected animal space, I do basically trust the EA Animal Welfare Fund to allocate money well and think it could be hard for an individual to outperform that. Shrimp Welfare Project also looks compelling.
I think the LTFF is worth donating to, but to be clear, I don’t think the LTFF actually does all-considered work on the topic—they have an important segment of expertise that seems neglected outside the LTFF, but they definitely don’t have the expertise to cover and evaluate everything.
If I were making a recommendation, I would concur with recommending the three AI orgs on OpenPhil’s list: Horizon, ARI, and CLTR—they are all recommended by individual OpenPhil staff for good reason.
There are several other orgs worth considering as well, and you may want to think about options that are only available to you as an individual, such as political donations. Or think about areas within the AI space where OpenPhil may not be able to do as well, like PauseAI or digital sentience work, both of which still look neglected.
~
A few caveats/exceptions to my above comment:
I’m very uncertain about whether AI really is >10x neglected animals. I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue, and I could definitely imagine changing my mind over the next year.
To keep this comment less biased, I’m not shilling for my own orgs, but those are also options.
I don’t mean to be mean to GiveWell. Of course donating to GiveWell is very good and still better than 99.99% of charitable giving!
Another area I don’t consider but probably should is organizations like Giving What We Can that work somewhat outside these cause areas but may have sufficient multipliers to still be very cost-effective. I think meta-work on top of global health and development (such as improving its effectiveness or getting more people to do it / do it better) can often lead to larger multipliers, since there are orders of magnitude more underlying money and interest in that area in the first place.
I don’t give appropriate focus to digital sentience, which OpenPhil is also not funding and which could use some help; I think this could be fairly neglected. Work that aims to get AI companies to commit to not mistreating animals is also an interesting and incredibly underexplored area that I don’t know much about.
There’s a sizable amount of meta-strategic disagreement / uncertainty within the AI space that I gloss over here (imo Michael Dickens does a good job of overviewing this even if I have a lot of disagreements with his conclusions).
I do think risk aversion is underrated as a reasonable donor attitude; it can legitimately vary between donors, and it makes the case for focusing on neglected animals stronger (see the sketch after this list). I don’t think there’s an accurate and objective answer about how risk-averse you ought to be.
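As a rough illustration of the risk-aversion point, here’s a toy sketch; the probabilities and impact numbers are hypothetical, chosen only to show how a concave utility function can flip the ranking between a long-shot AI grant and a reliable animal grant:

```python
# Toy model only: all probabilities and impact numbers are hypothetical.
# A risk-neutral donor ranks options by expected impact; a risk-averse
# donor applies a concave utility first, which favours the lower-variance
# option even when the expected impacts are identical.
import math

def expected(outcomes, utility=lambda x: x):
    """Expected utility over (probability, impact) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

# Hypothetical gambles, deliberately set to the same expected impact (10):
ai_grant = [(0.01, 1000.0), (0.99, 0.0)]  # long shot, huge upside
animal_grant = [(1.0, 10.0)]              # reliable, modest upside

log_utility = lambda x: math.log1p(x)  # one common concave choice

print(expected(ai_grant), expected(animal_grant))        # 10.0 vs 10.0
print(expected(ai_grant, log_utility),
      expected(animal_grant, log_utility))               # ~0.07 vs ~2.40
# Under this concave utility, the risk-averse donor prefers the animal grant.
```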
I agree with this comment. Thanks for this clear overview.
The only element where I might differ is whether AI really is >10x neglected animals.
My main issue is that while AI is a very important topic, it’s very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact. First, it’s hard to know what will work and what won’t accidentally increase capabilities. More importantly, if we end up in a future aligned with human values but not animals or artificial sentience, this could still be a very bad world in which a large number of individuals are suffering (e.g., if factory farming continues indefinitely).
My tentative and not very solid view is that work at the intersection of AI x animals is promising (e.g., work that aims to get AI companies to commit to not mistreating animals), and attempts at a pause are interesting (since they give us more time to figure things out).
If you think that an aligned AGI will truly maximise global utility, you will have a more positive outlook.
But since I’m rather risk averse, I devote most of my resources to neglected animals.
I’m very uncertain about whether AI really is >10x neglected animals. I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue, and I could definitely imagine changing my mind over the next year. This is why I framed my comment the way I did, hopefully making it clear that donating to neglected animal work is very much an answer I endorse.
I also agree it’s very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact. I think there are higher-level strategic issues that make the picture very difficult to ascertain even with a lot of relevant information (imo Michael Dickens does a good job of overviewing this even if I have a lot of disagreements). Also, the private-information asymmetry looms large here.
I also agree that “work that aims to get AI companies to commit to not mistreating animals” is an interesting and incredibly underexplored area. I think this is likely worth funding if you’re knowledgeable about the space (I’m not) and know of good opportunities (I currently don’t).
I think it’s normal, and even good, that the EA community doesn’t have a clear prioritization of where to donate. People have different values and different beliefs, and so prioritize donations to different projects.
It is hard to know exactly how high-impact animal welfare funding opportunities interact with x-risk ones
What do you mean? I don’t understand how animal welfare campaigns interact with x-risks, except for reducing the risk of future pandemics, but I don’t think that’s what you had in mind (and even then, I don’t think those are the kinds of pandemics that x-risk-minded people worry about).
I don’t know what the general consensus on the most impactful x-risk funding opportunities is
It seems clear to me that there is no general consensus, and some of the most vocal groups are actively fighting against each other.
I don’t really know what orgs do all-considered work on this topic. I guess the LTFF?
You can see Giving What We Can recommendations for global catastrophic risk reduction on this page[1] (e.g., there’s also Longview’s Emerging Challenges Fund). Many other orgs and foundations work on x-risk reduction, e.g. Open Philanthropy.
I am more confused/inattentive, and this community covers a larger set of possible choices, so it’s harder to track what the consensus is
I think that if there were consensus that a single project was obviously the best, we would all have funded it already, unless it was able to productively use very, very high amounts of money (e.g., cash transfers).
I note that in some sense I have lost trust that the EA community gives me a clear prioritisation of where to donate.
Some clearer statements:
I still think GiveWell does great work
I still generally respect the funding decisions of Open Philanthropy
I still think this forum has a higher standard than most places
It is hard to know exactly how high-impact animal welfare funding opportunities interact with x-risk ones
I don’t know what the general consensus on the most impactful x-risk funding opportunities is
I don’t really know what orgs do all-considered work on this topic. I guess the LTFF?
I am more confused/inattentive, and this community covers a larger set of possible choices, so it’s harder to track what the consensus is
Disclaimer: I work at GWWC
Do you feel confident about your moral philosophy?