Aside from RP, what is your best guess for the org that is morally best to give money to?
I feel a lot of cluelessness right now about how to make cross-cause comparisons and what decision procedures to use. Luckily, we hired a Worldview Investigations Team to work a lot more on this, so hopefully we will have some answers soon.
In the meantime, I am currently pretty focused on mitigating AI risk, which I perceive as an urgent and large threat even among other existential risks. And contrary to last year, I think AI risk work is actually surprisingly underfunded and could grow. So I would be keen to donate to any credible AI risk group that seems to be doing important work and would be able to spend more marginal money now.
As Co-CEO of RP, I am obligated to say that our AI Governance and Strategy Department is doing this work and is actively seeking funding. Our work on Existential Security and our survey work are also very focused on AI and are also funding-constrained. You can donate to RP here.
…but given that you asked me specifically for non-RP work, here is my ranked list of remaining organizations:
Centre for Long-Term Resilience (CLTR) does excellent work and appears to me to be exceptionally well-positioned and well-connected to capitalize on the large UK AI policy window, especially around the UK AI Summit. My understanding is that they have funding gaps and that marginal funding could enable CLTR to hire more AI-focused staff than they otherwise would, hopefully making more and faster progress. Donate here.
The Future Society (TFS) does excellent work on EU AI policy, in particular the EU AI Act. TFS is not an EA organization, but that doesn’t mean they don’t do good work; in my personal opinion, they are really underrated by the EA-affiliated AI governance community. They historically have not focused on fundraising as much as I think they should, and they now seem to have a sizable funding gap that I think they could execute well on. Donate here.
Centre for the Governance of AI (GovAI) is also doing great work on AI policy in both the US and UK and is very well-positioned. I think it’s plausible that they do the best work of any AI policy organization out there (including RP); the reason I rank them third is mainly that I’m skeptical of the size of their funding gap and of their plans for using marginal money. Donate here.
The Humane League (THL) is my honorable mention. I view AI risk mitigation work as more pressing than animal welfare work right now, but I still care a lot about the plight of animals, so I still support THL. They have a sizable funding gap, execute very competently, and do great work. I think the moral weight work that Rethink Priorities did led some in EA to think that shrimp or insect work is more cost-effective than the kind of work THL does, but I don’t actually think that’s true insofar as readily available donation opportunities exist. That said, I’m unsure what other RP staff think, and of course RP does research on shrimp and insects that we think is cost-effective. Donate here.
Here are my caveats for the above list:
These are my own personal opinions. Your view may differ from mine, even if you agree with me on the relevant facts, due to differing values, risk tolerances, etc.
I haven’t thought about this that much and I’m answering off-the-cuff for an AMA, so this is definitely subject to change. I consider my opinions on donation targets to be unusually in flux right now.
Statements I have made about these organizations are my own opinions and have not been run by representatives of those organizations. Therefore, I may have misrepresented them.
I don’t know the details of these organizations’ room for more funding, which could change their prioritization in important ways.
Note that where to donate as an individual and what to encourage RP to do as co-CEO of Rethink Priorities are very different questions, so this list shouldn’t necessarily be taken as an indication of RP’s priorities.
I focused on concrete “endpoint grants” but you may find more value in trusting a grant recommender and giving them money, such as via the Long-Term Future Fund, Manifund, Nonlinear, etc.
I also value giving to political candidates and could view this as plausibly better than some options on my list above, but due to US 501(c)(3) law, I don’t want to solicit donations to such candidates.
I know very little about the technical alignment landscape, so perhaps there are good AI risk mitigation efforts there that would beat the options I recommend.
A lot of the information I am relying on to create this list is confidential, and there’s also likely a lot of additional confidential information I don’t know about that could change my list.
In honor of this question and to put some skin in the game behind these recommendations, I have given each of the four organizations I listed $1000.