Thanks Carl, it’s good to know that there are RFMF (room for more funding) opportunities in topping up AI grants.
My reasoning for not donating to AI projects right now is based much less on an RFMF argument and more on not knowing enough about the space. I think I know enough about opportunities in global poverty, animal welfare, and EA community building to recommend projects there with confidence, but not for AI. I expect it would take me a good deal of time to develop the relevant expertise in AI to assess it properly. I have thought about working to develop that expertise, but so far I have not prioritized doing so.
I don’t understand how that logic leads to thinking it’s a good idea to donate to the causes you’re thinking of donating to. Donating to a cause area because you can identify good projects within it seems like the streetlight effect.
If you think that AI stuff is plausibly better, shouldn’t you either want to learn more about it or enter a donor lottery so that it’s more cost-effective for you to learn about it?
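For concreteness, the donor-lottery suggestion rests on a simple piece of arithmetic: the time cost of vetting a new cause area is roughly fixed, so it is much easier to justify when it ends up directing a larger pot of money. Here is a rough sketch with purely illustrative figures; none of these numbers come from the thread.

```python
# Illustrative (assumed) figures for the donor-lottery argument.
research_hours = 200   # assumed time needed to vet the AI cause area properly
own_donation = 5_000   # assumed individual donation ($)
lottery_pot = 100_000  # assumed donor-lottery pot size ($)
win_probability = own_donation / lottery_pot  # proportional-odds lottery

# Donating directly: research effort per dollar actually directed.
hours_per_dollar_direct = research_hours / own_donation

# Lottery: the research is only done if you win, so expected research time
# is win_probability * research_hours, while the expected dollars directed
# stay the same (win_probability * lottery_pot == own_donation).
hours_per_dollar_lottery = (win_probability * research_hours) / (win_probability * lottery_pot)

print(f"direct:  {hours_per_dollar_direct:.4f} research hours per dollar directed")
print(f"lottery: {hours_per_dollar_lottery:.4f} research hours per dollar directed")
# With these assumptions, the lottery cuts the research cost per dollar
# directed by a factor of lottery_pot / own_donation (here 20x).
```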
My excuses in order of importance:
1.) While I do think AI as a cause area could plausibly be better than global poverty or animal welfare, I don’t think it’s so plausibly better that its expected value, given my uncertainty, dwarfs that of my current recommendations.
2a.) I think I’m basically okay with the streetlight effect. I think there’s a lot of benefit in donating now to support groups that might not be able to expand at all without my donation, which is what the criteria I outlined here accomplish. Given that the EA community collaborates as a whole, I think there’s less need for me to spend a great deal of time making sure my donations are as cost-effective as possible, and more of a need simply to clear the bar of being “better than average”. I think my recommendations here accomplish that.
2b.) Insofar as my reasoning in (2a) reflects some “streetlight effect” bias, I think you could accuse nearly anyone of this, since very few people have thoroughly explored every cause area and no one can fully rule out being wrong about a cause area.
3.) There is still more I could donate later. This money is being saved mainly as a hedge against large financial uncertainty in my immediate future, but it could also be used as savings to donate later, once I learn more.
[Note: I work on existential risk reduction]
Although I laud posts like the OP, I’m not sure I understand this approach to uncertainty.
I think a lot turns on what you mean by the AI cause area being “plausibly better” than global poverty or animal welfare on EV. The Gretchenfrage (the crucial question) seems to be this conditional forecast: “If I spent (let’s say) 6 months looking at the AI cause area, would I expect to identify better uses of marginal funding in this cause area than those I find in animal welfare and global poverty?”
If the answer is “plausibly so, but probably not” (either due to a lower ‘prima facie’ central estimate, or after pricing in regression to the mean, etc.), then I understand the work uncertainty is doing here (modulo the usual points about VoI, i.e. value of information): one can’t carefully look at everything, and one has to make some judgments about which cause areas look most promising to investigate on current margins.
Yet if the answer is “probably, yes”, then offering these recommendations simpliciter (i.e. “EA should fully fund this”) seems premature to me. The evaluation is valuable, but it should be presented with caveats like: “Conditional on thinking global poverty is the best cause area, fund X; conditional on thinking animal welfare is the best cause area, fund Y (but, FWIW, I believe AI is the best cause area, though I don’t know what to fund within it).” It would also lean against making one’s own donations to X, Y, etc., rather than spending time thinking about it or following the recommendations of someone one trusts to make good picks in the AI cause area.
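To make the regression-to-the-mean adjustment concrete, here is a toy sketch; all of the numbers (prior, estimates, uncertainties) are made-up assumptions for illustration, not anything from the thread. The point is that a higher but much noisier prima facie estimate can end up below a lower, better-understood one once both are shrunk toward a common prior.

```python
# Toy illustration (made-up numbers): precision-weighted shrinkage of a
# noisy cost-effectiveness estimate toward a common prior.

def shrunk_estimate(prior_mean, prior_sd, estimate, estimate_sd):
    """Posterior mean under a simple normal-normal model."""
    prior_precision = 1 / prior_sd ** 2
    estimate_precision = 1 / estimate_sd ** 2
    return (prior_precision * prior_mean + estimate_precision * estimate) / (
        prior_precision + estimate_precision
    )

prior_mean, prior_sd = 1.0, 1.0  # common prior over cost-effectiveness (arbitrary units)

# Well-understood area: modest central estimate, low uncertainty.
familiar = shrunk_estimate(prior_mean, prior_sd, estimate=3.0, estimate_sd=1.0)

# Unfamiliar area: higher central estimate, much higher uncertainty.
unfamiliar = shrunk_estimate(prior_mean, prior_sd, estimate=10.0, estimate_sd=6.0)

print(f"familiar area after shrinkage:   {familiar:.2f}")    # ~2.0
print(f"unfamiliar area after shrinkage: {unfamiliar:.2f}")  # ~1.2
```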
The “plausibly so, but probably not” case is what captures my views best right now.
An additional point to take into account when examining research on AI as a possible space for donations: as a scientific domain, the topic of AI risks and safety can fairly easily fall under public/academic funding, even granting the assumption that it is currently underfunded. To this end, individual applicants (precisely those who would otherwise be conducting research by means of donations) can apply for individual PhD and postdoc grants. There are numerous opportunities of that kind across the EU. Moreover, the funding agencies (e.g. in Germany, Belgium, the Netherlands, etc.) employ expert refereeing systems (sometimes even asking the applicant to suggest suitable referees) to assess a project and its effectiveness, which I find very relevant from the perspective of EA. If we take this into account, then a number of other organizations that can’t be so easily funded via existing institutional channels become much more urgent.
P.S. Great post, Peter, only now saw it.
To attempt to complement what Peter already said: precisely because no one can carefully look at everything and each of us has to judge which areas look most promising, EA rarely falls into what can accurately be described as a “streetlight effect”. We aren’t looking for one set of keys; we’re looking for a bunch of keys (threats to human welfare), and there’s a bunch of us drunkards, all with differing abilities and expertise. So I’d argue that if it’s dark somewhere, those with the expertise need to start building streetlights, but if the light is getting brighter in certain areas (RCTs in health), then we need people there too.