For the issue in general (not specific to the area of IIDM or how EAIF thinks about things), there's some discussion from 80k here and in other parts of that article. There's probably also relevant discussion in some other posts tagged accidental harm.
(Though note that 80k include various caveats and counterpoints, and conclude the article with:
"We hope this discussion of ways to do bad hasn't been demotivating. We think most projects that have gone forward in the EA community have had a positive expected value and when we hear about new projects we're typically excited, not wary. Even projects that are ill-conceived to start with typically improve over time as the founders get feedback and learn from experience. So whenever you consider these risks (and our advice for mitigating them) make sure to weigh them against the potentially massive benefits of working on some of the world's most pressing problems."
I say that just to avoid people being overly discouraged by reading a single section from the middle of that article, without the rest of the context. I don’t say this to imply I disagree with Max’s comments on the IIDM grant.)
Thanks! The 80,000 Hours article kind of makes it sound like accidental harm isn't supposed to be a big consideration and can be addressed by things IIDM has clearly done, right? For instance, the article advises:
"Get advice from people you trust to be honest about whether you're a reasonable fit for the project you're considering. Ask around to see if anybody else in your field has similar plans; maybe you should merge projects, collaborate, or coordinate on which project should move forward."
My impression is that the IIDM group is happy to work with anyone interested in collaborating and put out a call for collaborators about a year ago, and the space of improving institutions also seems very big (compared with 80k's examples of career advice for EAs and local EA chapters).
(Disclaimer: speaking for myself here, not the IIDM group.)
My understanding is that Max is concerned about something fairly specific here: a situation in which we succeed in capturing a significant share of the EA community's interest, talent, and/or funding, yet fail to either imagine or execute on the best ways of leveraging those resources.
While I could imagine something like this happening, it's only really a big problem if either a) the ways in which we're falling short remain invisible to the relevant stakeholders, or b) our group proves difficult to influence. I'm not especially worried about a), given that critical feedback is pretty much the core competency of the EA community and most of our work will have some sort of public-facing component. b) is something we can control: while it's not always easy to judge how to balance external feedback against our inside-view perspectives, as you've pointed out, we've been pretty intentional about trying to work well with other people in the space, ceding responsibility and considering changes of direction where that seems appropriate.