Finally, I think the two approaches require very different sets of skills. My guess is that there are many more people in the EA community today (which skews young and quantitatively-inclined) with skills that are a good fit for evaluation-and-support than have skills that are an equally good fit for design-and-execution. I worry that this skills gap might increase the risk that people in the EA community might accidentally cause harm while attempting the design-and-execution approach.
This paragraph is a critical component of the argument as presently stated. However, I don’t see much more than a bare assertion that (1) certain skills needed for design-and-execution (D&E) are generally missing and (2) the absence of those skills increases the risk of accidental harm. In a full post, I’d like to see this fleshed out more.
My own intuition is that a larger driver of increased harm in D&E models (vs. evaluation-and-support, E&S) may be inherent to working in a novel and neglected subject area like AI safety. In an E&S model, the startup efforts incubated independently of EA are more likely to be fairly small-scale; even if a number of them end up being net-harmful, the risk is limited by how small they are. But in a D&E model, EA resources may be poured into an organization earlier in its life cycle, increasing the risk of significant harm if it turns out the organization was ultimately not well-conceived.
As far as mitigations, I think a presumption toward “start small, go slow” in an underdeveloped cause area for which a heavily D&E-oriented approach is necessary might be appropriate in many cases, for the reason described in the paragraph above. E.g., in some cases the objective should be to develop the ecosystem in that cause area so that heavy work can begin in 7-10 years, vs. pouring in a ton of resources early and trying to get results ASAP. I’d like to see more ideas like that in a full post, as the suggestion to develop better “risk-management or error-correction capabilities” (while correct in my view) is also rather abstract.
I think it has potential!