Yep yep, happy to! A couple things come to mind:
We could track the “stage” of a given problem/cause area, similar to the way startups are tracked through Seed, Series A, etc. In other words, EA prioritization would be categorized with respect to stages/gates. I’m not sure if there’s an agreed-upon “stage terminology” in the EA community yet. (I know GiveWell’s Incubation Grants http://www.givewell.org/research/incubation-grants and EA Grants https://www.effectivealtruism.org/grants/ are examples of recent “early stage” investment.) Here are some example stages:
Stage 1) Medium-depth dive into the problem area to determine ITN (importance, tractability, neglectedness).
Stage 2) Experiment with MVP solutions to the problem.
Stage 3) Move up the hierarchy of evidence for those solutions (RCTs, etc.).
Stage 4) For top solutions with robust cost-effectiveness data, begin to scale.
(You could create something like a “Lean Canvas for EA Impact” that maps the prioritized de-risking of these stages.)
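To make the stage-gate idea a bit more concrete, here’s a minimal sketch of how a portfolio of cause areas could be tracked by stage and rough ITN score. It’s purely illustrative: the Stage and CauseArea names, the 1–10 scores, and the simple multiplicative ITN formula are hypothetical choices of mine, not an existing EA tool.

```python
from dataclasses import dataclass
from enum import IntEnum


class Stage(IntEnum):
    """Hypothetical stage-gate labels for a problem/cause area."""
    ITN_DIVE = 1           # Stage 1: medium-depth dive to assess ITN
    MVP_EXPERIMENTS = 2    # Stage 2: experiment with MVP solutions
    EVIDENCE_BUILDING = 3  # Stage 3: move up the hierarchy of evidence (RCTs, etc.)
    SCALING = 4            # Stage 4: scale solutions with robust cost-effectiveness data


@dataclass
class CauseArea:
    name: str
    stage: Stage
    importance: float      # rough 1-10 scores from the Stage 1 dive (illustrative only)
    tractability: float
    neglectedness: float

    @property
    def itn_score(self) -> float:
        # One simple way to combine the three factors into a single score.
        return self.importance * self.tractability * self.neglectedness


# Illustrative portfolio: which areas sit at which gate, ranked by rough ITN score.
portfolio = [
    CauseArea("Cause A", Stage.ITN_DIVE, importance=8, tractability=4, neglectedness=7),
    CauseArea("Cause B", Stage.MVP_EXPERIMENTS, importance=6, tractability=7, neglectedness=5),
    CauseArea("Cause C", Stage.SCALING, importance=9, tractability=6, neglectedness=3),
]

for area in sorted(portfolio, key=lambda a: a.itn_score, reverse=True):
    print(f"{area.name}: Stage {area.stage.value} ({area.stage.name}), ITN score {area.itn_score:.0f}")
```

The point isn’t the particular numbers; it’s that each gate makes the portfolio’s prioritization and remaining uncertainty explicit, much like a funding-round pipeline does for startups.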
From the “future macro trends” perspective, I feel like there could be more overlap between EA and VC models that are designed to predict the future. I’m imagining this like the current co-evolving work environment between “profit-focused AI” (DeepMind, etc.) and “EA-focused AI” (OpenAI, etc.), where both groups are helping each other pursue their goals. We could imagine a similar system for any given macro trend, i.e., that trend would be viewed from both a profit perspective and an impact/EA perspective.
In other words, this is a way for the EA community to say, “The VC world has [x technological trend] high on its prioritization list. How should we take part from an EA perspective?” (And vice versa.)
(fwiw, I see two main ways the EA community interacts in this space: pursuing projects that either a) leverage new technologies or b) counteract their negative externalities. Using VR for animal empathy is an example of leverage. AI alignment is an example of counteracting a negative externality.)
Do those examples help give a bit of specificity for how the EA + VC communities could co-evolve in “future uncertainty prediction”?