Thanks, this is a good challenge! The short response is that I don’t have an effectiveness model on hand for any existing projects. A slightly longer response is that most of the work so far has been “meta” (including growing the size and influence of the EA movement, as well as various kinds of prioritization and roadmap research), except maybe in AI, where we mostly lack the strategic clarity to confidently say whether we are increasing or decreasing x-risk. It’s harder to map out effectiveness numbers for that kind of work than for future engineering “megaprojects,” where we concretely target a particular existential risk channel and argue that we can block some percentage of it if our projects scale well.
But I think the best way to answer the spirit of your question is to consider large-scale scientific and engineering projects of the future* and do rough back-of-the-envelope calculations (BOTECs) on how much existential risk they could avert.
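For concreteness, here is a minimal sketch of what one such BOTEC might look like, in Python. Every number in it is a placeholder I made up for illustration (the risk level, the fraction blocked, and the project cost are assumptions, not estimates I endorse):

```python
# Hypothetical BOTEC: cost-effectiveness of a civilizational refuge network,
# expressed as dollars per basis point (0.01%) of existential risk averted.

# --- Placeholder inputs (illustrative assumptions only) ---
p_biorisk_this_century = 0.03    # assumed total x-risk from the targeted channel
share_blocked_by_project = 0.10  # assumed fraction of that channel the project blocks if it scales
project_cost_usd = 300e6         # assumed total cost of building and running the project

# --- Derived quantities ---
xrisk_averted = p_biorisk_this_century * share_blocked_by_project  # absolute risk reduction
basis_points_averted = xrisk_averted * 10_000                      # 1 basis point = 0.01%
cost_per_basis_point = project_cost_usd / basis_points_averted

print(f"Existential risk averted: {xrisk_averted:.4%}")
print(f"Basis points of x-risk averted: {basis_points_averted:.1f}")
print(f"Cost per 0.01% of x-risk averted: ${cost_per_basis_point:,.0f}")
```

With these made-up inputs the project averts 30 basis points of risk at roughly $10M per basis point; the point of the exercise is the structure (risk channel × share blocked ÷ cost), not the numbers.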
I think this might be a good (or even necessary) template before the .01% fund can become a reality. If such BOTECs are interesting to you and/or other future grantseekers, I’m happy to do them, or to commission other people to do so.
*Including both projects with a fair amount of active EA work (like vaccine platform development, certain forecasting projects, and metagenomic sequencing) and projects with very little current EA work (like civilizational refuges).