Unified, quantified world model
Epistemic Institutions, Effective Altruism, Values and Reflective Processes, Research That Can Help Us Improve
Effective altruism started out, to some extent, with a strong focus on quantitative prioritization along the lines of GiveWell’s quantitative models, the Disease Control Priorities studies, etc. But these models largely ignore complex, often nonlinear effects of the interventions on culture, international coordination, and the long-term future. Attempts to transfer the same rigor to quantitative models of the long-term future (such as Tarsney’s set of models in The Epistemic Challenge to Longtermism) are still in their infancy. Otherwise, effective altruist prioritization today is a grab bag of hundreds of considerations that interact in complex ways that (probably) no one has an overview of. Decision-makers may forget to take half of them into account if they haven’t recently thought about them. That makes it hard to prioritize, and misprioritization becomes more costly with every year.
A dedicated think tank could create and continually expand a unified world model that (1) is a repository of all considerations that affect altruistic decision-making, (2) makes explicit the interactions between these considerations, (3) gauges its own uncertainty, (4) allows for the prioritization of interventions that share no common proxy measure for their impact, by linking them through interventions that can be measured via several proxies, and (5) averages between multiple ways of estimating uncertain quantities.
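Point (5) can be illustrated with a minimal sketch. Everything here is hypothetical: the two “methods” are made-up toy distributions standing in for independent lines of reasoning about the same uncertain quantity, and the pooling is a simple equal-weight mixture sampled by Monte Carlo.

```python
import random

def estimate_via_method_a():
    # Hypothetical estimate of some uncertain quantity, derived from
    # one line of reasoning (here a toy lognormal guess).
    return random.lognormvariate(2.0, 0.5)

def estimate_via_method_b():
    # A second, independent way of estimating the same quantity
    # (here a toy uniform guess).
    return random.uniform(3.0, 20.0)

def pooled_estimate(n_samples=10_000):
    # "Averaging between multiple ways to estimate" as an equal-weight
    # mixture: each sample draws from one method at random, so the
    # pooled distribution reflects both models' uncertainty.
    samples = [
        estimate_via_method_a() if random.random() < 0.5
        else estimate_via_method_b()
        for _ in range(n_samples)
    ]
    samples.sort()
    return {
        "p10": samples[n_samples // 10],
        "median": samples[n_samples // 2],
        "p90": samples[9 * n_samples // 10],
    }
```

A fuller version would weight the methods by how much one trusts each, rather than mixing them equally.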
Alternatively, a tech (charity) startup could create standardized APIs for models of small parts of the world so that they can be recombined, analogously to how I can recombine many open-source React libraries to create my own software. Then an ecosystem could form of researchers who publish any models they create for everyone to use and recombine. (This could be bootstrapped via consultancy services for groups that are interested in small parts of the world.)
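To make the recombination idea concrete, here is a minimal sketch of what such a standardized model interface might look like. All names and numbers are invented for illustration; the only point is that any model exposing the same `sample()` method can be composed with any other, the way libraries are.

```python
import random
from typing import Protocol

class Model(Protocol):
    """Hypothetical standardized API: every published model exposes sample()."""
    def sample(self) -> float: ...

class PopulationModel:
    # Toy stand-in for an independently published model of one
    # small part of the world (world population, roughly 8 billion).
    def sample(self) -> float:
        return random.normalvariate(8e9, 1e8)

class PerCapitaImpactModel:
    # Another independently published toy model.
    def sample(self) -> float:
        return random.uniform(0.0, 0.01)

class ProductModel:
    # Recombination: a new model built from two published ones,
    # analogous to composing open-source libraries.
    def __init__(self, a: Model, b: Model):
        self.a, self.b = a, b

    def sample(self) -> float:
        return self.a.sample() * self.b.sample()

# Anyone can now wire published models together into a new one.
total_impact = ProductModel(PopulationModel(), PerCapitaImpactModel())
draws = [total_impact.sample() for _ in range(1000)]
```

The design choice doing the work here is the shared interface: because composition happens through `sample()` alone, uncertainty propagates through the combined model automatically.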
People working on this include QURI (Ozzie Gooen, Sam Nolen), Aryeh Englander, Paal Kvarberg, and maybe others. I considered it for a few months (summary of my thinking). Some of them pursue the approach of direct modeling via Bayesian networks, while QURI pursues the approach of building an ecosystem around a standardized API.
Cool! You might also be interested in my submission, “Comprehensive, personalized, open source simulation engine for public policy reforms”. It’s not in the pitch, but my intent is for it to be global as well.
Awesome, upvoted! You can also have a look at my “Red team” proposal. It proposes applying methods from your field to EA interventions (political and otherwise) to steel them against the risk of having harmful effects.