Has anyone on the team changed their mind about their priorities/certainty levels because of the output of one of your tools?
A few things come to mind. First, I've been really struck by how robust animal welfare work is across lots of kinds of uncertainties. It has some of the virtues of both GHD (a high probability of actually making a difference) and x-risk work (huge scales). Second, when working with the Moral Parliament tool, it is really striking how much of a difference different aggregation methods make. If we use approval voting to navigate moral uncertainty, we get really different recommendations than if we give every worldview control over a share of the pie or if we maximize expected choiceworthiness. For me, figuring out which method we should use turns on what kind of community we want to be and which (or whether!) democratic ideals should govern our decision-making. This seems like an issue we can make headway on, even if there are empirical or moral uncertainties that prove less tractable.
I was personally struck by how sensitive portfolios are to even modest levels of risk aversion. I don't know what the "correct" level of risk aversion is, or what the optimal decision procedure is in practice (even though most of my theoretical sympathies lie with expected value maximisation). Even so, seeing that introducing small amounts of risk aversion, even with parameters relatively generous towards x-risk, still points towards spending most resources on animals (and sometimes global health) has led me to believe that this type of work is robustly better than I used to think. There are many uncertainties, and I don't think EA should be reduced to any one of its cause areas, but, especially given this update, I would be sad to see the animal space shrink in relative size any more than it has.
One of the big prioritization changes I've taken away from our tools is within longtermism. Playing around with our Cross-Cause Cost-Effectiveness Model, it was clear to me that so much of the expected value of the long-term future comes from the direction we expect it to take, rather than just whether it happens at all. If you can shift that direction a little bit, it makes a huge difference to overall value. I no longer think that extinction risk work is the best kind of intervention if you're worried about the long-term future. I tend to think that AI (non-safety) policy work is more impactful in expectation, though I'd want to work through all of the details.
I'm a "chickens and children" EA, having come to the movement through Singer's arguments about animals and global poverty. I still find EA most compelling, both philosophically and emotionally, when it focuses on areas where it's clear that we can make a difference. However, the more I grapple with the many uncertainties associated with resource allocation, the more sympathetic I become to diversification, including significant resources for work that doesn't appeal to me personally at all. So you probably won't catch me pivoting to AI governance anytime soon, but I'm glad others are doing it.