I agree the far future is overwhelmingly important. However, I don’t think it’s been shown that focusing on the far future really is more cost-effective, even from a far-future point of view. My epistemic uncertainty has wide enough error bars that I wouldn’t be too surprised if MIRI turned out to be the most cost-effective option, but I also wouldn’t be too surprised if AMF did. Right now, the case for the far future seems to amount to arguing that if you take a very large number and multiply it by some unknown probability of success, you must still get a large number, which isn’t necessarily true. I’d like organizations like MIRI to back up the claim that they have a “medium probability” of success.
I personally tend to value being able to learn about causes and being empirical about how to do good. This makes it more difficult to work on far-future causes due to the lack of feedback loops, but I don’t think it’s impossible (e.g., I like the approach being taken by AI Impacts, and through Rethink Priorities I’m now working to refine my own views on this).
I think this update on my skepticism post still represents my current position somewhat well, though it is definitely due for another revision.
Overall, I definitely favor spending resources on x-risk reduction efforts. I’m even comfortable with roughly 50% of the EA movement’s resources being spent on it, given that I sure wouldn’t want to be wrong on this issue (extinction seems like a tremendous downside!). However, I’d prefer more effort be spent on learning what we can about the value of these efforts, and I don’t think we can yet rule out that poverty- or animal-focused interventions are equally or more valuable.
Lastly, as a movement, we certainly can and should do more than one thing. We can fight x-risk while also fighting malaria. I think we’d have a stronger and more robust movement this way.
I hope to write more on this in the future.
Just wanted to say that I’d be really excited to read more of your thoughts on this. As mentioned above, I think many considerations and counter-considerations against x-risk work deserve more attention and exposure in the community.
I encourage you to write up your thoughts in the near-term rather than far future! :P
I liked this solely for the pun. Solid work, James.
I agree that a lot of the value of work on x-risk and the far future is value of information. But I argued here that the cost-effectiveness distribution for alternative foods for agricultural catastrophes, considering only the present generation, did not overlap with AMF’s. There could well be flow-through effects from AMF to the far future, but I think it is hard to argue that they would be greater than directly addressing x-risk. So if you do value the far future, I think it would be even harder to argue that the distributions for alternative foods and AMF overlap. There would be similar results for AI vs. AMF if you believe the model referred to here.
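To make the idea of (non-)overlapping cost-effectiveness distributions concrete, here is a toy Monte Carlo sketch in Python. All the numbers are made up purely for illustration; they are not taken from the linked analysis or model, and lognormal is just one reasonable choice for wide, skewed uncertainty.

```python
import random

random.seed(0)

# Hypothetical parameters, purely for illustration.
# Cost-effectiveness in arbitrary "good done per $" units, modeled as
# lognormal to capture wide, right-skewed error bars.
N = 100_000
amf = [random.lognormvariate(0.0, 0.5) for _ in range(N)]        # narrower uncertainty
alt_foods = [random.lognormvariate(2.0, 1.5) for _ in range(N)]  # higher mean, wider bars

# Fraction of draws where AMF comes out ahead. If this is non-trivial,
# the two distributions "overlap" in the sense used above; if it is
# near zero, one intervention dominates despite the uncertainty.
p_amf_wins = sum(a > b for a, b in zip(amf, alt_foods)) / N
print(f"P(AMF more cost-effective) = {p_amf_wins:.2%}")
```

The point of a sketch like this is that "X is more cost-effective than Y" is really a claim about how often Y beats X across the whole uncertainty distribution, not a comparison of two point estimates.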
It’s certainly possible to generate a cost-effectiveness estimate that doesn’t overlap with AMF. I’d just be concerned about how well that estimate holds up to additional rigorous scrutiny. Many such estimates decline dramatically as additional considerations are explored.