I don’t consider human extermination by AI to be a ‘current problem’ - I think that’s where the disagreement lies. (See my blogpost for further comments on this point)
Either way, the problems to work on would be chosen based on their longterm potential. It’s not clear that, say, global health and poverty would be among those chosen. Institutional decision-making and improving the scientific process might be better candidates.
I feel a bit confused reading that. I’d thought your case was framed around a values disagreement about the worth of the long-term future. But this feels like a purely empirical disagreement about how dangerous AI is, and how tractable working on it is. And possibly a deeper epistemological disagreement about how to reason under uncertainty.
How do you feel about the case for biosecurity? That might help disentangle whether the core disagreement is about valuing the longterm future/x-risk reduction, vs concerns about epistemology and empirical beliefs, since I think the evidence base there is noticeably stronger than it is for AI.
I think there’s a pretty strong evidence base that pandemics can happen and that, e.g., dangerous pathogens can be developed in and released from labs. And I think there’s good reason to believe that future biotechnology will be able to make dangerous pathogens that might be able to cause human extinction, or something close to it. And human extinction is clearly bad for both the present day and the longterm future.
If a strong longtermist looks at this evidence, and concludes that biosecurity is a really important problem because it risks causing human extinction and thus destroying the value of the longterm future, and is thus a really high priority, would you object to that reasoning?
Apologies, I do still need to read your blogpost!
It’s true that existential risk from AI isn’t generally considered a ‘near-term’ or ‘current problem’. I guess the point I was trying to make is that a strong longtermist’s view that it is important to reduce the existential threat of AI doesn’t preclude the possibility that they may also think it’s important to work on near-term issues, e.g. for the knowledge creation it would afford.
Granted, any focus on AI work necessarily reduces the amount of attention going towards near-term issues, which I suppose is your point.
Yep :)