“Firstly, you and vadmas seem to assume number 2 is the case.”
Oops, nope, the exact opposite! I couldn’t possibly agree more strongly with:
“Working on current problems allows us to create moral and scientific knowledge that will help us make the long-run future go well”
Perfect, love it, spot on. I’d be 100% on board with longtermism if this is what it’s about—hopefully conversations like these can move it there. (Ben makes this point near the end of our podcast conversation fwiw)
Do you in fact think that knowledge creation has strong intrinsic value? I, and I suspect most EAs, only think knowledge creation is instrumentally valuable.
Well, both. I do think it’s intrinsically valuable to learn about reality, and I support research into fundamental physics, biology, history, mathematics, ethics, etc. for that reason. I think it would be intellectually impoverishing to only support research that has immediate and foreseeable practical benefits. But fortunately knowledge creation also has enormous instrumental value. So it’s not a one-or-the-other thing.
“Working on current problems allows us to create moral and scientific knowledge that will help us make the long-run future go well”
“Perfect, love it, spot on. I’d be 100% on board with longtermism if this is what it’s about—hopefully conversations like these can move it there.”
I have to admit that I’m slightly confused as to where the point of contention actually is.
If you believe that working on current problems allows us to create moral and scientific knowledge that will help us make the long-run future go well, then you just need to argue this case, and if your argument is convincing enough you will have strong longtermists on your side.
More importantly, though, I’m not sure people actually disagree with this. I haven’t come across anyone who has publicly disagreed with it. Have you? It may be the case that both you and strong longtermists are actually on the exact same page without even realising it.
I don’t consider human extermination by AI to be a ‘current problem’. I think that’s where the disagreement lies. (See my blogpost for further comments on this point.)
Either way, the problems to work on would be chosen based on their longterm potential. It’s not clear that, say, global health and poverty would be among those chosen. Institutional decision-making and improving the scientific process might be better candidates.
I feel a bit confused reading that. I’d thought your case was framed around a values disagreement about the worth of the long-term future. But this feels like a purely empirical disagreement about how dangerous AI is, and how tractable working on it is. And possibly a deeper epistemological disagreement about how to reason under uncertainty.
How do you feel about the case for biosecurity? That might help disentangle whether the core disagreement is about valuing the longterm future/x-risk reduction, vs concerns about epistemology and empirical beliefs, since I think the evidence base is noticeably stronger than for AI.
I think there’s a pretty strong evidence base that pandemics can happen and that, e.g., dangerous pathogens can be developed in and released from labs. And I think there’s good reason to believe that future biotechnology will be able to make dangerous pathogens that might be able to cause human extinction, or something close to that. And human extinction is clearly bad for both the present day and the longterm future.
If a strong longtermist looks at this evidence and concludes that biosecurity is a really important problem, because it risks causing human extinction and thus destroying the value of the longterm future, and is thus a really high priority, would you object to that reasoning?
Apologies, I do still need to read your blogpost!
It’s true that existential risk from AI isn’t generally considered a ‘near-term’ or ‘current’ problem. I guess the point I was trying to make is that a strong longtermist’s view that it is important to reduce the existential threat of AI doesn’t preclude the possibility that they also think it’s important to work on near-term issues, e.g. for the knowledge creation it would afford.
Granted, any focus on AI work necessarily reduces the amount of attention going towards near-term issues, which I suppose is your point.
Yep :)
“Firstly, you and vadmas seem to assume number 2 is the case. It seems important to me to note that this is certainly not a given.”
In hindsight, this wasn’t clearly worded.
What I meant by this was that I think you and Ben both seem to assume that strong longtermists don’t want to work on near-term problems. I don’t think this is a given (although it is of course fair to say that they’re unlikely to only want to work on near-term problems).
“What I meant by this was that I think you and Ben both seem to assume that strong longtermists don’t want to work on near-term problems. I don’t think this is a given (although it is of course fair to say that they’re unlikely to only want to work on near-term problems).”
Mostly agree here—this was the reason for some of the (perhaps cryptic) paragraphs in the section “The Antithesis of Moral Progress.” Longtermism erodes our ability to make progress to whatever extent it has us not working on real problems. And to the extent that it does have us working on real problems, I’m not sure what longtermism is actually adding.
Also, just a nitpick on terminology—I dislike the term “near-term” problems, because it seems to imply that there is a well-defined class of “future” problems that we can choose to work on. As if there were a set of problems, and they could be classified as either short-term or long-term. But the fact is that the only problems are near-term problems; everything else is just a guess about what the future might hold. So I see it less as a choice about what kinds of problems to work on, and more as a choice between working on real problems and conjecturing about future ones, and I think the latter is actively harmful.
“So I see it less as a choice about what kinds of problems to work on, and more as a choice between working on real problems and conjecturing about future ones, and I think the latter is actively harmful.”
I don’t necessarily see working on reducing extinction risk as wildly speculating about the far future. In many cases these extinction risks are actually thought to be current risks. The point is that if they happen they necessarily curtail the far future.
“Longtermism erodes our ability to make progress to whatever extent it has us not working on real problems. And to the extent that it does have us working on real problems, I’m not sure what longtermism is actually adding.”
I would note that the Greaves and MacAskill paper actually has a section putting forward ‘advancing progress’ as a plausible longtermist intervention! As I have mentioned, this is only insofar as it will make the long-run future go well.
Agree with almost all of this. This is why it was tricky to argue against, and also why I say (somewhere? podcast maybe?) that I’m not particularly worried about the current instantiation of longtermism, but about what this kind of logic could justify.
I totally agree that most of the existential threats currently tackled by the EA community are real problems (nuclear threats, pandemics, climate change, etc.).
“I would note that the Greaves and MacAskill paper actually has a section putting forward ‘advancing progress’ as a plausible longtermist intervention!”
Yeah—but I found this puzzling. You don’t need longtermism to think this is a priority—so why adopt it? If you instead adopt a problem/knowledge focused ethic, then you get to keep all the good aspects of longtermism (promoting progress, etc.), but don’t open yourself up to what (in my view) are its drawbacks. I try to say this in the “Antithesis of Moral Progress” section, but obviously did a terrible job haha :)
“If you instead adopt a problem/knowledge focused ethic, then you get to keep all the good aspects of longtermism (promoting progress, etc.), but don’t open yourself up to what (in my view) are its drawbacks”
Maybe (just maybe) we’re getting somewhere here. I have no interest in adopting a ‘problem/knowledge focused ethic’. That would seem to presuppose the intrinsic value of knowledge. I only think knowledge is instrumentally valuable insofar as it promotes welfare.
Instead, most EAs want to adopt an ethic that prioritises ‘maximising welfare over the long run’. Longtermism claims that the best way to do so is to actually focus on long-term effects, which may or may not require a focus on near-term knowledge creation—whether it does or not is essentially an empirical question. If it doesn’t require it, then a strong longtermist shouldn’t consider a lack of knowledge creation to be a significant drawback.