Point D sounds like a legitimate concern, but it can be avoided just by thinking carefully at each step (it only applies to very naive implementations). And you mention other counter-considerations yourself. Some more thoughts in reply:
- If we don’t get longtermism right, we’ll no longer be in a position to deliberately affect the course of the future (accordingly, “future neartermists” won’t be in a position to do any good, either).
- Even worse, if we get things especially wrong, we might accidentally lock in unusually bad futures.
- If we get longtermism right, we’d use the transition to TAI to gain enough control over the future that we no longer live in a state where the world is metaphorically burning (in other words, future neartermists won’t be as important anymore).
- Intermediate states where things continue as they are now (people can affect things but don’t have sufficient control to get the world they want) seem unstable.
The last bullet point seems right to me because technological progress increases the damage from things going wrong and it “accelerates history” – the combination of these factors leads to massive instability. Technological progress also extends our potential reach for attaining control over things and making them stable, up to the point where someone messes it up irreversibly.
I’m pessimistic about attaining the high degree of control it would take to make the future go really well. In my view, one argument for focusing on ongoing animal suffering is “maybe the long-term future will be out of our control eventually no matter what we do.” (This point applies especially to people whose comparative advantage might be near-term suffering reduction.) However, other people are more optimistic.
Point E seems true and important, but some of the texts you cite seem one-sided to me. (Here’s a counter-consideration I rarely see mentioned in these texts; it relates to what I said in reply to your point D.)
The other arguments/points you make sound like “longtermists might be biased/rationalizing/speciesist.”
I wonder where that’s coming from. I think it would be potentially more persuasive to focus on direct reasons why reducing animal suffering is a good opportunity for impact. We all might be biased in various ways, so appeals to biases/irrationality rarely do much. (Also, I don’t think there’s “one best cause,” so different people will care about different causes depending on their moral views.)
I don’t take point D that seriously. Aesop’s miser is worth keeping in mind; the “longevity researcher eating junk every day” is maybe a more relatable analogy. I’m ambivalent on hinginess because I think the future may remain wide-open and high-stakes for centuries to come, but I’m no expert on that. Anyway, I think A, B, and E are stronger.
Yeah, “Longtermists might be biased” pretty much sums it up. Do you not find examining/becoming more self-aware of biases constructive? To me it’s pretty central to cause prioritization, drowning children, rationalism, longtermism itself… Couldn’t we see cause prioritization as peeling away our biases one by one? But yes, it would be reasonable to accompany “Here’s why we might be biased against nonhumans” with “Here are some object-level arguments that animal suffering deserves attention.”