If you want robust arguments for interventions, you should look at those interventions. I believe there are robust arguments for work on e.g. AI risk, such as Human Compatible.
Thank you!
I feel like it’s misleading to take a paper that explicitly says “we show that strong longtermism is plausible” and does so via robust arguments, and to conclude that longtermist EAs are basing their conclusions on speculative arguments.
I’m not concluding that longtermist EAs are in general basing their conclusions on speculative arguments based on that paper, although this is my impression from a lot of what I’ve seen so far, which is admittedly not much. I’m not that familiar with the specific arguments longtermists have made, which is why I asked you for recommendations.
I think showing that longtermism is plausible also understates the goal of the paper, since that only really describes section 2; the rest of the paper aims to strengthen the argument and address objections. My main concerns are with section 3, where they argue that specific interventions are actually better than a given short-termist one. They consider objections to each of those interventions and propose the next one to get past them. However, they end with the meta-option in 3.5 and speculation:
It would also need to be the case that one should be virtually certain that there will be no such actions in the future, and that there is no hope of discovering any such actions through further research. This constellation of conditions seems highly unlikely.
I think this is a Pascalian argument: we’re asked to assign some probability to eventually identifying robustly positive longtermist interventions, and that probability has to be large enough to make the argument go through. How large, and why?
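To make the structure I have in mind concrete (my own rough sketch, not the paper’s formalism): let p be the probability that we eventually identify a robustly positive longtermist action, V its expected value conditional on finding one, and B the value of the best short-termist alternative. The meta-option wins only when

$$pV > B \iff p > \frac{B}{V},$$

and with an astronomically large V, essentially any nonzero p clears the bar, which is why this reads as Pascalian to me.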
It seems to me that longtermists are very obviously trying to do both of these things. (Also, the first one seems like the use of “explicit calculations” that you seem to be against.)
I endorse the use of explicit calculations. I don’t think we should depend on a single EV calculation (including one formed by taking weighted averages of models or of other EV calculations); sensitivity analysis is preferable. I’m interested in other quantitative approaches to decision-making, as discussed in the OP.
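To illustrate what I mean by preferring sensitivity analysis over a single point estimate (a minimal sketch with made-up numbers and hypothetical parameter ranges, not any particular model):

```python
# A minimal sketch: instead of reporting one EV point estimate (or a weighted
# average of models), sweep the uncertain parameters over plausible ranges and
# see how often the conclusion changes sign. All numbers are made up.
import itertools

prob_success = [0.001, 0.01, 0.1]    # hypothetical chance the intervention works
value_if_success = [1e3, 1e5, 1e7]   # hypothetical benefit (common units) if it does
cost = [1e4]                         # hypothetical cost in the same units

evs = [p * v - c for p, v, c in itertools.product(prob_success, value_if_success, cost)]

print(f"EV range: {min(evs):,.0f} to {max(evs):,.0f}")
print(f"Scenarios with positive EV: {sum(ev > 0 for ev in evs)}/{len(evs)}")
```

If the sign flips within ranges I find plausible, a single averaged EV hides exactly the information I care about.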
My major reservations about strong longtermism include:
The longer (causally or temporally) the causal chains we construct, the more fragile they are and the more likely they are to miss other important effects, including effects that may go in the opposite direction. Feedback closer to our target outcomes and to what we value terminally reduces this issue.
I think human extinction specifically could be a good thing (because of s-risks, or because we might otherwise spread suffering through space), so interventions that would non-negligibly reduce extinction risk are not robustly good to me (though not necessarily robustly negative, either). Of course, there are other longtermist interventions.
I am by default skeptical of the strength of causal effects without evidence, and I haven’t seen good evidence for the major causal claims I’ve come across, but I have also only started looking, and pretty passively.
Yeah, that’s a fair point, sorry for the bad argument.