To make a strong case for strong longtermism or a particular longtermist intervention, without relying too much on quantitative models and speculation.
I see the core case of the paper as this:
(...)
I also don’t see how this argument is speculative: it seems really hard to me to argue that any of the assumptions or inferences are false.
I don’t disagree with the claim that strong longtermism is plausible, but shorttermism is also plausible. The case for strong longtermism rests on actually identifying robustly positive interventions aimed at the far future, and making a strong argument that they are indeed robustly positive (and much better than shorttermist alternatives). One way of operationalizing “robustly positive” is that I may have multiple judgements of EV for different plausible worldviews, and each should be positive (although this is a high bar). I think their defences of particular longtermist interventions are speculative (including patient philanthropy), but expecting more might be unreasonable for a paper of that length which isn’t focused on any particular intervention.
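To make that operationalization concrete, here is a minimal sketch of the decision rule I have in mind, with invented worldview names and EV numbers (none of this comes from the paper): an intervention only counts as robustly positive if its EV is positive under each plausible worldview.

```python
# Minimal sketch of the "robustly positive" criterion: pass only if EV > 0
# under every plausible worldview. Worldview names and EV numbers are
# purely hypothetical, for illustration.

def robustly_positive(evs_by_worldview):
    """Return True iff EV is positive under every worldview considered."""
    return all(ev > 0 for ev in evs_by_worldview.values())

intervention_a = {"total utilitarian": 5.0, "suffering-focused": 1.2, "person-affecting": 0.3}
intervention_b = {"total utilitarian": 50.0, "suffering-focused": -10.0, "person-affecting": 2.0}

print(robustly_positive(intervention_a))  # True: positive under all three worldviews
print(robustly_positive(intervention_b))  # False: high EV on one view, negative on another
```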
I’m interpreting this as “I don’t have >90% confidence that GFI has actually had non-trivial impact so far (i.e. an ex-post evaluation)”.
Yes, and I’m also not willing to commit to any specific degree of confidence, since I haven’t seen any particular degree justified. The same applies to future impact. Why shouldn’t my prior for success be < 1%? Can I rule out a negative expected impact?
However, if you think this should be society’s bar for investing millions of dollars, you would also have to be against many startups, nearly all VCs and angel funding, the vast majority of scientific R&D, some government megaprojects, etc. This bar seems clearly too stringent to me. You need some way of doing something like hits-based funding.
I think in many of these cases we could develop some reasonable probability distributions to inform us (and when multiple priors are reasonable for many interventions and we have deep uncertainty, diversification might help). FHI has done some related work on the cost-effectiveness of research. It could turn out that the successes don’t (or ex ante won’t) justify the failures in a particular domain. Hits-based funding shouldn’t be taken for granted.
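As a toy illustration of what I mean by developing probability distributions for hits-based funding (all numbers are invented, not estimates for any real domain): simulate a portfolio where most grants fail and a few have large payoffs, and look at whether the distribution of outcomes actually justifies the costs.

```python
# Toy Monte Carlo sketch of hits-based funding: a portfolio of grants where
# most fail and a few succeed with large payoffs. Success rate, payoff
# distribution, and costs are all made up; the point is only that whether
# "the successes justify the failures" depends on these distributions.
import random

random.seed(0)

def portfolio_value(n_grants=100, cost_per_grant=1.0, p_success=0.02, payoff_mean=80.0):
    """Net value of one simulated portfolio (total payoffs minus total cost)."""
    payoffs = sum(random.expovariate(1 / payoff_mean)
                  for _ in range(n_grants) if random.random() < p_success)
    return payoffs - n_grants * cost_per_grant

samples = [portfolio_value() for _ in range(10_000)]
mean_value = sum(samples) / len(samples)
p_loss = sum(v < 0 for v in samples) / len(samples)
# With these made-up numbers the mean comes out positive, while an
# individual portfolio still loses money fairly often.
print(f"mean net value: {mean_value:.1f}, probability of overall loss: {p_loss:.2f}")
```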
I feel like it’s misleading to take a paper that explicitly says “we show that strong longtermism is plausible”, does so via robust arguments, and conclude that longtermist EAs are basing their conclusions on speculative arguments.
If you want robust arguments for interventions you should look at those interventions. I believe there are robust arguments for work on e.g. AI risk, such as Human Compatible. (Personally, I prefer a different argument, but I think the one in HC is pretty robust and only depends on the assumption that we will build intelligent AI systems in the near-ish future, say by 2100.)
Yes, and I’m also not willing to commit to any specific degree of confidence, since I haven’t seen any particular degree justified. The same applies to future impact. Why shouldn’t my prior for success be < 1%? Can I rule out a negative expected impact?
Idk what’s happening with GFI, so I’m going to bow out of this discussion. (Though one obvious hypothesis is that GFI’s main funders have more information than you do.)
Hits-based funding shouldn’t be taken for granted.
I mean, of course, but it’s not like people just throw money randomly in the air. They use the sorts of arguments you’re complaining about to figure out where to try for a hit. What should they do instead? Can you show examples of that working for startups, VC funding, scientific R&D, etc? You mention two things:
Developing reasonable probability distributions
Diversification
It seems to me that longtermists are very obviously trying to do both of these things. (Also, the first one seems like the use of “explicit calculations” that you seem to be against.)
If you want robust arguments for interventions you should look at those interventions. I believe there are robust arguments for work on e.g. AI risk, such as Human Compatible.
Thank you!
I feel like it’s misleading to take a paper that explicitly says “we show that strong longtermism is plausible”, does so via robust arguments, and conclude that longtermist EAs are basing their conclusions on speculative arguments.
I’m not concluding that longtermist EAs are in general basing their conclusions on speculative arguments based on that paper, although this is my impression from a lot of what I’ve seen so far, which is admittedly not much. I’m not that familiar with the specific arguments longtermists have made, which is why I asked you for recommendations.
I think showing that longtermism is plausible is also an understatement of the goal of the paper, since it only really describes section 2; the rest of the paper aims to strengthen the argument and address objections. My main concerns are with section 3, where they argue specific interventions are actually better than a given shorttermist one. They consider objections to each of those and propose the next intervention to get past them. However, they end with the meta-option in 3.5 and speculation:
It would also need to be the case that one should be virtually certain that there will be no such actions in the future, and that there is no hope of discovering any such actions through further research. This constellation of conditions seems highly unlikely.
I think this is a Pascalian argument: we should assign some probability to eventually identifying robustly positive longtermist interventions that is large enough to make the argument go through. How large and why?
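To make my question precise, here is the breakeven arithmetic I have in mind (the symbols and numbers are mine, not from the paper): if V is the value conditional on eventually finding a robustly positive longtermist intervention and B is the value of the best shorttermist alternative, the meta-option only beats the alternative when the probability p of finding one satisfies p > B/V, ignoring research costs and the option value of waiting.

```python
# Framing "how large a probability, and why?" as a breakeven calculation.
# B (best shorttermist value), V (longtermist value if a robustly positive
# intervention is found), and the numbers below are hypothetical placeholders.
def breakeven_probability(best_shorttermist_value, longtermist_value_if_found):
    """Smallest p for which p * V >= B, i.e. the meta-option matches the alternative."""
    return best_shorttermist_value / longtermist_value_if_found

# If the longtermist payoff is assumed to be 1000x the shorttermist one,
# the argument needs p >= 0.001; the contested step is justifying that p.
print(breakeven_probability(1.0, 1000.0))  # 0.001
```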
It seems to me that longtermists are very obviously trying to do both of these things. (Also, the first one seems like the use of “explicit calculations” that you seem to be against.)
I endorse the use of explicit calculations. I don’t think we should depend on a single EV calculation (including by taking weighted averages of models or other EV calculations; sensitivity analysis is preferable). I’m interested in other quantitative approaches to decision-making as discussed in the OP.
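Here is the kind of thing I mean by a quantitative approach that doesn’t hinge on a single EV number: a crude sensitivity analysis over a made-up model, sweeping each uncertain input across the range different worldviews might find plausible and checking whether the sign of the conclusion is stable. The model and ranges are invented for illustration.

```python
# Minimal sensitivity-analysis sketch, as an alternative to a single EV
# number or a weighted average of models: sweep each uncertain input over
# plausible values and check whether the conclusion (EV > 0) is stable.
from itertools import product

def ev(p_success, value_if_success, cost):
    return p_success * value_if_success - cost

p_range = [0.001, 0.01, 0.05]
value_range = [10.0, 100.0, 1000.0]
cost_range = [1.0, 5.0]

results = [ev(p, v, c) for p, v, c in product(p_range, value_range, cost_range)]
share_positive = sum(r > 0 for r in results) / len(results)
print(f"EV positive in {share_positive:.0%} of parameter combinations "
      f"(range {min(results):.1f} to {max(results):.1f})")
```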
My major reservations about strong longtermism include:
I think the longer (causally or temporally) the causal chains we construct, the more fragile they are and the more likely they are to miss other important effects, including effects that may go in the opposite direction. Feedback closer to our target outcomes and to what we value terminally reduces this issue.
I think human extinction specifically could be a good thing (due to s-risks or otherwise spreading suffering through space) so interventions that would non-negligibly reduce extinction risk are not robustly good to me (not necessarily robustly negative, either, though). Of course, there are other longtermist interventions.
I am by default skeptical of the strength of causal effects without evidence, and I haven’t seen good evidence for the major causal claims I’ve come across, but I have also only started looking, and pretty passively.
Yeah, that’s a fair point, sorry for the bad argument.