Objection: For any action, it’s usually much harder to get evidence about its long-term effects than its near-term effects. So, given that we are using evidence to improve the world, maybe we should focus on the effects we can measure. It could be much easier to make a dent in near-term problems because we have much more evidence about them.
Response: It’s true that we don’t have much evidence about the long-term effects of our actions. But if we think those effects are morally relevant, we cannot ignore them (this is complex cluelessness, not simple). Rather, we should invest resources in getting more evidence about those effects.
This is a claim that researching these effects will tell us enough to change my mind about what to do, but I’m skeptical of this by default. The reason for my skepticism is in your next sentence:
Unfortunately, this evidence isn’t going to be through randomised controlled trials (RCTs) or anything as rigorous as that.
Of course, sometimes researching effects looks better to me ahead of time, if I think the evidence I’ll get out of it will be of sufficient quality. I’m also okay with research that’s less rigorous than RCTs, but I do still have standards for rigour that long-term-focused interventions don’t meet.
In the case of animal welfare specifically, my expectation is that cultured and plant-based animal products will mostly replace real animal products soon-ish (in the next 200 years), but I’m not confident in any specific attempts to speed this up, while I am confident about the value of, say, corporate campaigns. I’m separately skeptical of the importance of the effects on things like momentum and complacency; I am not assuming they cancel.
In the case of development, there are economic growth projections, too, and I might not be confident about specific attempts to speed this up.
In both cases, there might be limited value in further studying the issues: it will take time to gather convincing data that could change your mind about what to do; the probability that this even happens is low if the evidence won’t be rigorous enough; and by the time you have convincing data, all the low-hanging fruit could be gone.
Of course, I’m not actually confident about this. I’d want to review what evidence we have now first, try to come up with causal models for their effects, and think more about what it would take to change my mind if I’m still skeptical of the value of these other interventions. But after doing so, I might remain skeptical of the value of further research and these other interventions.
Ah ok. Can you say a bit more about why long-term-focused interventions don’t meet your standards for rigour? I guess you take speculation about long-term effects as Bayesian evidence, but only as extremely weak evidence compared to evidence about near-term effects. Is that right?
I think I basically completely discount speculation about long-term effects unless it comes with an effect size estimate justified by observation, and I haven’t seen any (although they might still be out there).
On the other hand, we can actually observe short-term effects (from similar interventions or the same intervention in a different context).
I think I’m particularly skeptical of the benefits of technical research for longtermist interventions, e.g. technical AI safety research, since there’s little precedent or feedback. For example, “How much closer does this paper bring us to solving AI safety?” My impression is that it’s basically just speculation that the research does anything useful at all in expectation. I’ve been meaning to get through this, though, and then there’s a separate question about the quality of research, especially research that doesn’t get published in journals (although some of it is).
There are also the reference class problem and issues with generalizability, and we can’t know how bad these are for longtermist work, since we don’t have good feedback.