There is a fundamental disagreement within the EA community about the world's causal models. There is no outside view for longtermism, because its causal mechanisms are too different from those of any existing reference class.
What do you mean by this?
Essentially, that EA's epistemics are better than those of previous longtermist movements. EA's frameworks are considerably more advanced, with techniques such as assessing the tractability of a problem, avoiding Goodharting on a metric, forecasting calibration, RCTs, and so on, which earlier movements did not have.
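As a concrete illustration of one of these techniques, forecasting calibration, here is a minimal sketch of how a forecaster's track record can be scored with a Brier score. The forecasts and outcomes are invented for illustration and are not from the original discussion:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.

    Lower is better: a perfectly calibrated, perfectly sharp forecaster
    scores 0.0, while always guessing 0.5 scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said "70% likely" four times, with three of the four
# events actually occurring, is reasonably well calibrated:
print(brier_score([0.7, 0.7, 0.7, 0.7], [1, 1, 1, 0]))  # 0.19
```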
Whether or not AI risk is tractable is in doubt. Eliezer has argued that it is likely not tractable, but that we should still invest in it. The longtermist arguments about the value of the far future suggest that even if there is only a 0.1% chance that AI risk is tractable, we should still fund it as the most important cause.
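A minimal sketch of the expected-value arithmetic behind this claim, where V stands in for the (unspecified) value of the far future and V_alt for the value of the best alternative cause; the symbols are mine, not from the original:

```latex
% p = probability that AI risk work is tractable; V = value of the far future.
% Even at p = 0.001, the expected value p * V dominates an alternative cause
% of value V_alt whenever V > 1000 * V_alt, which longtermists argue holds
% because V is astronomically large.
\mathbb{E}[\text{value}] = p \cdot V = 0.001\,V > V_{\text{alt}}
\quad \Longleftrightarrow \quad V > 1000\, V_{\text{alt}}
```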