The ones who aimed at the distant future mostly failed. The longtermist label seems mostly unneeded and unhelpful, and I'm far from the first to think so.
Firstly, in my mind, you're trying to say that we shouldn't advertise longtermism because it hasn't worked in the past. Yet this is a claim about the tractability of the philosophy, not necessarily about the idea that future people matter.
Don't confuse the philosophy with the instrumentals: longtermism matters, but the implementation method is still up for debate.
But I don't view the effective altruist version of longtermism as particularly unique or unprecedented. I think the dismal record of (secular) longtermism speaks for itself.
Secondly, I think you're using the wrong outside view.
There is a problem with using historical precedents: you assume that the same conditions that existed in those other communities also exist in the EA community.
An example of this is HPMOR: its success would have looked wildly improbable if you had judged it against the average pre-existing Harry Potter fan fiction. The outside view breaks down because the underlying causal thinking is different.
As Nassim Nicholas Taleb would say, you're trying to predict a black swan: an event without precedent in the history of humanity.
What is it that makes longtermism different?
There is a fundamental difference in the EA community's understanding of the world's causal models. There is no outside view for longtermism, because its causal mechanisms are too different from any existing reference class.
To make a final analogy: predicting the running costs of an electric car from gasoline prices is useless, just as predicting the success of the longtermist movement from previous movements is useless.
(Good post, though, and an interesting investigation. I tend to agree that we should just say "holy shit, x-risk" instead.)
There is a fundamental difference in the EA community's understanding of the world's causal models. There is no outside view for longtermism, because its causal mechanisms are too different from any existing reference class.
What do you mean by this?
Essentially, that the epistemics of EA are better than in previous longtermist movements. EA's frameworks are a lot more advanced, with things such as thinking about the tractability of a problem, not Goodharting on a metric, forecasting calibration, RCTs, and so on: techniques that other movements didn't have.
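To make "forecasting calibration" concrete, here is a minimal sketch of the standard Brier-score check; the forecasts and outcomes below are made up purely for illustration:

```python
# Brier score: mean squared error between stated probabilities and outcomes.
# Lower is better; a well-calibrated forecaster's 70% predictions
# should come true about 70% of the time.

forecasts = [0.9, 0.7, 0.3, 0.8, 0.2]  # hypothetical stated probabilities
outcomes = [1, 1, 0, 1, 1]             # what actually happened (1 = true)

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0.0 is perfect; 0.25 matches always guessing 50%
```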
Whether or not AI risk is tractable is in doubt. Eliezer argued that it's likely not tractable, but that we should still invest in it. The longtermist arguments about the value of the far future suggest that even if there's only a 0.1% chance that AI risk is tractable, we should still fund it as the most important cause.
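As a toy version of that expected-value argument (all numbers invented for illustration, not taken from any actual estimate):

```python
# Toy expected-value comparison: a tiny probability of affecting an
# astronomically large future can still dominate a sure payoff today.

p_tractable = 0.001    # assumed 0.1% chance that AI risk work is tractable
future_value = 1e16    # assumed value at stake (e.g., number of future lives)
ev_ai_risk = p_tractable * future_value

ev_near_term = 1e6     # assumed value of a reliably effective near-term cause

print(f"EV of AI risk work:   {ev_ai_risk:.2e}")    # 1.00e+13
print(f"EV of near-term work: {ev_near_term:.2e}")  # 1.00e+06
```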
Related: Hero Licensing (the title of the first section is "Outperforming the outside view").
Thank you! I was looking for this one but couldn't find it.