(I don’t have a stance on how good past (futurists’) predictions have been.)
I think the track record of past predictions should update us only trivially on how to think about serious, careful analyses like Bioanchors, or about other reasons for belief about the future, like scaling laws. Past predictions being pretty terrible seems consistent with my being able to discern whether a prediction is reasonable, at least when I (seem to) have lots of relevant knowledge/context. If others think we should update substantially based on past futurists, I'd be excited to learn why.
A section in the blogpost discusses Dan Luu's evaluations compared to Arb's, and why the author thinks EA/LTist work is closer to that of past futurists than to, e.g., superforecasters. I was originally planning to quote it, but (a) it's very long and (b) I couldn't quickly come up with a good summary.
(I haven’t read the post.)