For these reasons, I do not believe the EA movement should focus too heavily or too exclusively on LLMs or similar models as candidates for an AGI precursor, or concentrate too narrowly on short time horizons. We should pursue a diverse range of strategies for mitigating AI risk and devote significant resources to longer time horizons.
Do you think that most strategies that are potentially useful given short timelines remain so as timelines lengthen? (i.e. is the effectiveness of these strategies largely timeline-independent?)
Which assumption carries the larger penalty if incorrect: anticipating and planning for shorter timelines and being wrong, or anticipating and planning for longer timelines and being wrong?