Figuring out AGI timelines might be overrated compared to other AI forecasting questions (e.g. questions about takeoff speeds or AI takeover) because the latter are more neglected. However, it still seems likely to me that more people within EA should be thinking about AGI timelines, because so many personal career decisions are downstream of your beliefs about timelines.
Some things look much less appealing if you think AGI is less than 10 years away, such as building credentials and experience in work that is not AI safety-related, or spending time on community building projects aimed at high schoolers. A lot of other debates also hinge on timelines. For example, people disagree widely about the probability of AI causing an existential catastrophe, but my intuition is that the disagreement would shrink considerably if we conditioned on specific timelines (that is, people with a higher-than-average p(doom) mostly hold that view because they think timelines are shorter).
More timelines discourse would also be good for the community's reputation, because it is likely to convince others that AI x-risk is a massive problem. Non-EA folks I know who were previously unconcerned about AI x-risk became much more convinced after reading Holden's posts on AI forecasting and learning about Ajeya's bioanchors model, more so than when they simply read descriptions of the alignment problem. More discussion of timelines would also signal to outsiders that we take the issue seriously.
It feels like people whose timelines are similar to the community median (roughly 20 years away) would be more likely to agree with this than people with much shorter or longer timelines, because for the latter group it makes sense to put more effort into convincing the community of their position, or simply because they can defer less to the community when deciding what to do with their careers.
Lots of people in the community defer to others (especially Ajeya's bioanchors report) on timelines, but they should probably spend more time developing their own views and thinking through the implications of those views.