I agree there are diminishing returns; I think Ajeya’s report has done much of what needed to be done. I’m less sure that timelines are decision-irrelevant. Maybe they are for Miles, but they seem quite relevant for cause prioritisation, career planning between causes, and prioritising policies. I also think better timeline-related arguments could on net improve, not worsen, our reputation, because improved substance and polish will actually convince some people.
On the other hand, one argument I’d add is that researching timelines could shorten them, by motivating people to build AI they now expect to see realised in their lifetimes; so timelines research can do harm.
On net, I guess I weakly agree: we don’t seem to be under-investing in timelines research on the current margin. That said, AI forecasting more broadly, which considers when particular AI capabilities might arise, can be more useful than examining timelines alone, and seems quite useful overall.
+1. My intuition was that forecasts on more granular capabilities would arise naturally from trying to further improve overall timeline estimates. E.g., this is my impression of what a lot of the AI-timeline-related forecasts on Metaculus look like.
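To make that concrete, here’s a minimal sketch of the roll-up step, with entirely made-up milestones and distributions (the milestone names, the medians, and the assumption that transformative AI requires all of them are mine, purely for illustration, not anyone’s actual forecast):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

# Hypothetical milestone forecasts, each a lognormal over "years from now".
# In practice these might come from Metaculus-style community distributions.
milestones = {
    "long-horizon agency": (np.log(10), 0.6),  # median ~10 years
    "robust robotics":     (np.log(15), 0.7),  # median ~15 years
    "automated ML R&D":    (np.log(12), 0.8),  # median ~12 years
}

# Sample an arrival year for each milestone, then take the latest one,
# assuming transformative AI requires all milestones (a strong assumption),
# and sampling milestones independently (correlations are ignored here).
samples = np.column_stack(
    [rng.lognormal(mu, sigma, N) for mu, sigma in milestones.values()]
)
overall = samples.max(axis=1)

for q in (10, 50, 90):
    print(f"{q}th percentile: {np.percentile(overall, q):.1f} years")
```

The max-over-milestones step is the strong modelling choice here; a serious aggregation would also need to model correlations between milestones rather than sampling them independently.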