I was very surprised by this paragraph, especially in context, and especially because of the use of the term ‘conservative’: ‘However, I also have an intuitive preference (which is related to the “burden of proof” analyses given previously) to err on the conservative side when making estimates like this. Overall, my best guesses about transformative AI timelines are similar to those of Bio Anchors.’ I would have thought that the conservative assumption would be shorter timelines (since that leaves less time to prepare). If I remember correctly, Toby Ord discusses something similar in the chapter on AI risk in ‘The Precipice’: at one of the AI safety conferences (FLI Puerto Rico 2015?), some AI researchers used ‘conservative’ to mean ‘we shouldn’t make wild predictions about AI’, while others used it to mean ‘we should be really risk-averse, so we should assume it could happen soon’. I would have expected the second usage here.
There are contexts in which I’d want to use the terms as you do, but I think it is often reasonable to associate “conservatism” with being more hesitant to depart from conventional wisdom, the status quo, etc. In general, I have always been sympathetic to the idea that the burden of proof/argumentation is on those who are trying to raise the priority of some particular issue or problem. I think there are good reasons to think this works better (and is more realistic and conducive to clear communication) than putting the burden of proof on people to ignore some novel issue / continue what they were doing.