Thanks Benjamin, I upvoted. Some things to clarify on my end:
I think the article as a whole was good, or I would have said so!
I did and do think a) that Anthropic Person (AP) was probably right, b) that their attitude was nevertheless irresponsible and epistemically poor, and c) that I made it clear at the time that, despite AP probably being right, the claim needed more justification (the last exchange I have a record of was me reopening the comment thread you’d resolved, to say that I really did think this was important, and you re-closing it without further comment).
My concern about poor epistemics was directed less at you (I presume you were working under time constraints, in an area where you didn’t have specialist knowledge) than at AP, who had no such excuse.
I would have had no factual problem with the claim ‘many experts believe’. The phrasing I challenged was ‘many experts now believe’ (emphasis mine), and the grounds I gave were that ‘now’ implies positive change over time, i.e. that the proportion of experts who believe this is increasing. That claim doesn’t seem anything like as self-evident as a comment about the POTUS.
Fwiw, I think the rate of change (and possibly even the second derivative) of expert beliefs on such a speculative and rapidly evolving subject is much more important than the absolute number, or even proportion, of experts with the relevant belief, especially since it’s very hard to define who even qualifies as an expert in such a field (per my comment, most of the staff at Google and the other big AI companies could arguably qualify).
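To make that concrete, here’s a minimal sketch in Python, with entirely invented numbers, of how the level, the rate of change, and the second derivative of expert opinion can tell three different stories; ‘many experts now believe’ is really a claim about the middle one:

```python
# Invented numbers, purely illustrative: share of surveyed experts
# endorsing the claim in each (hypothetical) survey wave.
share_by_wave = {2016: 0.30, 2018: 0.33, 2020: 0.34, 2022: 0.345}
shares = list(share_by_wave.values())

# First differences: the 'now' in "many experts now believe" is
# implicitly a claim that these are positive.
first_diff = [round(b - a, 3) for a, b in zip(shares, shares[1:])]

# Second differences: is the shift itself speeding up or petering out?
second_diff = [round(b - a, 3) for a, b in zip(first_diff, first_diff[1:])]

print(first_diff)   # [0.03, 0.01, 0.005]  -> belief is still spreading...
print(second_diff)  # [-0.02, -0.005]      -> ...but the spread is slowing
```

On these made-up numbers the absolute proportion is rising, so ‘now believe’ would be technically defensible, yet the second differences suggest the trend is petering out, which is exactly the kind of distinction a bare headcount of experts hides.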
If you’d mentioned that other experts you’d spoken to had a sense that sentiment was changing (and cited those conversations as a pseudocitation), I would have been substantially less concerned by the point. I do still think the question is important enough to merit proper research (though such research would probably have been beyond the scope of your 80k piece), and important enough that we shouldn’t imply stronger conclusions than the evidence we have merits.
I am definitely worried about selection bias: the whole concept of ‘AI safety researcher’ screams it. (Not that I think you only spoke to such people, but any survey which includes them and excludes ‘AI-safety-concern-rejecting researchers/developers’ seems highly likely to get prejudicial results; see the toy simulation below.)
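A toy simulation of that worry (all numbers invented, including the group sizes and belief rates, so this only illustrates the mechanism, not the actual field):

```python
import random

random.seed(0)

# Hypothetical population of "experts": a small self-selected safety
# subfield (who mostly hold the belief, partly because holding it is why
# they joined) and a much larger pool of other researchers/developers.
population = (
    [("safety", random.random() < 0.90) for _ in range(1_000)]
    + [("other", random.random() < 0.35) for _ in range(9_000)]
)

def share_believing(sample):
    return sum(believes for _, believes in sample) / len(sample)

everyone = share_believing(population)
safety_only = share_believing([p for p in population if p[0] == "safety"])

print(f"whole population:   {everyone:.0%}")     # ~40%
print(f"safety-only sample: {safety_only:.0%}")  # ~90%: selection bias
```

Any sampling scheme that overweights the self-selected subgroup lands somewhere between those two figures, which is why a survey’s inclusion criteria matter as much as its sample size.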
You wrote: “But in general I think that the best way, as an outsider, to understand what prevailing opinions are in a field, is to talk to people in that field – rather than relying on your own ability to figure out trends across many papers, many of which are difficult to evaluate, many of which may not replicate. I also think that asking about what others in the field think, rather than what the people you’re talking to think, is a decent (if imperfect) way of dealing with that bias.”
I don’t understand what counterclaim you’re making here. I strongly agree that the opinions of experts are very important, hence my entire concern about this exchange!
Thanks for this! Looks like we actually roughly agree overall :)