I was short on time today and hurriedly wrote my own comment reply to Sam here before I forgot my point, so it's not concise; let me know if any of it is unclear:
https://forum.effectivealtruism.org/posts/FtggfJ2oxNSN8Niix/when-reporting-ai-timelines-be-clear-who-you-re-not?commentId=M5GucobHBPKyF53sa
Your comment better describes the kind of problem I was trying to get at, so I'll repost an excerpt of my testimony that dovetails with what you're saying:
I remember, when I was following conversations like this back in 2018, that there was some AI capability threshold that over a dozen people I talked to said would imminently be achieved. When I asked why they thought that, they said a lot of smart people they trust were saying it. I talked to a couple of those people, and they said a bunch of smart people they knew were saying it, having heard it from Demis Hassabis of DeepMind. I forget what the specific capability was, but Hassabis turned out to be right, because it was achieved around a year later.
What stuck with me is how almost nobody could or would explain their reasoning. Maybe there is far more value than I assume in deference as implicit trust in individuals, groups, or semi-transparent processes. Yet the reason Eliezer Yudkowsky, Ajeya Cotra, or Hassabis are worth deferring to at all is that they each have a process. At the least, more of the alignment community would need to understand those processes, instead of putting faith in a few people who probably don't want the rest of the community deferring to them that much. It appears the problem has only gotten worse.