Edit: I wrote this comment hastily when I didn’t have much time today, so it may not be clear or concise enough. I may return to clean it up later, especially on request, so please let me know if any parts of this comment are hard to understand.
Thank you for writing this. I’ve tried having conversations during the last year to learn more about this, and not only do people not report who they’re deferring to, but when asked, they don’t answer. That’s not as bad as when someone answers with “I’ve just heard a lot of people saying this,” but even that doesn’t address the problem you mentioned: it might be only 10 people they’re getting their timelines from.
I don’t live in the Bay Area and I’m not as personally well-connected to professionals in relevant fields, so most of the conversations like this I’ve had or seen are online. I understand why some people might perceive online conversations, which take longer and where nuance can get lost, as too tedious and time-consuming for the value they provide. Yet the reason I ask is that I know I could dive deep into Metaculus forecasts or dozens of posts, but I don’t know where to start and I don’t want to waste time. Never mind declining to disclose a name or source of information, there are scarcely even answers like “I’m too busy to get into this right now” or a suggestion of a website others could check to figure it out for themselves.
Of course, the arguments here apply more widely too. Whilst I think AI timelines is a particularly worrying case, being unclear if/how you’re deferring is a generally poor way of communicating. Discussions about p(doom) are another case where I suspect we could benefit from being clearer about deference.
Launching a survey is technically a good first step, but its value may be lost if nobody else follows suit to engender better norms. I understand the feeling of urgency around the particular issue of AI timelines, but the general problem you’re getting at has, in my experience, been a common, persistent and major problem across all aspects of EA for years.
I remember following conversations like this back in 2018, when over a dozen people I talked to were saying that some threshold for AI capabilities would imminently be achieved. When I asked why they thought that, they said a lot of smart people they trust were saying it. I talked to a couple of those people, and they said a bunch of smart people they know were saying it, and that they had heard it from Demis Hassabis of DeepMind. I forget what the capability was, but Hassabis turned out to be right: it was achieved around a year later.
What stuck with me is how almost nobody could or would explain their reasoning. Maybe there is far more value than I appreciate in deference as implicit trust in individuals, groups or semi-transparent processes. Yet the reason Eliezer Yudkowsky, Ajeya Cotra or Hassabis are worth deferring to is that they have a process. At the least, more of the alignment community would need to understand those processes, instead of having faith in a few people who probably don’t want the rest of the community deferring to them that much. It appears the problem has only gotten worse.
Between those like you who feel the need to write a post like this and those like me who barely get answers when we ask questions, the problems here seem like they could be much, much worse than you’re thinking.
On timelines, other people I’ve most recently updated most on:
Matthew Barnett (I updated to slower timelines):
I think raw intelligence, while important, is not the primary factor that explains why humanity-as-a-species is much more powerful than chimpanzees-as-a-species. Notably, humans were once much less powerful, in our hunter-gatherer days, but over time, through the gradual process of accumulating technology, knowledge, and culture, humans now possess vast productive capacities that far outstrip our ancient powers.
...
There are strong pressures—including the principle of comparative advantage, diseconomies of scale, and gains from specialization—that incentivize making economic services narrow and modular, rather than general and all-encompassing. Illustratively, a large factory where each worker specializes in their particular role will be much more productive than a factory in which each worker is trained to be a generalist, even though no one understands any particular component of the production process very well.
What is true in human economics will apply to AI services as well. This implies we should expect something like Eric Drexler’s AI perspective, which emphasizes economic production across many agents who trade and produce narrow services, as opposed to monolithic agents that command and control.
And I updated to faster and sooner timelines from a combination of 1) noticing some potential quick improvements in AI capabilities and feeling like there could be more similar stuff in this direction, plus 2) having heard several people say they think AI is soon because (I’m inferring their “because” here) they think the innovation frontier is fructiferous (Eliezer, and conversation+tweets from Max). I am likely forgetting some people here.
While I read the Sequences in 2014-15, I did feel like the model for unfriendly AI made sense, but I was mostly deferring to Eliezer on it because I had noticed how much smarter he was than me on these things.
First of all, thank you for reporting who you’ve deferred to in different ways in specific terms. Second, thank you for putting in some extra effort to not only name this or that person but make your reasoning more transparent and legible.
I respect Matthew because when I read what he writes, I tend to agree with half of the points he makes and disagree with the other half. That’s what makes him interesting. There are barriers to posting on the EA Forum, some real and some only perceived, like the expectation of too high a burden of rigour, that lead people to post on social media or other forums instead when they can’t resist the urge to express a novel viewpoint that could advance progress in EA. Matthew is one of the people I think of when I wish more insightful people were willing to post on the EA Forum.
I don’t agree with all of what you’ve presented from Matthew here, or with everything you’ve said yourself. I might come back to specify which parts I agree and disagree with when I’ve got more time. Right now, though, I just want to positively reinforce your writing this comment, since it’s the kind of feedback from others I’d like to see more of in EA.