I agree with Ben Stewart’s response that this is not a helpful thing to say. You are making some very strange and unintuitive claims. I can’t imagine how you would persuade a reasonable, skeptical, well-informed person outside the EA/LessWrong (or adjacent) bubble that these are credible claims, let alone that they are true. (Even within the EA Forum bubble, it seems like significantly more people disagree with you than agree.)
To pick on just one aspect of this claim: it is my understanding that Yudkowsky has no meaningful technical proficiency with deep learning-based or deep reinforcement learning-based AI systems. In my understanding, Yudkowsky lacks the necessary skills and knowledge to perform the role of an entry-level AI capabilities researcher or engineer at any AI company capable of paying multi-million-dollar salaries. If there is evidence that shows my understanding is mistaken, I would like to see that evidence. Otherwise, I can only conclude that you are mistaken.
I think the claim that an endorsement is worth billions of dollars is also wrong, but it’s hard to disprove a claim about what would happen in the event of a strange and unlikely hypothetical. Yudkowsky, Soares, and MIRI have an outsized intellectual influence in the EA community (and obviously on LessWrong). There is some meaningful level of influence on the community of people working in the AI industry in the Bay Area, but it’s much less. Among the sort of people who could make decisions that would realize billions or tens of billions in value, namely the top-level executives at AI companies and investors, the influence seems pretty marginal. I would guess the overwhelming majority of investors either don’t know who Yudkowsky and Soares are or do but don’t care what their views are. Top-level executives do know who Yudkowsky is, but in every instance I’ve seen, they tend to be politely disdainful or dismissive toward his views on AGI and AI safety.
Anyway, this seems like a regrettably unproductive and unimportant tangent.
I think it could be a helpful response for people who are able to respond to signals of the type “someone who has demonstrably good forecasting skills, is an expert in the field, and has worked on this for a long time claims X” by at least re-evaluating whether their models make sense and are not missing some important considerations.
If someone is at least able to do that, they can, for example, ask a friendly AI, which will tell them, based on conservative estimates and reference classes, that the original claim is likely wrong. It will still miss important considerations, in the way a typical forecaster also would, so the results are underestimates.
I think that at this level of [some combination of inability to think clearly and motivated reasoning], when people are uninterested in, e.g., sanity-checking their thinking with AIs, it is not worth the time to correct them. People are wrong on the internet all the time.
(I think the debate was moderately useful: I made an update from this debate and the voting patterns, broadly in the direction of the EA Forum descending to the level of a random place on the internet where confused people talk about AI, and which is broadly not worth reading or engaging with. I’m no longer very active on the EA Forum, but I’ve made some update.)
This thread seems to have gone in an unhelpful direction.
Questioning motivations is a hard point to make well. I’m unwilling to endorse that they are never relevant, but it immediately becomes personal. Keeping the focus primarily on the level of the arguments themselves is an approach more likely to enlighten and less likely to lead to flamewars.
I’m not here to issue a moderation warning to anyone for the conversation ending up on the point of motivations. I do want to take my moderation hat off and suggest that people spend more time on the object level.
I will then put my moderation hat back on and say that this and Jan’s previous comment break norms. You can disagree with someone without being this insulting.
I agree the thread direction may be unhelpful, and flame wars are bad.
I disagree, though, about the merits of questioning motivations; I think it’s super important.
In the AI sphere, there are great theoretical arguments on all sides: good arguments for acceleration, caution, pausing, etc. We can discuss these ad nauseam, and I do think that’s useful. But I think motivations likely shape the history and current state of AI development more than unmotivated reasoning and rational thought. Money and power are strong motivators; EAs have sidelined them at their peril before. Although we cannot know people’s hearts, we can see and analyse what they have done and said in the past and what motivational pressures might affect them right now.
I also think it’s possible to have a somewhat object-level discussion about motivations.
I think this article on the history of modern AI outlines some of this well: https://substack.com/home/post/p-185759007
I might write more about this later...