presupposes that EAs are wrong, or at least, merely luckily right
Right, to be clear, I’m far from certain that the stereotypical “EA view” is right here.
I guess what I was really saying was “conditional on a sociological explanation being appropriate, I don’t think it’s as LW-driven as Yarrow thinks”, although LW is undoubtedly important.
Sure, that makes a lot of sense! I was mostly just using your comment to riff on a related concept.
I think reality is often complicated and confusing, and it’s hard to separate contingent from inevitable stories about why people believe what they believe. But I think the correct view is that EAs’ beliefs about AGI probability and risk (within an order of magnitude or so) are mostly not contingent (as of 2025), even if those beliefs turn out to be ultimately wrong.
The Google ads example was the best one I could think of to illustrate this. I’m far from certain that Google’s decision to rely on ads was actually the best source of long-term revenue (never mind being morally good, lol). But given the internet as we understand it, it still seemed implausible that Google’s turn to ads was counterfactually dependent on its specific acquisitions.
Similarly, even if EAs had ignored AI for some reason and had never interacted with LW or Bostrom, it’s implausible that, as of 2025, people who are concerned with ambitious, large-scale altruistic impact (and who have the other epistemic, cultural, and maybe demographic properties characteristic of the movement) would not think of AI as a big deal. AI is just a big thing in the world that’s growing fast. Anybody capable of reading graphs can see that.
That said, specific micro-level beliefs (and maybe macro-level ones) within EA and AI risk might be different without influence from either LW or the Oxford crowd. For example, there might be a stronger accelerationist arm. Alternatively, people might be queasier about the closeness to the major AI companies, and there might be a stronger and better-funded contingent of folks interested in public messaging on pausing or stopping AI. And in general, if the movement hadn’t “woken up” to AI concerns at all pre-ChatGPT, I think we’d be in a more confused spot.
How many angels can dance on the head of a pin? An infinite number, because angels have no spatial extension? Or maybe, if we assume angels have a diameter of ~1 nanometre plus an additional ~1 nanometre of clearance for dancing, we can come up with a ballpark figure? Or, wait, are angels closer to human-sized? When bugs die, do they turn into angels? What about bacteria? Can bacteria dance? Are angels beings who were formerly mortal, or were they “born” angels?[1]
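For what it’s worth, a back-of-the-envelope sketch of that ballpark figure, under those made-up assumptions plus an assumed pinhead diameter of roughly 2 mm, might look something like this:

```python
import math

# Every number here is, of course, made up for the sake of the joke.
pinhead_diameter_m = 2e-3  # assumed pinhead diameter: ~2 mm
angel_footprint_m = 2e-9   # ~1 nm angel plus ~1 nm of dancing clearance

pinhead_area = math.pi * (pinhead_diameter_m / 2) ** 2  # area of the pinhead
area_per_angel = angel_footprint_m ** 2                 # square packing, for simplicity

print(f"~{pinhead_area / area_per_angel:.1e} dancing angels")
# Prints roughly 8e11, i.e. on the order of a trillion angels.
```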
AI is just a big thing in the world that’s growing fast. Anybody capable of reading graphs can see that.
Well, some of the graphs are just made up, like those in “Situational Awareness”, and some are woefully misinterpreted as being about AGI when they’re clearly not, like the famous METR time horizon graph.[2] I imagine that a non-trivial amount of EA misjudgment around AGI results from a failure to correctly read and interpret graphs.
And, of course, when people like titotal examine the math behind some of these graphs, like those in AI 2027, it is sometimes found to be riddled with major mistakes.
What I said elsewhere about AGI discourse in general is true about graphs in particular: the scientifically defensible claims are generally quite narrow, caveated, and conservative. The claims that are broad, unqualified, and bold are generally not scientifically defensible. People at METR themselves caveat the time horizons graph and note its narrow scope (I cited examples of this elsewhere in the comments on this post). Conversely, graphs that attempt to make a broad, unqualified, bold claim about AGI tend to be complete nonsense.
Out of curiosity, roughly what probability would you assign to there being an AI financial bubble that pops sometime within the next five years or so? If there is an AI bubble and if it popped, how would that affect your beliefs around near-term AGI?
How is correctness physically instantiated in space and time and how does it physically cause physical events in the world, such as speaking, writing, brain activity, and so on? Is this an important question to ask in this context? Do we need to get into this?
You can take an epistemic practice in EA such as “thinking that Leopold Aschenbrenner’s graphs are correct” and ask about the historical origin of that practice without making a judgement about whether the practice is good or bad, right or wrong. You can ask the question in a form like, “How did people in EA come to accept graphs like those in ‘Situational Awareness’ as evidence?” If you want to frame it positively, you could ask the question as something like, “How did people in EA learn to accept graphs like these as evidence?” If you want to frame it negatively, you could ask, “How did people in EA not learn not to accept graphs like these as evidence?” And of course you can frame it neutrally.
The historical explanation is a separate question from the evaluation of correctness/incorrectness and the two don’t conflict with each other. By analogy, you can ask, “How did Laverne come to believe in evolution?” And you could answer, “Because it’s the correct view,” which would be right, in a sense, if a bit obtuse, or you could answer, “Because she learned about evolution in her biology classes in high school and college”, which would also be right, and which would more directly answer the question. So, a historical explanation does not necessarily imply that a view is wrong. Maybe in some contexts it insinuates it, but both kinds of answers can be true.
But this whole diversion has been unnecessary.

Do you know of a source that formally makes the argument that the METR graph is about AGI? I am trying to pin down the series of logical steps that people are using to get from that graph to AGI. I would like to spell out why I think this inference is wrong, but first it would be helpful to see someone spell out the inference they’re making.