This is directly answered in the post. Edit: Can you explain why you don't find what is said about this in the post satisfactory?
You do address the FTX comparison (by pointing out that it won't make funding dry up), that's fair. My bad.
But I do think you're making an accusation of some epistemic impropriety that seems very different from FTX: getting FTX wrong (by not predicting its collapse) was a catastrophe, and I don't think it's the same for AI timelines. Am I missing the point?
The point of the FTX comparison is that, in the wake of the FTX collapse, many people in EA were eager to reflect on the collapse and try to see if there were any lessons for EA. In the wake of the AI bubble popping, people in EA could either choose to reflect in a similar way, or they could choose not to. The two situations are analogous insofar as they are both financial collapses and both could lead to soul-searching. They are disanalogous insofar as the AI bubble popping won't affect EA funding and won't associate EA in the public's mind with financial crimes or a moral scandal.
It's possible that, in the wake of the AI bubble popping, nobody in EA will try to learn anything. I fear that possibility. The comparisons I made to Ray Kurzweil and Elon Musk show that it is entirely possible to avoid learning anything, even when you ought to. So, EA could go multiple different ways with this, and I'm just saying that what I hope will happen is the sort of reflection that happened post-FTX.
If the AI bubble popping wouldn't convince you that EA's focus on near-term AGI has been a mistake (or at least convince you to start seriously reflecting on whether it has been or not), what evidence would convince you?
I think it's also disanalogous in the sense that the EA community's belief in imminent AGI isn't predicated on the commercial success of various VC-funded companies, in the way that the EA community's belief in the inherent goodness and amazing epistemics of its community did rather assume that half its money wasn't coming from an EA-leadership-endorsed criminal who rationalized his gambling of other people's money in EA terms...
The AI bubble popping (which many EAs actually want to happen) is somewhat orthogonal to the imminent AGI hypothesis;[1] the internet carried on growing after a bunch of overpromisers who misspent their capital fell by the wayside.[2] I expect that (whilst not converging on superintelligence) the same will happen with chatbots and diffusion models, and there will be plenty of scope for models to be better fit to benchmarks or for researchers to talk bots into creepier responses over the coming years.
The Singularity not happening by 2027 might be a bit of a blow for people who attached great weight to that timeline, but a lot are cautious about doing that or have already given themselves probabilistic get-outs. I don't think it's going to happen in 2027 or ever, but if I thought differently, I'm not sure that 2027 actually being the year some companies failed to convince sovereign wealth funds they were close enough to AGI to deserve a trillion would, or even should, have that much impact.
I do agree with the wider point that it would be nice if EAs realized that many of their own donation preferences might be shaped at least as much by personal interests, and be as vulnerable to rhetorical tricks, as normies'; but I'm not sure that was the main takeaway from FTX.
[1] FWIW I hold similar views about it not being about to happen and about undue weight being placed on certain quasi-religious prophecies...
[2] There's perhaps also a lesson that the internet isn't that different from circa 2000, but certain aspects of it did keep getting better...
I think the EA community's belief in its own amazing "epistemics" ("epistemics" is a made-up word, by the way, which is ironic: the very word people in the EA community use to talk about the community's knowledge and critical thinking skills is itself highly questionable) is part of why people accept dubious or ridiculous arguments and evidence for imminent AGI. I think a lot of people believe the community wouldn't accept arguments or evidence that have clear holes, so if anyone points out what they see as clear holes in the arguments/evidence, they're most likely just wrong.
A lot of people believe, and want to continue believing, that the EA community is exceptional in some big ways. Examples of this I've heard include the beliefs that people in the EA community understand important methodological issues in social science better than social scientists do and caught on to these issues sooner, that they generally have better thinking skills than academics or experts, and that they are more likely to be right than academics or experts when the community disagrees with them.
I imagine people who think like this must be resistant to arguments along the lines of "the EA community has been making all kinds of ridiculous, obvious errors around this topic and if you just begin to scratch the surface, you start digging up more and more things that just don't make a lick of sense". On one hand, people may be less receptive to messages that are blunt and confronting like that. On the other hand, I don't like participating in a dynamic where people nod politely or tiptoe around things when the situation is this dire.
The vast majority of people in EA seem stable and like they're doing okay psychologically, but I catch some glimpses of people who seem to be losing touch with reality in a concerning way. So, the harm is not just philanthropists making some bad calls and wasting a lot of money that could have gone to help the global poor (or do something else more useful); it's also that some people seem to be getting harmed by these ideas in a more direct way.
The LessWrong community is a complete mess in that regard: there are the scary cults, people experiencing psychosis, a lot of paranoia and conspiratorial thinking (toward each other, toward EA, toward Silicon Valley, toward the government, toward liberalism, science, and journalism), and a lot of despair. One post that stuck out to me on LessWrong was someone who expressed their sense of hopelessness because, as they saw it, even if all the safety and alignment problems around AGI could be solved, that would still be bad, because it would make the world a weird and unsettling place where humans' role would be unclear. The view of Eliezer Yudkowsky and other transhumanists going back to the 1990s (before Yudkowsky started worrying about the alignment stuff) was that inventing AGI would be the best thing that ever happened, and Yudkowsky wrote about how it gave him hope despite all the suffering and injustice in the world. Yudkowsky is depressed (ostensibly) because he thinks there's a 99.5% chance AGI would cause human extinction. It's worrying and sad to see people also feel despair about the AGI scenario playing out in the way that Yudkowsky was hopeful about all those years ago.
The EA community has done a much better job than LessWrong at staying grounded and stable, but I still see signs that a few people here and there are depressed, panicked, hopeless, vengeful toward those they see as enemies or "defectors", and sometimes come across as detached from reality in an eerie, unsettling way. It's horrible to see the human cost of bad ideas that make no sense. Probably the people who are worst affected have other psychological risk factors (that typically seems to be the case in these sorts of situations), but that doesn't mean the ideas don't make things worse.
You make a good point that practically everyone has probabilistic get-outs. If you assign a probability of anywhere from 10% to 90% to AGI by 2033 (or whatever), then when 2034 rolls around and there's still no AGI, you can plausibly say, retrospectively, that you still think you assigned the right probability. (Especially if your probability was more like 60% than 90%.)
This sort of thing makes perfect sense with something rigorous and empirical like FiveThirtyEight's election forecast models. The difference is that FiveThirtyEight can do a post-mortem and scrutinize the model and the polls, and check things like how much the polls missed the actual vote margins. FiveThirtyEight can open source its model code, list the polls it uses as inputs, publicly describe its methodology, and invite outside scrutiny. That's where FiveThirtyEight's credibility comes from. (Or came from; sadly, it's no longer around.)
In the case of AGI forecasts, there are so few opportunities to test the forecasting "model" (i.e. a person's gut intuition). One of the few pieces of external confirmation/disconfirmation that could mean something, i.e. whether AGI happens by the year predicted or not, is easily brushed aside. So, it's not that the probabilistic get-out is inherently illegitimate; it's that these views are so unempirical in the first place, and this move conveniently avoids one of the few ways these views could be empirically tested.
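To make that contrast concrete, here's a minimal sketch in Python (my own illustration, with made-up numbers; the Brier score, (forecast − outcome)², is just one standard way to score probability forecasts): a single resolved question like "AGI by 2033" only weakly separates a 60% forecaster from a 90% one, whereas a long track record of resolved questions, which election forecasters have and AGI forecasters don't, separates calibrated from overconfident forecasters clearly.

```python
def brier(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome (lower is better)."""
    return (forecast - outcome) ** 2

# One event, e.g. "AGI by 2033" (hypothetical), resolves "no" (outcome = 0).
for p in (0.1, 0.6, 0.9):
    print(f"forecast {p:.0%}: Brier score {brier(p, 0):.2f}")
# -> 0.01, 0.36, 0.81: the 90% forecaster scores worst, but one event
#    says little about anyone's calibration.

# An election forecaster is scored across many resolved races, which is what
# makes a post-mortem informative. Made-up track records for illustration:
results       = [0, 1, 0, 0, 1, 0, 1, 1, 0, 0]                # resolved outcomes
calibrated    = [0.2, 0.8, 0.3, 0.1, 0.9, 0.2, 0.7, 0.8, 0.3, 0.2]
overconfident = [0.9] * len(results)                           # says 90% to everything
for name, forecasts in [("calibrated", calibrated), ("overconfident", overconfident)]:
    avg = sum(brier(p, o) for p, o in zip(forecasts, results)) / len(results)
    print(f"{name}: average Brier score {avg:.2f}")
# -> roughly 0.05 vs 0.49: with a track record, the difference is unmistakable.
```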
The reason I think the AI bubble popping should surprise people (enough, hopefully, to motivate them to revisit their overall views on a deep level) is that the AI bubble popping seems incompatible with the story many people in EA are telling about AI capabilities. It's hard to square the hype over AI capabilities with the reality that there are hardly any applications of generative AI that are profitable for the end customer, that it doesn't seem to help much with workers' productivity in most cases (coding might be an important exception, although still less so than, e.g., the hype around the METR time horizons would suggest), and that not many people find chatbots useful enough to pay for a premium subscription. It seems hard to square that reality with AGI by 2033. Of course, people can always just kick the can down the road to AGI by 2038 or whatever. But if the bubble pops in 2026 or 2027 or 2028, I don't see how people could keep thinking 2033 is the right year for AGI and not push this back some.
I agree that most people who think there's a realistic chance of AGI killing us all before 2035 will probably just feel jubilant and relieved if an AI bubble pops. That's a bit worrying to me too, since re-examining their views on a deep level would mean letting go of that good feeling. (Or, it just occurred to me, maybe they would like the taste of not worrying about dying and would welcome the deeper reflection. I don't know. I think it's hard to predict how people will think or feel about something like this.)