I think the EA community's belief in its own amazing "epistemics" ("epistemics" is a made-up word, by the way, which is ironic: the very word people in the EA community use to talk about the community's knowledge and critical thinking skills is itself highly questionable) is part of why people accept dubious or ridiculous arguments and evidence for imminent AGI. I think a lot of people believe the community wouldn't accept arguments or evidence with clear holes, so if anyone points out what they see as clear holes in the arguments/evidence, they're most likely just wrong.
A lot of people believe, and want to continue believing, that the EA community is exceptional in some big ways. Examples I've heard include the belief that people in the EA community understand important methodological issues in social science better than social scientists do and caught onto these issues sooner, that they generally have better thinking skills than academics or experts, and that they are more likely to be right than the experts when the community disagrees with them.
I imagine people who think like this must be resistant to arguments along the lines of "the EA community has been making all kinds of ridiculous, obvious errors around this topic, and if you just begin to scratch the surface, you start digging up more and more things that just don't make a lick of sense". On one hand, people may be less receptive to messages that are as blunt and confronting as that. On the other hand, I don't like participating in a dynamic where people nod politely or tiptoe around things when the situation is this dire.
The vast majority of people in EA seem stable and like they're doing okay psychologically, but I catch glimpses of some people who seem to be losing touch with reality in a concerning way. So the harm is not just philanthropists making some bad calls and wasting a lot of money that could have gone to help the global poor (or done something else more useful); it's also that some people seem to be getting harmed by these ideas in a more direct way.
The LessWrong community is a complete mess in that regard: there are the scary cults, people experiencing psychosis, a lot of paranoia and conspiratorial thinking (toward each other, toward EA, toward Silicon Valley, toward the government, toward liberalism, science, and journalism), and a lot of despair. One post that stuck out to me on LessWrong was by someone who expressed a sense of hopelessness because, as they saw it, even if all the safety and alignment problems around AGI could be solved, that would still be bad, since it would make the world a weird and unsettling place where humans' role would be unclear. The view of Eliezer Yudkowsky and other transhumanists going back to the 1990s, before Yudkowsky started worrying about alignment, was that inventing AGI would be the best thing that ever happened, and Yudkowsky wrote about how it gave him hope despite all the suffering and injustice in the world. Yudkowsky is now (ostensibly) depressed because he thinks there's a 99.5% chance AGI would cause human extinction. It's worrying and sad to see people also feel despair about the AGI scenario playing out in the way Yudkowsky was hopeful about all those years ago.
The EA community has done a much better job than LessWrong at staying grounded and stable, but I still see signs that a few people here and there are depressed, panicked, hopeless, vengeful toward those they see as enemies or "defectors", and sometimes come across as detached from reality in an eerie, unsettling way. It's horrible to see the human cost of bad ideas that make no sense. Probably the people who are worst affected have other psychological risk factors (that typically seems to be the case in these sorts of situations), but that doesn't mean the ideas don't make things worse.
You make a good point that practically everyone has probabilistic get-outs. If you assign anywhere from a 10% to a 90% probability to AGI by 2033 (or whatever), then when 2034 rolls around and there's still no AGI, you can plausibly say, in retrospect, that you still think you assigned the right probability to AGI (especially if your probability was more like 60% than 90%).
This sort of thing makes perfect sense with something rigorous and empirical like FiveThirtyEight's election forecast models. The difference is that FiveThirtyEight can do a post-mortem, scrutinize the model and the polls, and check things like how much the polls missed the actual vote margins. FiveThirtyEight can open source its model code, list the polls it uses as inputs, publicly describe its methodology, and invite outside scrutiny. That's where FiveThirtyEight's credibility comes from. (Or came from; sadly, not anymore.)
In the case of AGI forecasts, there are so few opportunities to test the forecasting "model" (i.e., a person's gut intuition). One of the few pieces of external confirmation/disconfirmation that could mean something, namely whether AGI happens by the predicted year or not, is easily brushed aside. So it's not that the probabilistic get-out is inherently illegitimate; it's that these views are so unempirical in the first place, and this move conveniently avoids one of the few ways they could be empirically tested.
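To make that contrast concrete, here is a minimal sketch of how a repeatable forecasting track record can be scored after the fact (e.g., with a Brier score) in a way a one-off AGI-by-2033 prediction can't. All the probabilities and outcomes below are made up purely for illustration.

```python
# Minimal sketch: why many resolved forecasts are testable and a one-off forecast isn't.
# All probabilities and outcomes below are hypothetical, purely for illustration.

def brier_score(forecasts):
    """Mean squared error between predicted probabilities and binary outcomes (0 or 1).
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# An election-style forecaster makes many calls that all resolve, so you can
# score the whole track record and compare it against a naive 50% baseline.
election_style = [(0.80, 1), (0.65, 1), (0.30, 0), (0.90, 1), (0.55, 0), (0.20, 0)]
print("Track record of six resolved forecasts:", round(brier_score(election_style), 3))

# A single AGI-by-2033 forecast resolves exactly once. The score of one forecast
# mostly reflects one roll of the dice, not whether the judgment behind the
# probability was any good, and that is before the deadline gets pushed back.
print("One 60% forecast, no AGI by the deadline:", brier_score([(0.60, 0)]))
print("One 90% forecast, no AGI by the deadline:", brier_score([(0.90, 0)]))
```

The point isn't that the scoring math is hard; it's that the election forecaster gets this kind of feedback every cycle, while the AGI forecaster gets at most one noisy data point per deadline, and the probabilistic framing makes even that one easy to wave away.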
The reason I think the AI bubble popping should surprise people (enough, hopefully, to motivate them to revisit their overall views on a deep level) is that the bubble popping seems incompatible with the story many people in EA are telling about AI capabilities. It's hard to square the hype over AI capabilities with the reality that there are hardly any applications of generative AI that are profitable for the end customer, that it doesn't seem to help much with workers' productivity in most cases (coding might be an important exception, although even there less so than, e.g., the hype around the METR time horizons would suggest), and that not many people find chatbots useful enough to pay for a premium subscription. It seems hard to square that reality with AGI by 2033. Of course, people can always just kick the can down the road to AGI by 2038 or whatever. But if the bubble pops in 2026, 2027, or 2028, I don't see how people could keep thinking 2033 is the right year for AGI and not push that date back some.
I agree that most people who think there's a realistic chance of AGI killing us all before 2035 will probably just feel jubilant and relieved if an AI bubble pops. That's a bit worrying to me too, since re-examining their views on a deep level would mean letting go of that good feeling. (Or, it just occurred to me, maybe they'd like the taste of not worrying about dying and would welcome the deeper reflection. I don't know. It's hard to predict how people will think or feel about something like this.)