I wish the EA community would disband and disappear, and I expect it to cause enormous harm in the future.
I would be curious to hear you expand more on this:
What is your confidence level? (e.g. is it similar to the confidence you had in "very few people travel for EAG", or is it something like 90%?)
What scenarios are you worried about? E.g. is it more about EA hastening the singularity by continuing to help research labs, or about EA making a government-caused slowdown less likely and less effective?
What is your main theory of change at the moment with rationalist community building, and how is it different from EA community building? Is it mostly focused on "slowing down AI progress, pivotal acts, intelligence enhancement"?
What is your confidence level? (e.g. is it similar to the confidence you had in "very few people travel for EAG", or is it something like 90%?)
Extremely unconfident, both in overall probability and in robustness. It's the kind of belief where I can easily imagine someone swaying me one way or another in a short period of time, and the kind of belief I've gone back and forth on a lot over the years.
On the question of confidence, I feel confused about how to talk about probabilities of expected value. My guess is EA is mostly irrelevant for the things that I care about in ~50% of worlds, is bad in like 30% of worlds and good in like 20% of worlds, but the exact operationalization here is quite messy. Also in the median world in which EA is bad, it seems likely to me that EA causes more harm than it makes up for in the median world where it is good.
What scenarios are you worried about? Hastening the singularity by continuing to help research labs, or by making government intervention less likely and less effective?
Those are two relatively concrete things I am worried about. More broadly, I am worried about EA generally having a deceptive and sanity-reducing relationship to the world, and about it being in some sense a honeypot that lots of the world's smartest and most moral people end up getting stuck in, where they lend their credibility to bad actors (SBF and Sam Altman being the obvious examples here, and Anthropic seems like the one I am betting will be looked back on similarly).
What is your main theory of change at the moment with rationalist community building, and how is it different from EA community building?
My key motivation is mostly "make key decision makers better informed and help smart and moral people understand the state of the world better".
I think an attitude that promotes truth-seeking and informedness above other things is more conducive to that than EA stuff. I also don't think I would describe most of my work straightforwardly as "rationalist community building". LessWrong is its own thing that's quite different from a lot of the rationality community, and is trying to do something relatively specific.
OK, your initial message makes more sense given your response here, although I still can't quite connect why MATS and Manifest would be net positive things under this framework while EA community building would be net negative.
My slight pushback would be that EAG London is the most near-term focused of the EAGs, so some of the long-termist potential net negatives you list might not apply so much to that conference.
Yeah this is probably my biggest disagreement with Oli on this issue.