What is your confidence level? (e.g. is it similar to the confidence you had in “very few people travel for EAG”, or is it something like 90%?)
Extremely unconfident, both in overall probability and in robustness. It’s the kind of belief where I can easily imagine someone swaying me one way or another in a short period of time, and the kind of belief I’ve gone back and forth on a lot over the years.
On the question of confidence, I feel confused about how to talk about probabilities of expected value. My guess is EA is mostly irrelevant for the things that I care about in ~50% of worlds, is bad in like 30% of worlds and good in like 20% of worlds, but the exact operationalization here is quite messy. Also in the median world in which EA is bad, it seems likely to me that EA causes more harm than it makes up for in the median world where it is good.
What scenarios are you worried about? Hastening the singularity by continuing to help research labs, or by making government intervention less likely and less effective?
Those are two relatively concrete things I am worried about. More broadly, I am worried about EA generally having a deceptive and sanity-reducing relationship to the world, and about it being in some sense a honeypot that lots of the world’s smartest and most moral people end up getting stuck in, where they lend their credibility to bad actors (SBF and Sam Altman being the obvious examples here, and Anthropic seems like the one I am betting will be looked back on similarly).
My key motivation is mostly “make key decision makers better informed and help smart and moral people understand the state of the world better”.
I think an attitude that promotes truth-seeking and informedness above other things is more conducive to that than EA stuff. I also don’t think I would describe most of my work straightforwardly as “rationalist community building”. LessWrong is its own thing that’s quite different from a lot of the rationality community, and is trying to do something relatively specific.
OK, your initial message makes more sense given your response here, although I still can’t quite connect why MATS and Manifest would be net positive things under this framework while EA community building would be net negative.
My slight pushback would be that EAG London is the most near-term focused of the EAGs, so some of the long-termist potential net negatives you list might not apply so much with that conference.
Yeah this is probably my biggest disagreement with Oli on this issue.