On a personal level, it felt a bit odd to me that the LessOnline conference was held at exactly the same time as EAG. Feels like it could be a coincidence, but on the other hand this is not a coincidence because nothing is ever a coincidence. It feeds into my impression that the Bay is not very interested in what the rest of EA has to say.
It was really 90% coincidence in that Manifest and MATS basically fully determined when LessOnline would happen. I do think in a world where I considered myself more interested in investing in EA, or being involved in EA community building, I would have felt more sadness and hesitation about scheduling it at the same time, though I think it’s unlikely that would have shifted the overall decision (~15% for this weird counterfactual).
As Phib also says, it is the case that at least historically very few people travel for EAG. I was surprised by this when I did the surveys and analytics while running EAG in 2015 and 2016.
Here are some numbers from Swapcard for EAG London 2024:
| Country | Attendees |
|---|---|
| United Kingdom | 608 |
| United States | 196 |
| Germany | 85 |
| Netherlands | 48 |
| France | 44 |
| Switzerland | 34 |
| India | 23 |
| Sweden | 21 |
| Canada | 21 |
| Australia | 21 |
| Norway | 17 |
| Brazil | 15 |
| Belgium | 13 |
| Philippines | 12 |
| Austria | 12 |
| Spain | 11 |
| Poland | 11 |
| Czech Republic | 11 |
| Singapore | 10 |
| Nigeria | 10 |
| Italy | 10 |
| Denmark | 10 |
| South Africa | 9 |
| Kenya | 9 |
| Finland | 8 |
| Israel | 7 |
| Hungary | 7 |
| Mexico | 5 |
| Ireland | 5 |
| Hong Kong | 5 |
| Malaysia | 4 |
| Estonia | 4 |
| China | 4 |
| Turkey | 3 |
| Taiwan | 3 |
| Romania | 3 |
| Portugal | 3 |
| New Zealand | 3 |
| Chile | 3 |
| United Arab Emirates | 2 |
| Peru | 2 |
| Luxembourg | 2 |
| Latvia | 2 |
| Indonesia | 2 |
| Ghana | 2 |
| Colombia | 2 |
| Zambia | 1 |
| Uganda | 1 |
| Thailand | 1 |
| Slovakia | 1 |
| Russia | 1 |
| Morocco | 1 |
| Japan | 1 |
| Iceland | 1 |
| Georgia | 1 |
| Egypt | 1 |
| Ecuador | 1 |
| Cambodia | 1 |
| Bulgaria | 1 |
| Botswana | 1 |
| Argentina | 1 |
55% of attendees were not from the UK, and 14% were from the US, at least based on Swapcard data.
London is a particularly easy city to travel to from the rest of Europe, but that’s still like 50% more than the baseline we had in 2015/2016/2017. The most relevant numbers here would be the people who would travel all the way from the U.S. and who would overlap with people who would want to attend LessOnline. My best guess is there are around 30-40 attendees for whom there was a real conflict between the two events, though it wouldn’t surprise me if that’s off by a factor of 2-3 in either direction.
Raising my hand for an even more niche category: people who likely would have attended LessOnline had their partner not been attending EAG.
Detail, but afaict there were at least five Irish participants.
Thanks! I was using old data, I updated the table.
I’m surprised there were only five.
“I do think in a world where I considered myself more interested in investing in EA, or being involved in EA community building, I would have felt more sadness and hesitation about scheduling it at the same time, though I think it’s unlikely that would have shifted the overall decision (~15% for this weird counterfactual)”
I find this comment quite discouraging that you didn’t feel sadness and hesitation about scheduling it at the same time. I would have hoped that leaders like you who organised important events like LessOnline, Manifest and MATS, that have EA heritage and connection would have at least a little interest in doing what was best for EA and community building (even without having to “invest” in it yourself) and therefore at least trying to co-ordinate with the CEA events crew.
I also think your comment partially refutes your assessment that it was “90% coincidence” that Manifest and MATS rather than EAG determined when LessOnline would be. If you care about the other 2 conferences but not much about clashes with EAG, then it’s hardly a complete coincidence that you clashed with EAG…
I find this comment quite discouraging that you didn’t feel sadness and hesitation about scheduling it at the same time.
I didn’t say that I didn’t feel sadness or hesitation about scheduling it at the same time. Indeed, I think my comment directly implied that I did feel some sadness or hesitation, because I used the word “more”, implying there was indeed a baseline level of sadness or hesitation that’s non-zero.
Ignoring that detail, a bit of broader commentary on why I don’t feel that sad:
I at the moment think that most EA community building is net-negative for the world. I am still here as someone trying to hold people accountable and because I have contributed to a bunch of the harm this community has caused. I am in some important sense an “EA Leader” but I don’t seem to be on good terms with most of what you would call EA leadership, and honestly, I wish the EA community would disband and disappear and expect it to cause enormous harm in the future (or more ideally I wish it would undergo substantial reform, though my guess is the ship for that has sailed, which makes me deeply sad).
I have a lot of complicated opinions about what this implies about how I should relate to stuff like event scheduling, and helping with things like the EA Forum, that I would be happy to go into sometime (though this doesn’t seem like the right place for it). I care a lot about being cooperative despite my misgivings, and will continue to coordinate with people, but I feel like you should be aware that I do not wish the EA community to grow or gain more power in the world (though I am happy to engage in trade and to avoid dumb coordination problems that lose value for all parties involved).
If you care about the other 2 conferences but not much about clashes with EAG, then it’s hardly a complete coincidence that you clashed with EAG…
MATS and Manifest pay us hundreds of thousands of dollars. It would have been obviously reckless and value destroying to pass on either of these contracts because of a scheduling conflict with a conference on the other side of the world, and also separately, a decision that would have substantially increased the risk of future bankruptcy of my organization. I do not consider myself under an obligation to make sacrifices at this level.
Feels like you should resign from EA Funds grantmaking then
I’ve considered it! My guess is it would be bad for evaporative cooling reasons for people like me to just leave the positions from which they could potentially fix and improve things (and IMO, it seems like a bad pattern that when someone starts thinking that we are causing harm, the first thing we do is to downvote their comment expressing such sadness and ask them to resign; that really seems like a great recipe for evaporative cooling).
Also separately, I am importantly on the Long Term Future Fund, not the EA Infrastructure Fund. I would have likely left or called for very substantial reform of the EA Infrastructure Fund, but the LTFF seems like it’s probably still overall doing good things (though I am definitely not confident).
Precommitting to not posting more in this whole thread, but I thought Habryka’s thoughts deserved a response
IMO, it seems like a bad pattern that when someone starts thinking that we are causing harm, the first thing we do is to downvote their comment
I think this is a fair cop.[1] I appreciate the context you’ve added to your comment and have removed the downvote. Reforming EA is certainly high on my list of things to write about/work on, so would appreciate your thoughts and takes here even if I suspect I’ll end up disagreeing with the diagnosis/solutions.[2]
My guess is it would be bad for evaporative cooling reasons for people like me to just leave the positions from which they could potentially fix and improve things
I guess that depends on the theory of change for improving things. If it’s using your influence and standing to suggest reforms and hold people accountable, sure. If it’s asking for the community to “disband and disappear”, I don’t know. Like, in how many other movements would that be tolerated from someone with significant influence and funding power?[3] If one of the Lightcone Infrastructure team said “I think Lightcone Infrastructure in its entirety should shut down and disband, and return all funds” and then made decisions about funding and work that aligned with that goal and not yours, how long should they expect to remain part of the core team?
Maybe we’re implicitly disagreeing about what we mean by the ‘EA community’ here, and I feel that sometimes the ‘EA Community’ is used as a bit of a scapegoat, but when I see takes like this I think “Why should GWWC shut down and disband because of the actions of SBF/OpenAI?” Like, I think GWWC and its members definitely count as part of the EA Community, and your opinion seems to be pretty maximal without much room for exceptions.
(Also I think it’s important to note that your own Forum use seems to have contributed to instances of evaporative cooling, so that felt a little off to me.)
I am importantly on the Long Term Future Fund, not the EA Infrastructure Fund
This is true, but the LTFF is part of EA Funds, and to me is clearly EA-run/affiliated/associated. It feels odd that you’re a grantmaker who decides where money goes to the community, from one of its most well-known and accessible funds, and you think that said community should disperse/disband/not grow/is net-negative for the world. That just seems ripe for weird incentives/decisions unless, again, you’re explicitly red-teaming grant proposals and funding decisions. If you’re using it to “run interference” from the inside, to move funding away from the EA community and its causes, that feels a lot more sketchy to me.
[1] Never downvote while upset, I guess.
[2] I think I’ve noted before that there’s a very large inferential difference between us, as we’re two very different people.
[3] Unless it was specifically for red-teaming.
FWIW Habryka, I appreciate all that I know you’ve done and expect there’s a lot more I don’t know about that I should be appreciative of too.
I would also appreciate it if you’d write up these concerns. I guess I want to know if I should feel similarly, even though I rather trust your judgment. Sorry to ask, and thanks again.
Editing to note I’ve now seen some of the comments elsewhere.
I wish the EA community would disband and disappear and expect it to cause enormous harm in the future.
I would be curious to hear you expand more on this:
What is your confidence level? (e.g. is it similar to the confidence you had in “very few people travel for EAG”, or is it something like 90%?)
What scenarios are you worried about? E.g. is it more about EA hastening the singularity by continuing to help research labs, or about EA making a government-caused slowdown less likely and less effective?
What is your main theory of change at the moment with rationalist community building, and how is it different from EA community building? Is it mostly focused on “slowing down AI progress, pivotal acts, intelligence enhancement”?
What is your confidence level? (e.g. is it similar to the confidence you had in “very few people travel for EAG”, or is it something like 90%?)
Extremely unconfident, both in overall probability and in robustness. It’s the kind of belief where I can easily imagine someone swaying me one way or another in a short period of time, and the kind of belief I’ve gone back and forth on a lot over the years.
On the question of confidence, I feel confused about how to talk about probabilities of expected value. My guess is EA is mostly irrelevant for the things that I care about in ~50% of worlds, is bad in like 30% of worlds and good in like 20% of worlds, but the exact operationalization here is quite messy. Also in the median world in which EA is bad, it seems likely to me that EA causes more harm than it makes up for in the median world where it is good.
What scenarios are you worried about? Hastening the singularity by continuing to help research labs or by making government intervention less likely and less effective?
Those are two relatively concrete things I am worried about. More broadly, I am worried about EA generally having a deceptive and sanity-reducing relationship to the world, and for it to be in some sense a honeypot that lots of the world’s smartest and most moral people end up getting stuck in and where they lend their credibility to bad actors (SBF and Sam Altman being the obvious examples here, and Anthropic seems like the one I am betting on will be looked back on similarly).
My key motivation is mostly “make key decision makers better informed and help smart and moral people understand the state of the world better”.
I think an attitude that promotes truth-seeking and informedness above other things is more conducive to that than EA stuff. I also don’t think I would describe most of my work straightforwardly as “rationalist community building”. LessWrong is its own thing that’s quite different from a lot of the rationality community, and is trying to do something relatively specific.
OK, your initial message makes more sense given your response here, although I can’t quite connect why MATS and Manifest would be net positive things under this framework while EA community building would be net negative.
My slight pushback would be that EAG London is the most near-term focused of the EAGs, so some of the long-termist potential net negatives you list might not apply so much with that conference.
My slight pushback would be that EAG London is the most near-term focused of the EAGs, so some of the long-termist potential net negatives you list might not apply so much with that conference.
Yeah this is probably my biggest disagreement with Oli on this issue.
I presume the person doesn’t realise those events are hosted at your venue