This post is mostly making claims about what a very, very small group of people in a very, very small community in Berkeley think. When throwing around words like “influential leaders” or saying that the claims “often guide EA decision-making,” it is easy to forget that.
The term “background claims” might imply that these are simply facts. But many are not: they are facts about opinions, specifically the opinions of “influential leaders.”
Do not take these opinions as fact. Take none for granted. Interrogate them all.
“Influential leaders” are just people. Like you and me, they are biased. Like you and me, they are wrong (in correlated ways!). If we take these ideas as background, and any are wrong, we are destined to all be wrong in the same way.
If you can, don’t take ideas on background. Ask that they be on the record, with reasoning and attribution given, and evaluate them for yourself.
I’ve mostly lived in Oxford and London, and these claims fit with my experience of the hubs there as well. I’ve perhaps experienced Oxford as having a little less focus on AI than #2 indicates.
While I agree the claims should be interrogated and that the ‘influential leaders’ are very fallible, I think the only way to interrogate them properly is to be able to publicly acknowledge that these are indeed background assumptions held by a lot of the people with power/influence in the community. I don’t see this post as stating ‘these are background claims which you should hold without interrogation’ but rather ‘these are in fact largely treated as background claims within the EA communities at the core hubs in the Bay, London, Oxford, etc.’. This seems very important for people not in these hubs to know, so they can accurately decide e.g. whether they are interested in participating more in the movement, whether to follow the advice coming from these places, or what frames to use when applying for funding. Ideally I’d like to see a much longer list of background assumptions like this, because I think there are many more that are difficult to spot if you have not been in a hub.
I agree with most of what you are saying.
However, the post seemed less self-aware to me than you are implying. My impression from interacting with undergraduates especially, many of whom read this forum, is that “these cool people believe this” is often read as “you should believe this.” (Edit: by this paragraph I don’t mean that this post is trying to say this, rather that it doesn’t seem aware of how it could play into that dynamic.)
Thus I think it’s always good practice for these sorts of posts to remind readers of the kind of point I made in my comment, especially when using terms like “influential leaders” and “background claims.” Not because it invalidates the information value of the post, but because not including it risks contributing to a real problem.
I didn’t personally feel the post did that, hence my comment.
In addition, I do wish it were more specific about exactly which people it’s referring to, rather than some amorphous and ill-defined group.
I felt this way reading the post as well: “many of the most influential EA leaders” and “many EA leaders” feel overly vague and implicitly normative. Perhaps, as a constructive suggestion, we could attempt to list which leaders you mean?
Regarding a 10% or greater chance of human extinction, here are the people I can think of who have expressed something like this view:
Toby Ord
Will MacAskill
80k leadership
OpenPhil leadership
Regarding “primarily concerned with AI safety”, it’s not clear to me whether this is in contrast to the x-risk portfolio approach that most funders, like OpenPhil and FTX, and career advisors, like 80k, are nonetheless taking. If you mean something like “most concerned about AI safety” or “most prioritize AI safety”, then this feels like an accurate description of the people listed above.
To the extent possible, I think it’d be especially helpful to list the several people or institutions who believe there is a 50% chance of extinction, or who estimate AGI in 10 years vs. 30 years vs. 50 years, and what kind of influence they have.
+1 on questioning/interrogating opinions, even opinions of people who are “influential leaders.”
I claim people who are trying to use their careers in a valuable way should evaluate organizations/opportunities for themselves
My hope is that readers don’t come away with “here is the set of opinions I am supposed to believe” but rather “ah here is a set of opinions that help me understand how some EAs are thinking about the world.” Thank you for making this distinction explicit.
I disagree that these mostly characterize the Berkeley community. #1 and #2 seem the most Berkeley-specific, though I think they’re shaping EA culture/funding/strategy enough to be considered background claims; the rest don’t seem Berkeley-specific to me.
I appreciated the part where you asked people to evaluate organizations by themselves. But it was in the context of “there are organizations that aren’t very good, but people don’t want to say they are failing,” which to me implies that a good way to do this is to get people “in the know” to tell you whether they are the failing ones or not. It implies there is some sort of secret consensus on what is failing and what isn’t, and that, if not for the fact that people are afraid to voice their views, you could clearly know which were “failing.” This could be partially true! But it is not how I would motivate the essential idea of thinking for yourself.
The reason to think for yourself here is that lots of people are likely to be wrong, many people disagree, and the best thing we can do is have more people exercising their own judgement. Not because, unfortunately, some people don’t want to voice some of their opinions publicly.
I am not sure what you mean by “EA strategy”. You mention funding, and I think it is fair to say that a lot of funding decisions are shaped by Berkeley ideas (though this is less clear to me regarding FTX regrantors). But I argue the following have many “Berkeley” assumptions baked in:
(1) and (2)
(3) - the idea that the most important conversations are conversations between EAs is baked into this
(4) - the idea that there exists some kind of secret consensus, and the idea that it is uncontroversial that this sort of thing is nearly always fat-tail distributed
(5) - “many” does a lot of work here, but I think most of the organizations you’re talking about are in the area
(8) - the idea that AI-safety-specific mentors, rather than, say, ML mentors, are the best way to start getting into AI safety
(10) - leaving out published papers and arXiv
I’m not saying that all of these ideas are wrong, just that they aren’t actually accepted by some people outside that community. For this reason, this:
I claim that visiting an EA Hub is one of the best ways to understand what’s going on, engage in meaningful debates about cause prioritization, and receive feedback on your plans.
feels a little bit icky to me. That there are many people who get introduced to EA in very different ways and learn about it on their own, or via people who aren’t very socially influenced by the Berkeley community, is an asset. One way to destroy a lot of the benefit of geographic diversity would be to get every promising person to hang out in Berkeley and then have their worldview shaped by that.
“Icky” feels like pretty strong language.
I rather think it’s sound advice, and have often given it myself. Besides it being, in my judgement, good from an impact point of view, I also guess that it has direct personal benefits for the advisee to figure out how people at hubs are thinking. It seems quite commonsensical advice to me, and I would guess that people in other movements give analogous advice.
I agree that all else equal, it’s highly useful to know what people at hubs are thinking, because they might have great ideas, influence funding, etc.
However, I think a charitable interpretation of that comment is that it is referring to the fact that we are not perfect reasoners, and inevitably may start to agree with people we think are cool and/or have money to give us. So in some ways, it might be good to have people not even be exposed to the ideas, to allow their own uncorrelated ideas to run their course in whatever place they are. Their uncorrelated ideas are likely to be worse, but if there are enough people like this, then new and better ideas may be allowed to develop that otherwise wouldn’t have been.
I used the word “icky” to mean “this makes me feel a bit sus because it could plausibly be harmful to push this but I’m not confident it is wrong”. I also think it is mostly harmful to push it to young people who are newly excited about EA and haven’t had the space to figure out their own thoughts on deferring, status, epistemics, cause prio etc.
I don’t think the OP said anything about a Berkeley EA hub specifically? (Indeed, #3 talks about EA hubs, so Akash is clearly not referring to any particular hub.) Personally, when I read the sentence you quoted I nodded in agreement, because it resonates with my experience living both in places with lots of EAs (Oxford, Nassau) and in places with very few EAs (Buenos Aires, Tokyo, etc.), and noticing the difference this makes. I never lived in Berkeley and don’t interact much with people from that hub.
I think there’s probably not that much we’d disagree on about what people should be doing. My comment was more of a “feelings/intuitions/vague uncomfortableness” thing than anything well-thought-out, for a few reasons that I might flesh out into something more coherent at some point in the future.