I appreciated the part where you asked people to evaluate organizations for themselves. But it came in the context of “there are organizations that aren’t very good, but people don’t want to say they are failing,” which to me implies that a good way to do this is to get people “in the know” to tell you whether a given organization is one of the failing ones. It implies there is some sort of secret consensus on what is failing and what isn’t, and that, if not for the fact that people are afraid to voice their views, you could clearly know which organizations were “failing.” This could be partially true! But it is not how I would motivate the essential idea of thinking for yourself.
The reason to think for yourself here is that lots of people are likely to be wrong, many people disagree, and the best thing we can do is have more people exercising their own judgement. It is not that, unfortunately, some people don’t want to voice some of their opinions publicly.
I am not sure what you mean by “EA strategy”. You mention funding, and I think it is fair to say that a lot of funding decisions are shaped by Berkeley ideas (though this is less clear to me regarding FTX regrantors). But I would argue the following have many “Berkeley” assumptions baked in: (1), (2), (3 - the idea that the most important conversations are conversations between EAs is baked into this), (4 - the idea that there exists some kind of secret consensus, and that it is uncontroversial that this sort of thing is nearly always fat-tail distributed), (5 - “many” does a lot of work here, but I think most of the organizations you’re talking about are in the area), (8 - the idea that AI-safety-specific mentors are the best way to start getting into AI safety, rather than, say, ML mentors), (10 - leaving out published papers and arXiv).
I’m not saying that all of these ideas are wrong, just that there are people outside that community who don’t accept them.