Edit: I forgot to add: OP could have phrased this differently, saying that people with productive things to say (which I assume is what they meant by “better takes”) would be busier doing productive work and have less time to post here. I don’t necessarily buy that, but let’s roll with it. Instead, they chose to focus on EA orgs in particular.
The causal reason I worded it that way is that I wrote the list down very quickly, and I’m in an office with people who work at EA orgs and would write higher-quality posts than average, so that mechanism was salient to me, even though it’s not the only way to have better things to do.
I also want to point out that “people who work at EA orgs” doesn’t imply infinite conformity. It just means they fit in at some role at some organization that is trying to maximize good and/or is funded by OpenPhil/FTX (who fund lots of things, including lots of criticism). I frequently hear minority opinions like these:
Biosecurity is more pressing than alignment due to tractability
Chickens are not conscious and can’t suffer
The best way to do alignment research is to develop a videogame as a testbed for multi-agent coordination problems
Alignment research is not as good as people think due to s-risk from near misses
Instead of trying to find AI safety talent at elite universities, we should go to remote villages in India
Hi, thanks for responding!
I should probably have been a bit more charitable in considering why you worded it that way specifically.
These might be minority opinions in the sense that they have some delta from the majority opinions, but they still form a tiny cluster in opinion space together with that majority.
You don’t often hear, for example:
People who think there’s no significant risk from AI
People who think extinction is only slightly worse than a global catastrophe that kills 99.99% of the population
People who think charities are usually net negative, including those with high direct impact
Socialists
Or other categories which are about experience rather than views, like:
Psychologists
People who couldn’t afford time off to interview for an EA org
People who grew up in developing countries (the movement seems to have had partial success reaching them, but do these people work at EA orgs yet?)