I think this sort of meta-post is directionally correct but doesn’t engage with how EA actually solicits criticism, or with the sequence of steps involved in soliciting more of it. The model produced here is essentially “here is a cluster of cultural norms within EA communication that make criticising EA difficult.” But I think that framing abstracts away key parts of how EA interacts with criticism.
On EA soliciting criticism
EA solicits criticism either internally through red teaming (e.g. open discussion norms, disagreeability, high decoupling) or externally through contests (e.g. FTX AI Criticism, OpenPhil AI criticism, EA criticism contest, Givewell criticism contest). These contests, and the use of payment, change the type of criticism given in ways that lead to discontent amongst both the critic and the recipient of the criticism.
Firstly, EAs see contests as “skin in the game” with regard to criticism, because you are paying your own money for it. However, this is a naive understanding of how critics interpret these prizes:
These prizes are often so large that they are interpreted as distorting the field overall, especially academia. Academia is notorious for underpaid grad students and incredibly competitive fellowships. For instance, if you’re an AI researcher who cares a lot about algorithmic bias, you might see potential collaborators moving to AI safety and alignment as a distraction, and feel bitter about it. At that point it is not a criticism contest, it’s field-building[1].
Because the judging panels are themselves EAs, the prizes are seen as a way to manipulate the Overton window of criticism and create a controlled opposition. This is where I think you conflate a terminology problem with a teleological problem in drawing the line between criticism and further research. Take, for example, the top prize, which under your definition is straightforward research (I agree). The problem is not eliciting research (in function this is probably similar to how innovation prizes work), but that the aim of these prizes gets Goodharted: the author of that criticism got contacted by Givewell. The competition pool just becomes young EAs flashing applause lights[2], and the contests functionally become highly paid work tasks. Notably, only the winner gets the prize, so a lot of people get nothing for criticising EA (which makes even the time/cost calculus look bad). And there is a subset of critics who will never be hired by EA organisations and do not want EA money.
These problems with the solicitation of criticism mean that preconditions 1-6 are path-dependent on how the engagement happens. It’s not a question of whether people can criticise EA with reasoning transparency, but of how EA elicits that criticism from them. Theoretically an EA could hit 1-6, but the overarching structure of outreach is “look at this competition we’re running with huge money attached”. Thus, I think EAs focus too much on an interpersonal social model of missing social cachet or epistemic legibility, when those flow downstream from the elephant in the room: money.
On the purpose of criticism
To front-load a preemption: I do think EAs need to engage less with bad-faith criticisms and get better at recognising bad faith. However, I think it’s useful to understand the set point of distrust and its relationship to the lack of criticism:
EA Judo means the arguments end up strengthening EA, but that’s not always in the author’s interest. For instance, a lot of older EAs will recount how their criticism landed them a job or higher esteem in the community. However, some critics do not want their criticisms to strengthen EA. One of the biggest criticisms EAs have taken on board is about risk tolerance and systemic change (more hits-based giving and more political spending). But that criticism came from the Oxford Left in 2012, and the resulting political spending in EA was FTX money in Democratic primaries, which arguably made politics even harder for the left.
Insofar as there are crux-y critiques, they’re often used internally for jockeying between memeplex ideas, and they’re incredibly unclear (see: ConcernedEAs). There’s often a slight contradiction here in which the deepest critiques don’t make sense coming from the people they come from (e.g. a lot of the more leftist critiques just make me wonder why the author doesn’t join the DSA, yet the author wants to post anonymously so they can theoretically still take an EA job). A lot of these critiques have the undercurrent of “EA should fund my pet project”, imagining EA resources without EA thinking. This also sets up a failure mode where people don’t understand that “EAs love criticism” is not equivalent to “EAs will change their mind because of the criticism”.
See this post, which is the very example leftists are often scared about.
I think I’ll get a lot of disagreement here, but I want to point out that EA has its own set of applause lights contextual to the community. For instance, a lot of college students in EA say they have short timelines, and then when I ask what their median is, it turns out to be the Bioanchor’s median (also, people should just state their median; that’s another gripe). “Short timelines” has just become shorthand for “I’m hardcore and dedicated” about AI Safety.