Speaking only for ConcernedEAs, we are likely to remain anonymous until costly signals are sent that making deep critiques in public will not damage one’s career/funding/social prospects within EA.
Prominent funders have said that they value moderation and pluralism, and thus people (like the writers of this post) should feel comfortable sharing their real views when they apply for funding, no matter how critical they are of orthodoxy.
This is admirable, and we are sure that they are being truthful about their beliefs. Regardless, it is difficult to trust that the promise will be kept when one, for instance:
Observes the types of projects (and people) that succeed (or fail) at acquiring funding
i.e. few, if any, deep critiques or otherwise heterodox/“heretical” works
Looks at who the grantmakers are and sees that they appear to have very similar backgrounds and opinions (i.e. they are highly orthodox)
Experiences the generally claustrophobic epistemic atmosphere of EA
Hears of people facing (soft) censorship from their superiors because they wrote deep critiques of the ideas of prominent EAs
Zoe Cremer and Luke Kemp lost “sleep, time, friends, collaborators, and mentors” as a result of writing Democratising Risk, a paper which was critical of some EA approaches to existential risk.[23] Multiple senior figures in the field attempted to prevent the paper from being published, largely out of fear that it would offend powerful funders. This saga caused significant conflict within CSER throughout much of 2021.
Sees the revolving door and close social connections between key donors and main scholars in the field
Witnesses grantmakers dismiss scientific work on the grounds that the people doing it are insufficiently value-aligned
If this is what is said in public (which we have witnessed multiple times), what is said in private?
Etc.
We go into more detail in Doing EA Better, but the most important step is a radical diversification of viewpoints within grantmaking and hiring decision-making bodies.
As long as the vast majority of resource-allocation decisions are made by a tiny and homogeneous group of highly orthodox people, the anonymity motive will remain.
This is especially true when one of the (sometimes implicit) selection criteria for so many opportunities is perceived “value-alignment” with a very specific package of often questionable views, i.e. EA Orthodoxy.
We appreciate that influential members of the community (e.g. Buck) are concerned about the increasing amounts of anonymity, but unfortunately expressing concern and promising that there is nothing to worry about is not enough.
If we want the problem to be solved, we need to remove the factors that cause it.
Speaking only for ConcernedEAs, we are likely to remain anonymous until costly signals are sent that making deep critiques in public will not damage one’s career/funding/social prospects within EA.
I’m willing to send a very costly signal to help you test your theory. Would both outcomes of the following experiment update your opinions in opposite directions? Here is my idea:
I’ll open up a grant form from EV, OP, LTFF, etc., and write a grant proposal for a project I actually want to execute. Instead of submitting it right away, I’ll post my final draft on the Forum and ask several grantmakers to read it and say how much funding they’d be willing to give me. Then I’ll make another post containing very harsh criticism of EA or of prominent EA organizations or leadership, such as the aforementioned funding orgs or their top executives. (You can suggest criticisms you want me to post; otherwise I’ll aim to write up the most offensive-to-EA-orthodoxy thing I actually believe or can pass an ITT about.) Finally, I’ll submit the grant proposal to the org I just criticized and see how much money I get.
If this gets as much funding as the reviewers estimated, this is at least weak evidence that public criticism of EA doesn’t hurt your prospects. If I get less funding, I’ll admit that public criticism of EA can damage the person making the criticism. Agreed?
Aside from the bit where you publish something you don’t believe and wouldn’t otherwise write, and the fact that estimates ahead of time might not be that predictive of how funding actually goes, talking about how you plan to do this on the Forum means this probably doesn’t work at all. Someone at the grantmaking organization sees your harshly critical post, thinks “that’s a really surprising thing to come from Robi”, someone else points out that you’re doing it as an experiment and links your comment, …
Crap. I guess I should’ve posted the above comment from a burner account...
But anyway, serious reply: I thought of all of those problems already, and have several solutions for them. (For example, have someone who is not known to the grantmakers to be connected to me run the experiment instead of me.) ConcernedEAs, would you accept this experiment if I propose a satisfactory variation, or accept it in principle if it’s not practically workable?
epistemic and emotional status: had a brief look at your post and some comments, got the impression that 400 comments didn’t move the needle of your mind at all, which disappointed me.
I don’t understand why you think you’d like to be a part of EA; the list of orthodoxies just seems like the movement’s stated premises and goals (and yeah, it’d be a problem if AMF were firing people for not being transhumanist, but I’d roll to disbelieve that something like that is actually happening). So what I’d suggest at about 65% confidence is a kind of broad “your reasons for anonymity are deep down reasons to try making a living in another philanthropic ecosystem”, or a harsher heuristic I’m only about 25% confident in, which is “the urge to be anonymous is a signal you don’t belong here”.
I’m glad the decentralization discourse is in the Overton window (I have a few sketches and unfinished projects in the parallel/distributed epistemics space, and I intermittently study a mechanism design textbook so I can take a crack at contributing there), but I haven’t seen a good contribution from the pro-decentralization side that came from the folks who are talking about fearing retribution for their bold views.
If ConcernedEAs posted with their real names, would you be less likely to hire them for an EA role? Even if not, would you agree that ConcernedEAs might reasonably draw that conclusion from your comment suggesting they might not belong here?
Sorry for any lack of terseness; this may be out of scope or better placed in their original post’s comments. Keep in mind I’m not someone whose opinion matters about this.
Plausibly, but who knows. Inclusivity failures are not an indictment. Sometimes knowing you disagree with an institution is a prediction that working together wouldn’t go super well.
As a baseline, recall that in normie capitalism, hiring discrimination on alignment happens all the time. You have to at least pretend to care. Small orgs have higher standards for this “pretending to care” than large orgs (cf. Gwern’s Fermi estimate of the proportion of Amazon employees who “actually” care about same-day delivery). Some would even say that to pull off working at a small org you have to actually care, and some would say that most EA orgs have more in common with a startup than with an enterprise. But ConcernedEAs do care. They care enough to write a big post. So it’s looking good for them so far.
I probably converge with them on the idea that ideological purity and the accurate recitation of shibboleths are very bad screening tools for any org. The more movement-wide cohesion we have, the greater a threat this is.
So like, individual orgs should use their judgment and standards analogous to a startup avoiding hiring someone who openly doesn’t care about the customers or the product. That doesn’t require a deep amount of conformity.
So, with the caveat that it depends a lot on the metric ton of variables that go into whether someone seems like a viable employee, in addition to their domain/object-level contributions/potential/expertise, and with deep and immense emphasis on all the reasons a hiring decision might not go through, I don’t think they’re disqualified from most projects. The point is that, due to the nitty-gritty, there may be some projects they’re disqualified from, and this is good and efficient. Or rather, it would be good and efficient if they weren’t anonymous.