[For context, I’m definitely in the social cluster of powerful EAs, though don’t have much actual power myself except inside my org (and my ability to try to persuade other EAs by writing posts etc); I had more power when I was actively grantmaking on the EAIF but I no longer do this. In this comment I’m not speaking for anyone but myself.]
This post contains many suggestions for ways EA could be different. The fundamental reason that these things haven’t happened, and probably won’t happen, is that no-one who would be able to make them happen has decided to make them happen. I personally think this is because these proposals aren’t very good. And so:
people in EA roles where they could adopt these suggestions choose not to
and people who are capable/motivated enough that they could start new projects to execute on these ideas (including e.g. making competitors to core EA orgs) end up deciding not to.
And so I wish that posts like this were clearer about their theory of change (henceforth abbreviated ToC). You’ve laid out a long list of ways that you wish EA orgs behaved differently. You’ve also made the (IMO broadly correct) point that a lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies; I’ll refer to this group as “core org EAs” for brevity. But I don’t understand how you hope to cause EA orgs to change in these ways.
Maybe your ToC is that core org EAs read your post (and similar posts) and are intellectually persuaded by your suggestions, and adopt them.
If that’s your goal, I think you should try harder to understand why core org EAs currently don’t agree with your suggestions, and try to address their cruxes. For this ToC, “upvotes on the EA Forum” is a useless metric—all you should care about is persuading a few people who have already thought about this all a lot. I don’t think that your post here is very well optimized for this ToC.
(Note that this doesn’t mean that they think your suggestions aren’t net positive, but these people are extremely busy and have to choose to pursue only a tiny fraction of the good-seeming things (which are the best-seeming-to-them things) so demonstrating that something is net positive isn’t nearly enough.)
Also, if this is the ToC, I think your tone should be one of politely suggesting ways that someone might be able to do better work by their own lights. IMO, if some EA funder wants to fund an EA org to do things a particular way, you have no particular right to demand that they do things differently, you just have the ability to try to persuade them (and it’s their prerogative whether to listen).
For what it’s worth, I am very skeptical that this ToC will work. I personally think that this post is very unpersuasive, and I’d be very surprised if I changed my mind to agree with it in the next year, because I think the arguments it makes are weak (and I’ve been thinking about these arguments for years, so it would be a bit surprising if there was a big update from thinking about them more.)
Maybe your ToC is that other EAs read your post and are persuaded by your suggestions, and then pressure the core org EAs to adopt some of your suggestions even though they disagree with them.
If so, you need to think about which ways EAs can actually apply pressure to the core org EAs. For example, as someone who prioritizes short-AI-timelines longtermist work over global health and development, I am not very incentivized to care about whether random GWWC members will stop associating with EA if EA orgs don’t change in some particular way. In contrast, if you convinced all the longtermist EAs that they should be very skeptical of working on longtermism until there was a redteaming process like the one you described, I’d feel seriously incentivized to work on that redteaming process. Right now, the people I want to hire mostly don’t agree with you that the redteaming process you named would be very informative; I encourage you to try to persuade them otherwise.
Also, I think you should just generally be scared that this strategy won’t work? You want core org EAs to change a bunch of things in a bunch of really specific ways, and I don’t think that you’re actually going to be able to apply pressure very accurately (for similar reasons that it’s hard for leaders of the environmentalist movement to cause very specific regulations to get passed).
(Note that I don’t think you should engage in uncooperative behavior (e.g. trying to set things up specifically so that EA orgs will experience damage unless they do a particular thing). I think it’s totally fair game to try to persuade people of things that are true because you think that that will cause those people to do better things by their own lights; I think it’s not fair game to try to persuade people of things because you want to force someone’s hand by damaging them. Happy to try to explain more about what I mean here if necessary; for what it’s worth I don’t think that this post advocates being uncooperative.)
Perhaps you think that the core org EAs think of themselves as having a duty to defer to self-identified EAs, and so if you can just persuade a majority of self-identified EAs, the core org EAs will dutifully adopt all the suggestions those self-identified EAs want.
I don’t think this is realistic–I don’t think that core EA orgs mostly think of themselves as executing on the community’s wishes, I think they (as they IMO should) think of themselves as trying to do as much good as possible (subject to the constraint of being honest and reliable etc).
I am somewhat sympathetic to the perspective that EA orgs have implied that they do think of themselves as trying to represent the will of the community, rather than just viewing the community as a vehicle via which they might accomplish some of their altruistic goals. Inasmuch as this is true, I think it’s bad behavior from these orgs. I personally try to be clear about this when I’m talking to people.
Maybe your ToC is that you’re going to start up a new set of EA orgs/projects yourself, and compete with current EA orgs on the marketplace of ideas for funding, talent, etc? (Or perhaps you hope that some reader of this post will be inspired to do this?)
I think it would be great if you did this and succeeded. I think you will fail, but inasmuch as I’m wrong it would be great if you proved me wrong, and I’d respect you for actively trying much more than I respect you for complaining that other people disagree with you.
If you wrote a post trying to persuade EA donors that they should, instead of other options, donate to an org that you started that will do many of the research projects you suggested here, I would think that it was cool and admirable that you’d done that.
For many of these suggestions, you wouldn’t even need to start orgs. E.g. you could organize/fundraise for research into “the circumstances under which individual subjective Bayesian reasoning actually outperforms other modes of thought, by what criteria, and how this varies by subject/domain”.
I’ll generically note that if you want to make a project happen, coming up with the idea for the project is usually a tiny fraction of the effort required. Also, very few projects have been made to happen by someone having the idea for them and writing it up, and then some other person stumbling across that writeup and deciding to do the project.
Maybe you don’t have any hope that anything will change, but you heuristically believe that it’s good anyway to write up lists of ways that you think other people are behaving suboptimally. For example, I have some sympathy for people who write op-eds complaining about ways that their government is making poor choices, even if they don’t have a detailed theory of change.
I think this is a fine thing to do, when you don’t have more productive ways to channel your energy. In the case of this post in particular, I feel like there are many more promising theories of change available, and I think I want to urge people who agree with it to pursue those.
Overall my main complaint about this post is that it feels like it’s fundamentally taking an unproductive stance–I feel like it’s sort of acting as if its goal is to persuade core EAs, but actually it’s just using that as an excuse to ineffectually complain or socially pressure; if it were trying to persuade, more attention would be paid to tradeoffs and cruxes. People sympathetic to the perspective in this post should either seriously attempt to persuade, or they should resort to doing things themselves instead of complaining when others don’t do those things.
(Another caveat on this comment: there are probably some suggestions made in this post that I would overall agree should be prioritized if I spent more time thinking about them.)
(In general, I love competition. For example, when I was on the EAIF I explicitly told some grantees that I thought that their goal should be to outcompete CEA, and I’ve told at least one person that I’d love it if they started an org that directly competes with my org.)
The response says that EA will not change (“people in EA roles [will] … choose not to”), that making constructive critiques is a waste of time (“[not one of the] more productive ways to channel your energy”), and that the critique should have been better (“I wish that posts like this were clearer”, “you should try harder”, “[maybe try] politely suggesting”).
This response seems to be putting all the burden of making progress in EA onto those trying to constructively critique the movement, those who are putting their limited spare time into trying to be helpful, and removing the burden away from those who are actively paid to work on improving this movement. I don’t think you understand how hard it is to write something like this, how much effort must have gone into making each of these critiques readable and understandable to the EA community. It is not their job to try harder, or to be more polite. It is our job, your job, my job, as people in EA orgs, to listen and to learn and to consider and if we can to do better.
Rather than saying the original post should be better maybe the response should be that those reading the original post should be better at considering the issues raised.
I cannot think of a more dismissive or disheartening response. I think this response will actively dissuade future critiques of EA (I feel less inclined to try my hand at critical writing seeing this as the top response) and as such make the community more insular and less epistemically robust. Also I think this response will make the authors of this post feel like their efforts are wasted and unheard.
I think this is a weird response to what Buck wrote. Buck also isn’t paid to reform the EA movement or to respond to criticism on the EA Forum, and he decided to spend his limited time to express how things realistically look from his perspective.
I think it is good if people write responses like that, and such responses should be upvoted, even if you disagree with the claims. Downvotes should not express ‘I disagree’, but ‘I don’t want to read this’.
Even if you believe EA orgs are horrible and should be completely reformed, in my view you should be glad that Buck wrote his comment, as you now have a better idea of what people like him may think.
It’s important to understand that the alternative to this comment is not Buck writing a 30-page detailed response. The alternative is, in my guess, just silence.
Thank you for the reply Jan. My comment was not about whether I disagree with any of the content of what Buck said. My comment was objecting to what came across to me as a dismissive, try harder, tone policing attitude (see the quotes I pulled out) that is ultimately antithetical to the kind, considerate and open to criticism community that I want to see in EA. Hopefully that explains where I’m coming from.
This response seems to be putting all the burden of making progress in EA onto those trying to constructively critique the movement, those who are putting their limited spare time into trying to be helpful, and removing the burden away from those who are actively paid to work on improving this movement. I don’t think you understand how hard it is to write something like this, how much effort must have gone into making each of these critiques readable and understandable to the EA community. It is not their job to try harder, or to be more polite. It is our job, your job, my job, as people in EA orgs, to listen and to learn and to consider and if we can to do better.
From my perspective, it feels like the burden of making progress in EA is substantially on the people who actually have jobs where they try to make EA go better; my take is that EA leaders are making the correct prioritization decision by spending their “time for contemplating ways to improve EA” budget mostly on other things than “reading anonymous critical EA Forum posts and engaging deeply”.
I think part of my model is that it’s totally normal for online critiques of things to not be very interesting or good, while you seem to have a strong prior that online critiques are worth engaging with in depth. Like, idk, did you know that literally anyone can make an EA Forum account and start commenting and voting? Public internet forums are famously bad; why do you believe that this one is worth engaging extensively with?
(I feel less inclined to try my hand at critical writing seeing this as the top response)
I consider this a good outcome—I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future (though I can imagine changing my mind on this).
I understand that my comment poses some risk of causing people who would have made useful criticisms feel discouraged from doing so. My current sense is that at the margin, this cost is smaller than the other benefits of my comment?
Also I think this response will make the authors of this post feel like their efforts are wasted and unheard.
Remember that I thought that their efforts were already wasted and unheard (by anyone who will realistically do anything about them); don’t blame the messenger here. I recommend instead blaming all the people who upvoted this post and who could, if they wanted to, help to implement many of the shovel-ready suggestions in this post, but who will instead choose not to do that.
I consider this a good outcome—I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future (though I can imagine changing my mind on this).
This was a disappointing comment to read from a well-respected researcher, and it negatively updates me on encouraging people to work and collaborate with you in the future, because I think it reflects a callousness as well as an insensitivity towards power dynamics which I would not want to see in a manager or someone running an AI alignment organization. In my opinion, it is fair game for me to make truthful comments that cause you to feel less incentivized to write comments like this in future (though I can imagine changing my mind on this).
I do not actually endorse this comment above. It is used as an illustration of why a true statement alone might not mean it is “fair game”, or a constructive way to approach what you want to say. Here is my real response:
In terms of whether it is “fair game” or not: consider some junior EA who made a comment to you, “I would prefer an EA forum without your critical writing on it”. This has basically zero implications for you. No one is going to take them seriously, unless they provide receipts and point out what they disliked. But this isn’t the case in reverse. So I think if you are someone seen to be a “powerful EA”, or someone whose opinion is taken pretty seriously, you should take significant care when making statements like this, because some people might update simply based on your views. I haven’t engaged with much of weeatquince’s work, but EA is a small enough community that these kinds of opinions can probably have a harmful impact on someone’s involvement in EA; I don’t think the disclaimers around “I no longer do grantmaking for the EAIF” are particularly reassuring on this front. For example, I imagine if Holden came and made a comment in response to someone, “I find your posts unhelpful, distracting, and unpleasant. I would prefer an EA forum without your critical writing on it”, this could lead to information cascades and reputational repercussions that don’t accurately reflect weeatquince’s actual quality of work. You are not Holden, but it would be reasonable for you to expect your opinions to have sway in the EA community.
FWIW, your comment will make people less inclined to post under their main accounts, and I think a forum environment where people feel even more inclined to make alt accounts because they are worried about reputational repercussions from someone like you coming along with a comment like “I would prefer an EA Forum without your critical writing on it” is intimidating and not ideal for community engagement. Because you haven’t provided any justification for your claim aside from Rohin’s comment, which points at strawmanning to some extent, I don’t know what this means for my work and whether my comments will pass your bar. Why not just let other users downvote low quality comments, and if you have a particular quality bar for posts that you think the downvotes don’t capture, just filter your frontpage so you only see posts with >50 or >100 karma? If you disagree with the way people running the forum are using the karma system, or their idea for who should post and what the signal:noise ratio should be, you should take that to the EA forum folks. Because if I was a new EA member, I’d be deleting my draft posts after reading a comment like this, and find it disconcerting that I’m being encouraged to post by the mods but might bump into senior EA members who say this about my good-faith contributions.
As a random aside, I thought that your first paragraph was totally fair and reasonable and I had no problem with you saying it.
Thanks for your comment. I think your comment seems to me like it’s equivocating between two things: whether I negatively judge people for writing certain things, and whether I publicly say that I think certain content makes the EA Forum worse. In particular, I did the latter, but you’re worrying about the former.
I do update on people when they say things on this forum that I think indicate bad things about their judgment or integrity, as I think I should, but for what it’s worth I am very quick to forgive and don’t hold long grudges. Also, it’s quite rare for me to update against someone substantially from a single piece of writing of theirs that I disliked. In general, I think people in EA worry too much about being judged negatively for saying things and underestimate how forgiving people are (especially if a year passes or if you say particularly reasonable things in the meantime).
@Buck – As a hopefully constructive point I think you could have written a comment that served the same function but was less potentially off-putting by clearly separating your critique between a general critique of critical writing on the EA Forum and critiques of specific people (me or the OP author).
I do update on people when they say things on this forum that I think indicate bad things about their judgment or integrity, as I think I should
I agree! But given this, I think the two things you mention often feel highly correlated, and it’s hard for people to actually know, when you make a statement like that, that there’s no negative judgement either from you or from other readers of your statement. It also feels a bit weird to suggest there’s no negative judgement if you also think the forum is a better place without their critical writing?
In general, I think people in EA worry too much about being judged negatively for saying things and underestimate how forgiving people are
I also agree with this, which is why I wanted to push back on your comment, because I think it would be understandable for someone to read your comment and worry more about being judged negatively, and if you think people are poorly calibrated, you should err on the side of giving people reasons to update in the right direction, instead of potentially exacerbating the misconception.
I also agree with this, which is why I wanted to push back on your comment, because I think it would be understandable for someone to read your comment and worry more about being judged negatively, and if you think people are poorly calibrated, you should err on the side of giving people reasons to update in the right direction, instead of potentially exacerbating the misconception.
I think you and Buck are saying different things:
you are saying “people in EA should worry less about being judged negatively, because they won’t be judged negatively”,
Buck is saying “people in EA should worry less about being judged negatively, because it’s not so bad to be judged negatively”.
I think these points have opposite implications about whether to post judgemental comments, and about what impact a judgemental comment should have on you.
Oh interesting, I hadn’t noticed that interpretation; thanks for pointing it out. That being said, I do think it’s much easier for someone in a more established senior position, who isn’t particularly at risk of bad outcomes from negative judgements, to suggest that negative judgements are not so bad or to use that as a justification for making negative judgements.
I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant.
I think this is somewhat unfair. I think it is unfair to describe this OP as “unpleasant”; it seems to be clearly and impartially written and to go out of its way to make clear it is not picking on individuals. Also I feel like you have cherry-picked a post from my post history that was less well written; some of my critical writing was better received (like this). If you do find engaging with me to be unpleasant, I am sorry; I am open to feedback, so feel free to send me a DM with constructive thoughts.
By “unpleasant” I don’t mean “the authors are behaving rudely”, I mean “the content/framing seems not very useful and I am sad about the effect it has on the discourse”.
I picked that post because it happened to have a good critical comment that I agreed with; I have analogous disagreements with some of your other posts (including the one you linked).
Thanks for your offer to receive critical feedback.
“the content/framing seems not very useful and I am sad about the effect it has on the discourse”
I think we very strongly disagree on this. I think critical posts like this have a very positive effect on discourse (in EA and elsewhere) and am happy with the framing of this post and a fair amount (although by no means all) of the content.
I think my belief here is rooted in quite strong lifetime experiences in favour of epistemic humility, human overconfidence especially in the domain of doing good, positive experiences of learning from good-faith criticisms, and academic evidence that more views in decision-making lead to better decisions. (I also think there have been some positive changes made as a result of recent criticism contests.)
I think it would be extremely hard to change my mind on this. I can think of a few specific cases (to support your views) where I am very glad criticisms were dismissed (e.g. the effective animal advocacy movement not truly engaging with abolitionist animal advocate arguments) but this seems to be more the exception than the norm. Maybe if my mind was changed on this it would be through more such case studies of people doing good really effectively without investing in the kind of learning that comes from well-meaning criticisms.
Also, I wonder if we should try (if we can find the time) co-writing a post on giving and receiving critical feedback in EA. Maybe we diverge in views too much and it would be a train wreck of a post, but it could be an interesting exercise to try, and maybe a way to pull out a ToC. I do agree there are things that I, the OP authors, and those responding to the OP could all do better.
It’s entirely reasonable to express your views on an issue on the EA forum for discussion and consideration, rather than immediately going directly to relevant stakeholders and lobbying for change. I think this is what almost every EA Forum post does and I have never before seen these posts criticised as ‘complaining’.
I thought Buck’s comment contained useful information, but was also impolite. I can see why people in favour of these proposals would find it frustrating to read.
One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy-in, which I assume is why they shared this publicly.
Whilst they are busy, I’d be pretty disappointed if the core EAs didn’t read this and take the ideas seriously (I’ve tried tagging some on Twitter), and if you’re correct that presenting such a detailed set of ideas on the forum is not enough to get core EAs to take the ideas seriously, I’d be concerned about whether there are places for people to get their ideas taken seriously. I’m lucky, I can walk into Trajan House and knock on people’s doors, but others presumably aren’t so lucky, and you would hope that a forum post that generated a lot of discussion would be taken seriously.
Moreover, if you are concerned with the ideas presented here not getting fair hearing, maybe you could try raising salient ideas to core EAs in your social circles?
I think that the class of arguments in this post deserve to be considered carefully, but I’m personally fine with having considered them in the past and decided that I’m unpersuaded by them, and I don’t think that “there is an EA Forum post with a lot of discussion” is a strong enough signal that I should take the time to re-evaluate a bunch—the EA Forum is full of posts with huge numbers of upvotes and lots of discussion which are extremely uninteresting to me.
(In contrast, e.g. the FTX collapse did prompt me to take the time to re-evaluate a bunch of what I thought about e.g. what qualities we should encourage vs discourage in EAs.)
I’d be pretty interested in you laying out in depth why you have basically decided to dismiss this very varied and large set of arguments. (Full disclosure: I don’t agree with all of them, but in general I think they’re pretty good.)
A self-admitted EA leader posting a response pooh-poohing a long, thought-out criticism with very little argumentation, and mostly criticising it on tangential ToC grounds (which you don’t think will succeed, or want to succeed, anyway?), seems like it could be construed as pretty bad faith and problematic. I don’t normally reply like this, but I think your original reply has essentially tried to play the man and not the ball, and I would expect better from a self-identified ‘central EA’ (not saying this is some massive failing, and I’m sure I’ve done similar myself a few times).
I interpreted Buck’s comment differently. His comment reads to me, not so much like “playing the man,” and more like “telling the man that he might be better off playing a different game.” If someone doesn’t have the time to write out an in-depth response to a post that takes 84 minutes to read, but they take the time to (I’d guess largely correctly) suggest to the authors how they might better succeed at accomplishing their own goals, that seems to me like a helpful form of engagement.
Maybe you’re correct, and that’s definitely how I interpreted it initially, but Buck’s response to me gave a different impression. Maybe I’m wrong, but it just strikes me as a little strange, if Buck feels he has considered these ideas and basically rejects them, why he would want to suggest to this group of concerned EAs how to better push for the ideas that Buck disagrees with. Maybe I’m wrong or have misinterpreted something, though; I wouldn’t be surprised.
why he would want to suggest to this group of concerned EAs how to better push for the ideas that Buck disagrees with
My guess was that Buck was hopeful that, if the post authors focus their criticisms on the cruxes of disagreement, that would help reveal flaws in his and others’ thinking (“inasmuch as I’m wrong it would be great if you proved me wrong”). In other words, I’d guess he was like, “I think you’re probably mistaken, but in case you’re right, it’d be in both of our interests for you to convince me of that, and you’ll only be able to do that if you take a different approach.”
[Edit: This is less clear to me now—see Gideon’s reply pointing out a more recent comment.]
I guess I’m a bit skeptical of this, given that Buck has said this to weeatquince “I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future”.
I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post...
I interpret this quote to be saying, “this style of criticism — which seems to lack a ToC and especially fails to engage with the cruxes its critics have, which feels much closer to shouting into the void than making progress on existing disagreements — is bad for the forum discourse by my lights. And it’s fine for me to dissuade people from writing content which hurts discourse”
Buck’s top-level comment is gesturing at a “How to productively criticize EA via a forum post, according to Buck”, and I think it’s noble to explain this to somebody even if you don’t think their proposals are good. I think the discourse around the EA community and criticisms would be significantly better if everybody read Buck’s top level comment, and I plan on making it the reference I send to people on the topic.
Personally I disagree with many of the proposals in this post and I also wish the people writing it had a better ToC, especially one that helps make progress on the disagreement, e.g., by commissioning a research project to better understand a relevant consideration, or by steelmanning existing positions held by people like me, with the intent to identify the best arguments for both sides.
My interpretation of Buck’s comment is that he’s saying that, insofar as he’s read the post, he sees that it’s largely full of ideas that he’s specifically considered and dismissed in the past, although he is not confident that he’s correct in every particular.
I think that the class of arguments in this post deserve to be considered carefully, but I’m personally fine with having considered them in the past and decided that I’m unpersuaded by them...
there are probably some suggestions made in this post that I would overall agree should be prioritized if I spent more time thinking about them
You want him to explain why he dismissed them in the past
I’d be pretty interested in you laying out in depth why you have basically decides to dismiss these very varied and large set of arguments.
And are confused about why he’d encourage other people to champion the ideas he disagrees with
why he would want to suggest to this group of concerned EAs how to better push for the ideas that Buck disagrees with
I think the explanation is that Buck is pretty pessimistic that these are by and large good ideas, enough not to commit more of his time to considering each one individually more than he has in the past. However, he sees that the authors are thinking about them a lot right now, and is inviting them to compete or collaborate effectively—to put these ideas to a real test of persuasion and execution. That seems far from “pooh-poohing” to me. It’s a piece of thoughtful corrective feedback.
You have asked Buck to “lay out in depth” his reasons for rejecting all the content in this post. That seems like a big ask to me, particularly given that he does not think they are good ideas. It would be like asking an evolutionary biologist to “lay out in depth” their reasons for rejecting all the arguments in Of Pandas and People. Or, for a personal example, I went to the AAAS conference right before COVID hit, and got to enjoy the spectacle of a climate change denier getting up in front of the ballroom and asking the geoengineering scientist who’d been speaking whether scientists had considered the possibility that the Earth is warming up because it’s getting closer to the sun. His response was “YES WE’VE CONSIDERED IT.”
If that question asker went home, wrote a whole book full of reasons why the Earth might be moving closer to the sun, posted it online, and it got a bunch of upvotes, I don’t think that means that suddenly the scientist needs to consider all of the arguments more closely, revisit the issue, or that rejecting the ideas gives one an obligation to explain all of one’s reasons.
One way you could address this problem is by choosing one specific argument from this post that you find most compelling, and seeing if you can invite Buck into a debate on that topic, or to explain his thinking on it. I often find that to be productive of good conversation. But your comment read to me as an attempt to both mischaracterize the tone of Buck’s comment and call into question the degree to which he’s thought about these issues. If you are accusing him of not actually having given these ideas as much thought as he claims, I think you should come right out and say it.
I agree with the text of your comment but think it’d be better if you chose your analogy to be about things that are more contested (rather than clearly false like creationism or AGW denial or whatever).
This avoids the connotation that Buck is clearly right to dismiss such criticisms.
One better analogy that comes to mind is asking Catholic theologians about the implausibility of a virgin birth, but unfortunately, I think religious connotations have their own problems.
I agree that this would have been better, but it was the example that came to mind and I’m going to trust readers to take it as a loose analogy, not a claim about which side is correct in the debate.
Fair! I think having maximally accurate analogies that help people be truth-seeking is hard, and of course the opportunity costs of maximally cooperative writing are high.
I took the time to read through and post where I agree and disagree; however, I understand why people might not have wanted to spend the time given that the document didn’t really try to engage very hard with the reasons for not implementing these proposals. I feel bad saying that because the authors clearly put a lot of time and effort into it, but I honestly think it would have been better if the group had chosen a narrower scope and focused on making a persuasive argument for that. And then maybe worked on the next section after that.
But who knows? There seems to be a bit of energy around this post, so maybe something comes out of this regardless.
One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy-in, which I assume is why they shared this publicly.
I think you’re right about this and that my comment was kind of unclearly equivocating between the suggestions that aimed at the community and the suggestions that aimed at orgs. (Though the suggestions aimed at the community also give me a vibe of “please, core EA orgs, start telling people that they should be different in these ways” rather than “here is my argument for why people should be different in these ways”).
I think all your specific points are correct, and I also think you totally miss the point of the post.
You say you thought about these things a lot. Maybe lots of core EAs have thought about these things a lot. But what core EAs have considered or not is completely opaque to us. Not so much because of secrecy, but because opaqueness is the natural state of things. So lots of non-core EAs are frustrated about lots of things. We don’t know how our community is run or why.
On top of that, there are actually consequences for complaining or disagreeing too much. Funders like to give money to people who think like them. Again, this requires no explanation. This is just the natural state of things.
So as non-core EAs, we notice things that seem wrong, we’re afraid to speak up against them, and it sucks. That’s what this post is about.
And of course it’s naive and shallow and not adding much for anyone who has already thought about this for years. For the authors this is the start of the conversation, because they, and most of the rest of us, were not invited to all the previous conversations like this.
I don’t agree with everything in the post. Lots of the suggestions seem nonsensical in the way you point out. But I agree with the notion of “can we please talk about this”. Even just to acknowledge that some of these problems do exist.
It’s much easier to build support for a solution if there is common knowledge that the problem exists. When I started organising in the field of AI Safety, I was focusing on solving problems that weren’t on the map for most people. This caused lots of misunderstandings which made it harder to get funded.
I’ll generically note that if you want to make a project happen, coming up with the idea for the project is usually a tiny fraction of the effort required. Also, very few projects have been made to happen by someone having the idea for them and writing it up, and then some other person stumbling across that writeup and deciding to do the project.
This is correct. But projects have happened because there was widespread knowledge of a specific problem, and then someone else decided to design their own project to solve that problem.
This is why it is valuable to have an open conversation to create a shared understanding of what the current problems are that EA should focus on. This includes discussions about cause prioritisation, but also discussion about meta/community issues.
In that spirit I want to point out that it seems to me that core EAs have no understanding of what things look like (what information is available, etc.) from a non-core EA perspective, and vice versa.
I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact. I disagree with this kind of view.
I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact.
Obviously it’s the responsibility of EAs and EA orgs to actively seek out ways that they could do things better. But I’m just noting that it seems unlikely to me that this post will actually persuade EA orgs to do things differently, and so if the authors had hoped to have impact via that route, they should try another plan instead.
If that’s your goal, I think you should try harder to understand why core org EAs currently don’t agree with your suggestions, and try to address their cruxes. For this ToC, “upvotes on the EA Forum” is a useless metric—all you should care about is persuading a few people who have already thought about this all a lot. I don’t think that your post here is very well optimized for this ToC.
… I think the arguments it makes are weak (and I’ve been thinking about these arguments for years, so it would be a bit surprising if there was a big update from thinking about them more.)
If you and other core org EAs have thoroughly considered many of the issues the post raises, why isn’t there more reasoning transparency on this? Besides being a good practice in general (especially when the topic is how the EA ecosystem fundamentally operates), it would make it a lot easier for the authors and others on the forum to deliver more constructive critiques that target cruxes.
As far as I know, the cruxes of core org EAs are nowhere to be found for many of the topics this post covers.
I think Lark’s response is reasonably close to my object-level position.
My quick summary of a big part of my disagreement: a major theme of this post suggests that various powerful EAs hand over a bunch of power to people who disagree with them. The advantage of doing that is that it mitigates various echo chamber failure modes. The disadvantage of doing that is that now, people who you disagree with have a lot of your resources, and they might do stuff that you disagree with. For example, consider the proposal “OpenPhil should diversify its grantmaking by giving half its money to a randomly chosen Frenchman”. This probably reduces echo chamber problems in EA, but it also seems to me like a terrible idea.
I don’t think the post properly engages with the question “how ought various powerful people weigh the pros and cons of transferring their power to people they disagree with”. I think this question is very important, and I think about it a fair bit, but I think that this post is a pretty shallow discussion of it that doesn’t contribute much novel insight.
I encourage people to write posts on the topic of “how ought various powerful people weigh the pros and cons of transferring their power to people they disagree with”; perhaps such posts could look at historical examples, or mechanisms via which powerful people can get the echo-chamber-reduction effects without the random-people-now-use-your-resources-to-do-their-random-goals effect.
Some things that I might come to regret about my comment:
I think it’s plausible that it’s bad for me to refer to disagreeing with arguments without explaining why.
I’ve realized that some commenters might not have seen these arguments before, which makes me think that there is more of an opportunity for me to explain why I think these arguments are wrong. (EDIT I’m less worried about this now, because other commenters have weighed in making most of the object-level criticisms I would have made.)
I was not very transparent about my goal with this comment, which is generally a bad sign. My main goal was to argue that posts like this are a kind of unhealthy way of engaging with EA, and that readers should be more inclined to respond with “so why aren’t you doing anything” when they read such criticisms.
I strongly disagree with this response, and find it bizarre.
I think assessing this post according to a limited number of possible theories of change is incorrect, as influence is often diffuse and hard to predict or measure.
I agree with freedomandutility’s description of this as an “isolated demand for [something like] rigor”.
There seem to have been a lot of responses to your comment, but there are some points which I don’t see being addressed yet.
I would be very interested in seeing another similarly detailed response from an ‘EA leader’ whose work focuses on community building/community health. (Put on top as this got quite long; rationale below, but first:)
I think at least a goal of the post is to get community input (as I’ve seen in many previous forum posts) to determine the best suggestions without claiming to have all the answers. Quoted from the original post (intro to ‘Suggested Reforms’):
Below, we have a preliminary non-exhaustive list of suggestions for structural and cultural reform that we think may be a good idea and should certainly be discussed further.
It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!
In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.
This suggests to me that instead of trying to convince the ‘EA leadership’ of any one particular change, they want input from the rest of the community.
From a community building perspective, I can (epistemic status: brainstorming, but plausible) see that a comment like yours can be harmful, and create more negative perception of EA than the post itself. Perhaps new/newer/potential/(and even existing) EAs will read the original post, and they may skim this post/read parts/even read the comments first (I don’t think very many people will have read all 84 minutes, and the comments on long posts sometimes point to key/interesting sections). A top comment: yours, highly upvoted.
Impressions that they can potentially draw from your response (one or more of the below):
There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster)
If the authors of this post are asking for community opinion on which changes are good after raising concerns, the top comment (for a while at least) criticising this for a lack of theory of change suggests a low regard from the EA leadership for the opinions of the EA community overall (regardless of agreement with any specific element of the original post)
Unless I am very high up and in the core EA group, I am unlikely to be listened to
While EA is open to criticism in theory, it is not open to changing based on criticism as the leadership has already reasoned about this as much as they are going to
I am not saying that any of the above is true, or that it is absolute (i.e. someone would be led to believe in one of these things absolutely instead of it being on a sliding scale). But if I was new to EA, it is plausible that this comment would be far more likely to put me off continuing to engage than anything written in the actual post itself. Perhaps you can see how this may be perceived, even if it was not intended this way?
I also think some of the suggestions are likely more relevant and require more thought from people actively working in e.g. community building strategy, than someone who is CTO of an AI alignment research organisation (from your profile)/a technical role more generally, at least in terms of considerations that are required in order to have greatest impact in their work.
Thanks for your sincere reply (I’m not trying to say other people aren’t sincere, I just particularly felt like mentioning it here).
Here are my thoughts on the takeaways you thought people might have.
There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster)
As I said in my comment, I think that it’s true that the actions of EA-branded orgs are largely influenced by a relatively small number of people who consider each other allies and (in many cases) friends. (Though these people don’t necessarily get along or agree on things—for example, I think William MacAskill is a well-intentioned guy but I disagree with him a bunch on important questions about the future and various short-term strategy things.)
If the authors of this post are asking for community opinion on which changes are good after raising concerns, the top comment (for a while at least) criticising this for a lack of theory of change suggests a low regard from the EA leadership for the opinions of the EA community overall (regardless of agreement with any specific element of the original post)
Not speaking for anyone else here, but it’s totally true that I have a pretty low regard for the quality of the average EA Forum comment/post, and don’t think of the EA Forum as a place where I go to hear good ideas about ways EA could be different (though occasionally people post good content here).
Unless I am very high up and in the core EA group, I am unlikely to be listened to
For whatever it’s worth, in my experience, people who show up in EA and start making high-quality contributions quickly get a reputation among people I know for having useful things to say, even if they don’t have any social connection.
I gave a talk yesterday where someone I don’t know made some objections to an argument I made, and I provisionally changed my mind about that argument based on their objections.
While EA is open to criticism in theory, it is not open to changing based on criticism as the leadership has already reasoned about this as much as they are going to
I think “criticism” is too broad a category here. I think it’s helpful to provide novel arguments or evidence. I also think it’s helpful to provide overall high-level arguments where no part of the argument is novel, but it’s convenient to have all the pieces in one place (e.g. Katja Grace on slowing down AI). I (perhaps foolishly) check the EA Forum and read/skim potentially relevant/interesting articles, so it’s pretty likely that I end up reading your stuff and thinking about it at least a little.
I also think some of the suggestions are likely more relevant and require more thought from people actively working in e.g. community building strategy, than someone who is CTO of an AI alignment research organisation (from your profile)/a technical role more generally, at least in terms of considerations that are required in order to have greatest impact in their work.
You’re right that my actions are less influenced by my opinions on the topics raised in this post than community building people’s are (though questions about e.g. how much to value external experts are relevant to me). On the other hand, I am a stakeholder in EA culture, because capacity for object-level work is the motivation for community building.
I think that’s particularly true of some of the calls for democratization. The Cynic’s Golden Rule (“He who has the gold, makes the rules”) has substantial truth both in the EA world and in almost all charitable movements. In the end, if the people with the money aren’t happy with the idea of random EAs spending their money, it just isn’t going to happen. And to the extent there is a hint of cutting off or rejecting donors, that would lead to a much smaller EA to the extent it was followed. In actuality, it wouldn’t be—someone is going to take the donor’s money in almost all cases, and there’s no EA High Council to somehow cast the rebel grantee out of the movement.
Speaking as a moderate reform advocate, the flipside of this is that the EA community has to acknowledge the origin of power and not assume that the ecosystem is somehow immune to the Cynic’s Golden Rule. The people with power and influence in 2023 may (or may not) be wise and virtuous, but they are not in power (directly) because they are wise and virtuous. They have power and influence in large part because it has been granted to them by Moskovitz and Tuna (or their delegates, or by others with power to move funding and other resources). If Moskovitz and Tuna decided to fire Open Phil tomorrow and make all their spending decisions based on my personal recommendations, I would become immensely powerful and influential within EA irrespective of how wise and virtuous I may be. (If they are reading, this would be a terrible idea!!)
“If elites haven’t already thought of/decided to implement these ideas, they’re probably not very good. I won’t explain why. ”
“Posting your thoughts on the EA Forum is complaining, but I think you will fail if you try to do anything different. I won’t explain why, but I will be patronising.”
“Meaningful organisational change comes from the top down, and you should be more polite in requesting it. I doubt it’ll do anything, though.”
Do you see any similarities between your response here and the problems highlighted by the original post, Buck?
The tone policing, dismissing criticism out of hand, lack of any real object-level engagement, pretending community responsibility doesn’t exist, and patronisingly trying to shut down others is exactly the kind of chilling effect that this post is drawing attention to.
The fact that a comment from a senior community member has led to deference from other community members, leading to it becoming the top-voted comment, is not a surprise. But my support for such weak critiques (using vague dismissals that things are ‘likely net-negative’, or just stating his own opinion with little to no justification) is pretty low.
And the wording is so patronising and impolite, too. What a perfect case study in the kinds of behaviours EA should no longer tolerate.
Interesting that another commenter has the opposite view, and criticises this post for being persuasive instead of explanatory!
It may just be disagreement, but I think it might be a result of a bias of readers to focus on framing instead of engaging with object-level views when it comes to criticisms.
One irony is that it’s often not that hard to change EA orgs’ minds. E.g. on the forum suggestion, which is the one that most directly applies to me: you could look at the posts people found most valuable and see if a more democratic voting system better correlates with what people marked as valuable than our current system. I think you could probably do this in a weekend, it might even be faster than writing this article, and it would be substantially more compelling.[1]
(CEA is actually doing basically this experiment soon, and I’m >2/3 chance the results will change the front page somehow, though obviously it’s hard to predict the results of experiments in advance.)
If anyone reading this actually wants to do this experiment please DM me – I have various ideas for what might be useful and it’s probably good to coordinate so we don’t duplicate work
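To make the suggested experiment a bit more concrete, here is a minimal sketch of what the comparison could look like, assuming a data export with per-post scores under the current voting system, scores under a hypothetical flat (one person, one vote) scheme, and counts of how many readers marked each post as most valuable. The file name, column names, and the flat-voting counterfactual are purely illustrative assumptions, not a description of any actual CEA dataset or analysis.

```python
# Hypothetical sketch: compare how well two scoring schemes track "marked most valuable".
# Assumed (made-up) columns in the export:
#   karma_current   - post score under the current weighted voting system
#   karma_flat      - post score if every vote counted equally ("one person, one vote")
#   marked_valuable - number of readers who marked the post as most valuable
import pandas as pd
from scipy.stats import spearmanr

posts = pd.read_csv("forum_posts.csv")

# Rank correlation of each scoring scheme with the "marked valuable" signal.
rho_current, _ = spearmanr(posts["karma_current"], posts["marked_valuable"])
rho_flat, _ = spearmanr(posts["karma_flat"], posts["marked_valuable"])

print(f"Current voting vs. 'most valuable':        rho = {rho_current:.2f}")
print(f"Flat (democratic) voting vs. 'most valuable': rho = {rho_flat:.2f}")
```

The interesting output is simply which scheme correlates better with what readers actually found valuable; the hard part of the experiment is getting a reasonable "marked valuable" signal and a defensible flat-voting counterfactual, not the analysis itself.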
Relatedly, I think a short follow-up piece listing 5-10 proposed specific action items tailored to people in different roles in the community would be helpful. For example, I have the roles of (1) low-five-figure donor, and (2) active forum participant. Other people have roles like student, worker in an object-level organization, worker in a meta organization, object-level org leader, meta org leader, larger donor, etc. People in different roles have different abilities (and limitations) in moving a reform effort forward.
I think “I didn’t walk away with a clear sense of what someone like me should do if I agree with much/all of your critique” is helpful/friendly feedback. I’m hesitant to even mention it because the authors have put so much (unpaid!) work into this post already, and I don’t want to burden them with what could feel like the expectation of even more work. But I think it’s still worth making the point for future reference if for no other reason.
I think it’s fairly easy for readers to place ideas on a spectrum and identify trade-offs when reading criticisms, if they choose to engage properly.
I think the best way to read criticisms is to steelman as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test.
For context, I’m definitely in the social cluster of powerful EAs, though don’t have much actual power myself except inside my org (and my ability to try to persuade other EAs by writing posts etc); I arguably had more power when I was actively grantmaking on the EAIF but I no longer do this.
Can you clarify this statement? I’m confused about a couple of things:
Why is it only “arguable” that you had more power when you were an active grantmaker?
Do you mean you don’t have much power, or that you don’t use much power?
Why is it only “arguable” that you had more power when you were an active grantmaker?
I removed “arguable” from my comment. I intended to communicate that even when I was an EAIF grantmaker, that didn’t clearly mean I had “that much” power—e.g. other fund managers reviewed my recommended grant decisions, and I moved less than a million dollars, which is a very small fraction of total EA spending.
Do you mean you don’t have much power, or that you don’t use much power?
I mean that I don’t have much discretionary power (except inside Redwood). I can’t unilaterally make many choices about e.g. EA resource allocation. Most of my influence comes via arguing that other people should do things with discretionary power that they have. If other people decided to stop listening to me or funding me, I wouldn’t have much recourse.
I appreciate the clarification!
It sounds to me like what you’re saying is that you don’t have any formal power over non-Redwood decisions, and most of your power comes from your ability to influence people. Furthermore, this power can be taken away from you without you having any choice in the matter. That seems fair enough. But then you seem to believe that this means you don’t actually have much power? That seems wrong to me. Am I misunderstanding something?
[For context, I’m definitely in the social cluster of powerful EAs, though don’t have much actual power myself except inside my org (and my ability to try to persuade other EAs by writing posts etc); I had more power when I was actively grantmaking on the EAIF but I no longer do this. In this comment I’m not speaking for anyone but myself.]
This post contains many suggestions for ways EA could be different. The fundamental reason that these things haven’t happened, and probably won’t happen, is that no-one who would be able to make them happen has decided to make them happen. I personally think this is because these proposals aren’t very good. And so:
people in EA roles where they could adopt these suggestions choose not to
and people who are capable/motivated enough that they could start new projects to execute on these ideas (including e.g. making competitors to core EA orgs) end up deciding not to.
And so I wish that posts like this were clearer about their theory of change (henceforth abbreviated ToC). You’ve laid out a long list of ways that you wish EA orgs behaved differently. You’ve also made the (IMO broadly correct) point that a lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies; I’ll refer to this group as “core org EAs” for brevity. But I don’t understand how you hope to cause EA orgs to change in these ways.
Maybe your ToC is that core org EAs read your post (and similar posts) and are intellectually persuaded by your suggestions, and adopt them.
If that’s your goal, I think you should try harder to understand why core org EAs currently don’t agree with your suggestions, and try to address their cruxes. For this ToC, “upvotes on the EA Forum” is a useless metric—all you should care about is persuading a few people who have already thought about this all a lot. I don’t think that your post here is very well optimized for this ToC.
(Note that this doesn’t mean that they think your suggestions aren’t net positive, but these people are extremely busy and have to choose to pursue only a tiny fraction of the good-seeming things (which are the best-seeming-to-them things) so demonstrating that something is net positive isn’t nearly enough.)
Also, if this is the ToC, I think your tone should be one of politely suggesting ways that someone might be able to do better work by their own lights. IMO, if some EA funder wants to fund an EA org to do things a particular way, you have no particular right to demand that they do things differently, you just have the ability to try to persuade them (and it’s their prerogative whether to listen).
For what it’s worth, I am very skeptical that this ToC will work. I personally think that this post is very unpersuasive, and I’d be very surprised if I changed my mind to agree with it in the next year, because I think the arguments it makes are weak (and I’ve been thinking about these arguments for years, so it would be a bit surprising if there was a big update from thinking about them more.)
Maybe your ToC is that other EAs read your post and are persuaded by your suggestions, and then pressure the core org EAs to adopt some of your suggestions even though they disagree with them.
If so, you need to think about which ways EAs can actually apply pressure to the core org EAs. For example, as someone who prioritizes short-AI-timelines longtermist work over global health and development, I am not very incentivized to care about whether random GWWC members will stop associating with EA if EA orgs don’t change in some particular way. In contrast, if you convinced all the longtermist EAs that they should be very skeptical of working on longtermism until there was a redteaming process like the one you described, I’d feel seriously incentivized to work on that redteaming process. Right now, the people I want to hire mostly don’t agree with you that the redteaming process you named would be very informative; I encourage you to try to persuade them otherwise.
Also, I think you should just generally be scared that this strategy won’t work? You want core org EAs to change a bunch of things in a bunch of really specific ways, and I don’t think that you’re actually going to be able to apply pressure very accurately (for similar reasons that it’s hard for leaders of the environmentalist movement to cause very specific regulations to get passed).
(Note that I don’t think you should engage in uncooperative behavior (e.g. trying to set things up specifically so that EA orgs will experience damage unless they do a particular thing). I think it’s totally fair game to try to persuade people of things that are true because you think that that will cause those people to do better things by their own lights; I think it’s not fair game to try to persuade people of things because you want to force someone’s hand by damaging them. Happy to try to explain more about what I mean here if necessary; for what it’s worth I don’t think that this post advocates being uncooperative.)
Perhaps you think that the core org EAs think of themselves as having a duty to defer to self-identified EAs, and so if you can just persuade a majority of self-identified EAs, the core org EAs will dutifully adopt all the suggestions those self-identified EAs want.
I don’t think this is realistic–I don’t think that core EA orgs mostly think of themselves as executing on the community’s wishes, I think they (as they IMO should) think of themselves as trying to do as much good as possible (subject to the constraint of being honest and reliable etc).
I am somewhat sympathetic to the perspective that EA orgs have implied that they do think of themselves as trying to represent the will of the community, rather than just viewing the community as a vehicle via which they might accomplish some of their altruistic goals. Inasmuch as this is true, I think it’s bad behavior from these orgs. I personally try to be clear about this when I’m talking to people.
Maybe your ToC is that you’re going to start up a new set of EA orgs/projects yourself, and compete with current EA orgs on the marketplace of ideas for funding, talent, etc? (Or perhaps you hope that some reader of this post will be inspired to do this?)
I think it would be great if you did this and succeeded. I think you will fail, but inasmuch as I’m wrong it would be great if you proved me wrong, and I’d respect you for actively trying much more than I respect you for complaining that other people disagree with you.
If you wrote a post trying to persuade EA donors that they should, instead of other options, donate to an org that you started that will do many of the research projects you suggested here, I would think that it was cool and admirable that you’d done that.
For many of these suggestions, you wouldn’t even need to start orgs. E.g. you could organize/fundraise for research into “the circumstances under which individual subjective Bayesian reasoning actually outperforms other modes of thought, by what criteria, and how this varies by subject/domain”.
I’ll generically note that if you want to make a project happen, coming up with the idea for the project is usually a tiny fraction of the effort required. Also, very few projects have been made to happen by someone having the idea for them and writing it up, and then some other person stumbling across that writeup and deciding to do the project.
Maybe you don’t have any hope that anything will change, but you heuristically believe that it’s good anyway to write up lists of ways that you think other people are behaving suboptimally. For example, I have some sympathy for people who write op-eds complaining about ways that their government is making poor choices, even if they don’t have a detailed theory of change.
I think this is a fine thing to do, when you don’t have more productive ways to channel your energy. In the case of this post in particular, I feel like there are many more promising theories of change available, and I think I want to urge people who agree with it to pursue those.
Overall my main complaint about this post is that it feels like it’s fundamentally taking an unproductive stance–I feel like it’s sort of acting as if its goal is to persuade core EAs, but actually it’s just using that as an excuse to ineffectually complain or socially pressure; if it were trying to persuade, more attention would be paid to tradeoffs and cruxes. People sympathetic to the perspective in this post should either seriously attempt to persuade, or they should resort to doing things themselves instead of complaining when others don’t do those things.
(Another caveat on this comment: there are probably some suggestions made in this post that I would overall agree should be prioritized if I spent more time thinking about them.)
(In general, I love competition. For example, when I was on the EAIF I explicitly told some grantees that I thought that their goal should be to outcompete CEA, and I’ve told at least one person that I’d love it if they started an org that directly competes with my org.)
I strongly downvoted this response.
The response says that EA will not change (“people in EA roles [will] … choose not to”), that making constructive critiques is a waste of time (“[not a] productive way to channel your energy”), and that the critique should have been better (“I wish that posts like this were clearer”, “you should try harder”, “[maybe try] politely suggesting”).
This response seems to be putting all the burden of making progress in EA onto those trying to constructively critique the movement, people who are putting their limited spare time into trying to be helpful, and taking that burden away from those who are actively paid to work on improving this movement. I don’t think you understand how hard it is to write something like this, or how much effort must have gone into making each of these critiques readable and understandable to the EA community. It is not their job to try harder, or to be more polite. It is our job, your job, my job, as people in EA orgs, to listen, to learn, to consider, and, if we can, to do better.
Rather than saying the original post should be better, maybe the response should be that those reading the original post should be better at considering the issues raised.
I cannot think of a more dismissive or disheartening response. I think this response will actively dissuade future critiques of EA (I feel less inclined to try my hand at critical writing after seeing this as the top response) and as such make the community more insular and less epistemically robust. I also think this response will make the authors of this post feel like their efforts are wasted and unheard.
I think this is a weird response to what Buck wrote. Buck isn’t paid to reform the EA movement either, or to respond to criticism on the EA Forum, and he decided to spend his limited time expressing how things realistically look from his perspective.
I think it is good if people write responses like that, and such responses should be upvoted, even if you disagree with the claims. Downvotes should not express ‘I disagree’, but ‘I don’t want to read this’.
Even if you believe EA orgs are horrible and should be completely reformed, in my view you should be glad that Buck wrote his comment, as it gives you a better idea of what people like him may think.
It’s important to understand that the alternative to this comment is not Buck writing a detailed 30-page response. The alternative is, in my guess, just silence.
Thank you for the reply, Jan. My comment was not about whether I disagree with any of the content of what Buck said. My comment was objecting to what came across to me as a dismissive, “try harder”, tone-policing attitude (see the quotes I pulled out) that is ultimately antithetical to the kind, considerate, and open-to-criticism community that I want to see in EA. Hopefully that explains where I’m coming from.
From my perspective, it feels like the burden of making progress in EA is substantially on the people who actually have jobs where they try to make EA go better; my take is that EA leaders are making the correct prioritization decision by spending their “time for contemplating ways to improve EA” budget mostly on other things than “reading anonymous critical EA Forum posts and engaging deeply”.
I think part of my model is that it’s totally normal for online critiques of things to not be very interesting or good, while you seem to have a strong prior that online critiques are worth engaging with in depth. Like, idk, did you know that literally anyone can make an EA Forum account and start commenting and voting? Public internet forums are famously bad; why do you believe that this one is worth engaging extensively with?
I consider this a good outcome—I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future (though I can imagine changing my mind on this).
I understand that my comment poses some risk of causing people who would have made useful criticisms feel discouraged from doing so. My current sense is that at the margin, this cost is smaller than the other benefits of my comment?
Remember that I thought that their efforts were already wasted and unheard (by anyone who will realistically do anything about them); don’t blame the messenger here. I recommend instead blaming all the people who upvoted this post and who could, if they wanted to, help to implement many of the shovel-ready suggestions in this post, but who will instead choose not to do that.
This was a disappointing comment to read from a well-respected researcher, and it negatively updates me against encouraging people to work and collaborate with you in the future, because I think it reflects a callousness, as well as an insensitivity towards power dynamics, which I would not want to see in a manager or someone running an AI alignment organization. In my opinion, it is fair game for me to make truthful comments that cause you to feel less incentivized to write comments like this in future (though I can imagine changing my mind on this).
I do not actually endorse the comment above. It is used as an illustration of why a statement being true might not, on its own, make it “fair game”, or a constructive way to approach what you want to say. Here is my real response:
In terms of whether it is “fair game” or not: consider some junior EA who made a comment to you, “I would prefer an EA forum without your critical writing on it”. This has basically zero implications for you. No one is going to take them seriously, unless they provide receipts and point out what they disliked. But this isn’t the case in reverse. So I think if you are someone seen to be a “powerful EA”, or someone whose opinion is taken pretty seriously, you should take significant care when making statements like this, because some people might update simply based on your views. I haven’t engaged with much of weeatquince’s work, but EA is a small enough community that these kinds of opinions can probably have a harmful impact on someone’s involvement in EA – I don’t think the disclaimers around “I no longer do grantmaking for the EAIF” are particularly reassuring on this front. For example, I imagine if Holden came and made a comment in response to someone, “I find your posts unhelpful, distracting, and unpleasant. I would prefer an EA forum without your critical writing on it”, this could lead to information cascades and reputational repercussions that don’t accurately reflect weeatquince’s actual quality of work. You are not Holden, but it would be reasonable for you to expect your opinions to have sway in the EA community.
FWIW, your comment will update people away from posting under their main accounts, and I think a forum environment where people feel even more inclined to make alt accounts because they are worried about reputational repercussions from someone like you coming along with a comment like “I would prefer an EA Forum without your critical writing on it” is intimidating and not ideal for community engagement. Because you haven’t provided any justification for your claim aside from Rohin’s comment, which points at strawmanning to some extent, I don’t know what this means for my work and whether my comments will pass your bar. Why not just let other users downvote low-quality comments, and if you have a particular quality bar for posts that you think the downvotes don’t capture, just filter your frontpage so you only see posts with >50 or >100 karma? If you disagree with the way the people running the forum are using the karma system, or with their idea of who should post and what the signal:noise ratio should be, you should take that to the EA Forum folks. Because if I were a new EA member, I’d be deleting my draft posts after reading a comment like this, and I’d find it disconcerting that I’m being encouraged to post by the mods but might bump into senior EA members who say this about my good-faith contributions.
As a random aside, I thought that your first paragraph was totally fair and reasonable and I had no problem with you saying it.
Thanks for your comment. Your comment seems to me to be equivocating between two things: whether I negatively judge people for writing certain things, and whether I publicly say that I think certain content makes the EA Forum worse. In particular, I did the latter, but you’re worrying about the former.
I do update on people when they say things on this forum that I think indicate bad things about their judgment or integrity, as I think I should, but for what it’s worth I am very quick to forgive and don’t hold long grudges. Also, it’s quite rare for me to update against someone substantially from a single piece of writing of theirs that I disliked. In general, I think people in EA worry too much about being judged negatively for saying things and underestimate how forgiving people are (especially if a year passes or if you say particularly reasonable things in the meantime).
@Buck – As a hopefully constructive point, I think you could have written a comment that served the same function but was less potentially off-putting by clearly separating a general critique of critical writing on the EA Forum from critiques of specific people (me or the OP author).
I agree! But given this, I think the two things you mention often feel highly correlated, and it’s hard for people to actually know, when you make a statement like that, that there’s no negative judgement either from you or from other readers of your statement. It also feels a bit weird to suggest there’s no negative judgement if you also think the forum is a better place without their critical writing?
I also agree with this, which is why I wanted to push back on your comment: I think it would be understandable for someone to read it and worry more about being judged negatively. If you think people are poorly calibrated, you should err on the side of giving people reasons to update in the right direction, instead of potentially exacerbating the misconception.
I think you and Buck are saying different things:
you are saying “people in EA should worry less about being judged negatively, because they won’t be judged negatively”,
Buck is saying “people in EA should worry less about being judged negatively, because it’s not so bad to be judged negatively”.
I think these points have opposite implications about whether to post judgemental comments, and about what impact a judgemental comment should have on you.
Oh interesting – I hadn’t noticed that interpretation; thanks for pointing it out. That being said, I do think it’s much easier for someone in a more established senior position, who isn’t particularly at risk of bad outcomes from negative judgements, to suggest that negative judgements are not so bad, or to use that as a justification for making negative judgements.
I think this is somewhat unfair. It is unfair to describe this OP as “unpleasant”: it seems to be clearly and impartially written, and it goes out of its way to make clear that it is not picking on individuals. Also, I feel like you have cherry-picked a post from my post history that was less well written; some of my critical writing was better received (like this). If you do find engaging with me to be unpleasant, I am sorry; I am open to feedback, so feel free to send me a DM with constructive thoughts.
By “unpleasant” I don’t mean “the authors are behaving rudely”, I mean “the content/framing seems not very useful and I am sad about the effect it has on the discourse”.
I picked that post because it happened to have a good critical comment that I agreed with; I have analogous disagreements with some of your other posts (including the one you linked).
Thanks for your offer to receive critical feedback.
Thank you Buck that makes sense :-)
I think we very strongly disagree on this. I think critical posts like this have a very positive effect on discourse (in EA and elsewhere) and am happy with the framing of this post and a fair amount (although by no means all) of the content.
I think my belief here is rooted in quite strong lifetime experiences in favour of epistemic humility, in human overconfidence (especially in the domain of doing good), in positive experiences of learning from good-faith criticisms, and in academic evidence that more views in decision-making lead to better decisions. (I also think there have been some positive changes made as a result of recent criticism contests.)
I think it would be extremely hard to change my mind on this. I can think of a few specific cases (to support your views) where I am very glad criticisms were dismissed (e.g. the effective animal advocacy movement not truly engaging with abolitionist animal advocates’ arguments), but this seems to be more the exception than the norm. Maybe if my mind were changed on this it would be through more such case studies of people doing good really effectively without investing in the kind of learning that comes from well-meaning criticisms.
Also, I wonder if we should try (if we can find the time) co-writing a post on giving and receiving critical feedback in EA. Maybe we diverge in views too much and it would be a train wreck of a post, but it could be an interesting exercise to try, maybe trying to pull out ToCs. I do agree there are things that, I think, both I and the OP authors (and those responding to the OP) could do better.
I think the criticism of the theory of change here is a good example of an isolated demand for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/), which I feel EAs often apply when it comes to criticisms.
It’s entirely reasonable to express your views on an issue on the EA forum for discussion and consideration, rather than immediately going directly to relevant stakeholders and lobbying for change. I think this is what almost every EA Forum post does and I have never before seen these posts criticised as ‘complaining’.
I thought Buck’s comment contained useful information, but was also impolite. I can see why people in favour of these proposals would find it frustrating to read.
One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy-in, which I assume is why they posted this publicly. Whilst they are busy, I’d be pretty disappointed if the core EAs didn’t read this and take the ideas seriously (I’ve tried tagging some on Twitter), and if you’re correct that presenting such a detailed set of ideas on the forum is not enough to get core EAs to take the ideas seriously, I’d be concerned about where there were places for people to get their ideas taken seriously. I’m lucky: I can walk into Trajan House and knock on people’s doors, but others presumably aren’t so lucky, and you would hope that a forum post that generated a lot of discussion would be taken seriously. Moreover, if you are concerned about the ideas presented here not getting a fair hearing, maybe you could try raising salient ideas to core EAs in your social circles?
I think that the class of arguments in this post deserve to be considered carefully, but I’m personally fine with having considered them in the past and decided that I’m unpersuaded by them, and I don’t think that “there is an EA Forum post with a lot of discussion” is a strong enough signal that I should take the time to re-evaluate a bunch—the EA Forum is full of posts with huge numbers of upvotes and lots of discussion which are extremely uninteresting to me.
(In contrast, e.g. the FTX collapse did prompt me to take the time to re-evaluate a bunch of what I thought about e.g. what qualities we should encourage vs discourage in EAs.)
I’d be pretty interested in you laying out in depth why you have basically decided to dismiss this very varied and large set of arguments. (Full disclosure: I don’t agree with all of them, but in general I think they’re pretty good.) A self-admitted EA leader posting a response poo-pooing a long, thought-out criticism with very little argumentation, and mostly criticising it on tangential ToC grounds (which you don’t think will succeed, or want to succeed, anyway?), seems like it could be construed as pretty bad faith and problematic. I don’t normally reply like this, but I think your original reply has essentially tried to play the man and not the ball, and I would expect better from a self-identified ‘central EA’ (not saying this is some massive failing, and I’m sure I’ve done similar myself a few times).
I interpreted Buck’s comment differently. His comment reads to me, not so much like “playing the man,” and more like “telling the man that he might be better off playing a different game.” If someone doesn’t have the time to write out an in-depth response to a post that takes 84 minutes to read, but they take the time to (I’d guess largely correctly) suggest to the authors how they might better succeed at accomplishing their own goals, that seems to me like a helpful form of engagement.
Maybe you’re correct, and that’s definitely how I interpreted it initially, but Buck’s response to me gave a different impression. Maybe I’m wrong, but it just strikes me as a little strange that, if Buck feels he has considered these ideas and basically rejects them, he would want to suggest to this group of concerned EAs how to go about pushing more effectively for the ideas he disagrees with. Maybe I’m wrong or have misinterpreted something, though; I wouldn’t be surprised.
My guess was that Buck was hopeful that, if the post authors focus their criticisms on the cruxes of disagreement, that would help reveal flaws in his and others’ thinking (“inasmuch as I’m wrong it would be great if you proved me wrong”). In other words, I’d guess he was like, “I think you’re probably mistaken, but in case you’re right, it’d be in both of our interests for you to convince me of that, and you’ll only be able to do that if you take a different approach.”
[Edit: This is less clear to me now—see Gideon’s reply pointing out a more recent comment.]
I guess I’m a bit skeptical of this, given that Buck has said this to weeatquince “I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future”.
This evidence doesn’t update me very much.
I interpret this quote to be saying, “this style of criticism — which seems to lack a ToC and especially fails to engage with the cruxes its critics have, which feels much closer to shouting into the void than making progress on existing disagreements — is bad for the forum discourse by my lights. And it’s fine for me to dissuade people from writing content which hurts discourse”
Buck’s top-level comment is gesturing at a “How to productively criticize EA via a forum post, according to Buck”, and I think it’s noble to explain this to somebody even if you don’t think their proposals are good. I think the discourse around the EA community and criticisms would be significantly better if everybody read Buck’s top level comment, and I plan on making it the reference I send to people on the topic.
Personally I disagree with many of the proposals in this post and I also wish the people writing it had a better ToC, especially one that helps make progress on the disagreement, e.g., by commissioning a research project to better understand a relevant consideration, or by steelmanning existing positions held by people like me, with the intent to identify the best arguments for both sides.
My interpretation of Buck’s comment is that he’s saying that, insofar as he’s read the post, he sees that it’s largely full of ideas that he’s specifically considered and dismissed in the past, although he is not confident that he’s correct in every particular.
You want him to explain why he dismissed them in the past
And are confused about why he’d encourage other people to champion the ideas he disagrees with
I think the explanation is that Buck is pretty pessimistic that these are by and large good ideas, enough not to commit more of his time to considering each one individually more than he has in the past. However, he sees that the authors are thinking about them a lot right now, and is inviting them to compete or collaborate effectively—to put these ideas to a real test of persuasion and execution. That seems far from “poo-poohing” to me. It’s a piece of thoughtful corrective feedback.
You have asked Buck to “lay out in depth” his reasons for rejecting all the content in this post. That seems like a big ask to me, particularly given that he does not think they are good ideas. It would be like asking an evolutionary biologist to “lay out in depth” their reasons for rejecting all the arguments in Of Pandas and People. Or, for a personal example, I went to the AAAS conference right before COVID hit, and got to enjoy the spectacle of a climate change denier getting up and asking the geoengineering scientist who’d been speaking, in front of the ballroom, whether scientists had considered the possibility that the Earth is warming up because it’s getting closer to the sun. His response was “YES WE’VE CONSIDERED IT.”
If that question asker went home, wrote a whole book full of reasons why the Earth might be moving closer to the sun, posted it online, and it got a bunch of upvotes, I don’t think that means that suddenly the scientist needs to consider all of the arguments more closely, revisit the issue, or that rejecting the ideas gives one an obligation to explain all of one’s reasons.
One way you could address this problem is by choosing one specific argument from this post that you find most compelling, and seeing if you can invite Buck into a debate on that topic, or to explain his thinking on it. I often find that to be productive of good conversation. But your comment read to me as an attempt both to mischaracterize the tone of Buck’s comment and to call into question the degree to which he’s thought about these issues. If you are accusing him of not actually having given these ideas as much thought as he claims, I think you should come right out and say it.
I agree with the text of your comment but think it’d be better if you chose your analogy to be about things that are more contested (rather than clearly false like creationism or AGW denial or whatever).
This avoids the connotation that Buck is clearly right to dismiss such criticisms.
One better analogy that comes to mind is asking Catholic theologians about the implausibility of a virgin birth, but unfortunately, I think religious connotations have their own problems.
I agree that this would have been better, but it was the example that came to mind and I’m going to trust readers to take it as a loose analogy, not a claim about which side is correct in the debate.
Fair! I think coming up with maximally accurate analogies that help people be truth-seeking is hard, and of course the opportunity costs of maximally cooperative writing are high.
I’m sympathetic to the position that it’s bad for me to just post meta-level takes without defending my object-level position.
Thanks for this, and on reading other comments etc, I was probably overly harsh on you for doing so.
I took the time to read through and post where I agree and disagree; however, I understand why people might not have wanted to spend the time, given that the document didn’t really try to engage very hard with the reasons for not implementing these proposals. I feel bad saying that, because the authors clearly put a lot of time and effort into it, but I honestly think it would have been better if the group had chosen a narrower scope and focused on making a persuasive argument for that. And then maybe worked on the next section after that.
But who knows? There seems to be a bit of energy around this post, so maybe something comes out of this regardless.
I think you’re right about this and that my comment was kind of unclearly equivocating between the suggestions that aimed at the community and the suggestions that aimed at orgs. (Though the suggestions aimed at the community also give me a vibe of “please, core EA orgs, start telling people that they should be different in these ways” rather than “here is my argument for why people should be different in these ways”).
I think all your specific points are correct, and I also think you totally miss the point of the post.
You say you thought about these things a lot. Maybe lots of core EAs have thought about these things a lot. But what core EAs have considered or not is completely opaque to us. Not so much because of secrecy, but because opaqueness is the natural state of things. So lots of non-core EAs are frustrated about lots of things. We don’t know how our community is run or why.
On top of that, there are actually consequences for complaining or disagreeing too much. Funders like to give money to people who think like them. Again, this requires no explanation. This is just the natural state of things.
So as non-core EAs, we notice things that seem wrong, and we’re afraid to speak up about them, and it sucks. That’s what this post is about.
And of course it’s naive and shallow and not adding much for anyone who has already thought about this for years. For the authors this is the start of the conversation, because they, and most of the rest of us, were not invited to all the previous conversations like this.
I don’t agree with everything in the post. Lots of the suggestions seem nonsensical in the way you point out. But I agree with the notion of “can we please talk about this”, even just to acknowledge that some of these problems do exist.
It’s much easier to build support for a solution if there is common knowledge that the problem exists. When I started organising in the field of AI Safety, I was focusing on solving problems that weren’t on the map for most people. This caused lots of misunderstandings, which made it harder to get funded.
This is correct. But projects have happened because there was widespread knowledge of a specific problem, and then someone else decided to design their own project to solve that problem.
This is why it is valuable to have an open conversation to create a shared understanding of which current problems EA should focus on. This includes discussions about cause prioritisation, but also discussions about meta/community issues.
In that spirit I want to point out that it seems to me that core EAs have no understanding of what things look like (what information is available, etc.) from a non-core EA perspective, and vice versa.
I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact. I disagree with this kind of view.
Obviously it’s the responsibility of EAs and EA orgs to actively seek out ways that they could do things better. But I’m just noting that it seems unlikely to me that this post will actually persuade EA orgs to do things differently, and so if the authors had hoped to have impact via that route, they should try another plan instead.
If you and other core org EAs have thoroughly considered many of the issues the post raises, why isn’t there more reasoning transparency on this? Besides being a good practice in general (especially when the topic is how the EA ecosystem fundamentally operates), it would make it a lot easier for the authors and others on the forum to deliver more constructive critiques that target cruxes.
As far as I know, the cruxes of core org EAs are nowhere to be found for many of the topics this post covers.
I think Larks’ response is reasonably close to my object-level position.
My quick summary of a big part of my disagreement: a major theme of this post suggests that various powerful EAs hand over a bunch of power to people who disagree with them. The advantage of doing that is that it mitigates various echo chamber failure modes. The disadvantage of doing that is that now, people who you disagree with have a lot of your resources, and they might do stuff that you disagree with. For example, consider the proposal “OpenPhil should diversify its grantmaking by giving half its money to a randomly chosen Frenchman”. This probably reduces echo chamber problems in EA, but it also seems to me like a terrible idea.
I don’t think the post properly engages with the question “how ought various powerful people weigh the pros and cons of transferring their power to people they disagree with”. I think this question is very important, and I think about it a fair bit, but I think that this post is a pretty shallow discussion of it that doesn’t contribute much novel insight.
I encourage people to write posts on the topic of “how ought various powerful people weigh the pros and cons of transferring their power to people they disagree with”; perhaps such posts could look at historical examples, or mechanisms via which powerful people can get the echo-chamber-reduction effects without the random-people-now-use-your-resources-to-do-their-random-goals effect.
Some things that I might come to regret about my comment:
I think it’s plausible that it’s bad for me to refer to disagreeing with arguments without explaining why.
I’ve realized that some commenters might not have seen these arguments before, which makes me think that there is more of an opportunity for me to explain why I think these arguments are wrong. (EDIT I’m less worried about this now, because other commenters have weighed in making most of the object-level criticisms I would have made.)
I was not very transparent about my goal with this comment, which is generally a bad sign. My main goal was to argue that posts like this are a kind of unhealthy way of engaging with EA, and that readers should be more inclined to respond with “so why aren’t you doing anything” when they read such criticisms.
Fwiw I think an acknowledgement of soft power was missing.
I strongly disagree with this response, and find it bizarre.
I think assessing this post according to a limited number of possible theories of change is incorrect, as influence is often diffuse and hard to predict or measure.
I agree with freedomandutility’s description of this as an “isolated demand for [something like] rigor”.
There seem to have been a lot of responses to your comment, but there are some points which I don’t see being addressed yet.
I would be very interested in seeing another similarly detailed response from an ‘EA leader’ whose work focuses on community building/community health. (Put at the top as this got quite long; rationale below, but first:)
I think at least one goal of the post is to get community input (as I’ve seen in many previous forum posts) to determine the best suggestions, without claiming to have all the answers. Quoted from the original post (intro to ‘Suggested Reforms’):
This suggests to me that instead of trying to convince the ‘EA leadership’ of any one particular change, they want input from the rest of the community.
From a community building perspective, I can (epistemic status: brainstorming, but plausible) see that a comment like yours can be harmful, and can create a more negative perception of EA than the post itself. Perhaps new/newer/potential (and even existing) EAs will read the original post, and they may skim this post/read parts/even read the comments first (I don’t think very many people will have read all 84 minutes, and the comments on long posts sometimes point to key/interesting sections). A top comment: yours, highly upvoted.
Impressions that they can potentially draw from your response (one or more of the below):
There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster)
If the authors of this post are asking for community opinion on which changes are good after giving their concerns, the top comment (for a while at least) criticising this for a lack of theory of change suggests a low regard among the EA leadership for the opinions of the EA community overall (regardless of agreement with any specific element of the original post)
Unless I am very high up and in the core EA group, I am unlikely to be listened to
While EA is open to criticism in theory, it is not open to changing based on criticism as the leadership has already reasoned about this as much as they are going to
I am not saying that any of the above is true, or that it is absolute (i.e. that someone would be led to believe one of these things absolutely rather than it being on a sliding scale). But if I were new to EA, it is plausible that this comment would be far more likely to put me off continuing to engage than anything written in the actual post itself. Perhaps you can see how it may be perceived this way, even if it was not intended this way?
I also think some of the suggestions are likely more relevant to, and require more thought from, people actively working in e.g. community building strategy than someone who is CTO of an AI alignment research organisation (from your profile), or in a technical role more generally, at least in terms of the considerations required to have the greatest impact in their work.
Thanks for your sincere reply (I’m not trying to say other people aren’t sincere, I just particularly felt like mentioning it here).
Here are my thoughts on the takeaways you thought people might have.
As I said in my comment, I think that it’s true that the actions of EA-branded orgs are largely influenced by a relatively small number of people who consider each other allies and (in many cases) friends. (Though these people don’t necessarily get along or agree on things—for example, I think William MacAskill is a well-intentioned guy but I disagree with him a bunch on important questions about the future and various short-term strategy things.)
Not speaking for anyone else here, but it’s totally true that I have a pretty low regard for the quality of the average EA Forum comment/post, and don’t think of the EA Forum as a place where I go to hear good ideas about ways EA could be different (though occasionally people post good content here).
For whatever it’s worth, in my experience, people who show up in EA and start making high-quality contributions quickly get a reputation among people I know for having useful things to say, even if they don’t have any social connection.
I gave a talk yesterday where someone I don’t know made some objections to an argument I made, and I provisionally changed my mind about that argument based on their objections.
I think “criticism” is too broad a category here. I think it’s helpful to provide novel arguments or evidence. I also think it’s helpful to provide overall high-level arguments where no part of the argument is novel, but it’s convenient to have all the pieces in one place (e.g. Katja Grace on slowing down AI). I (perhaps foolishly) check the EA Forum and read/skim potentially relevant/interesting articles, so it’s pretty likely that I end up reading your stuff and thinking about it at least a little.
You’re right that my actions are less influenced by my opinions on the topics raised in this post than community building people’s are (though questions about e.g. how much to value external experts are relevant to me). On the other hand, I am a stakeholder in EA culture, because capacity for object-level work is the motivation for community building.
I think that’s particularly true of some of the calls for democratization. The Cynic’s Golden Rule (“He who has the gold, makes the rules”) has substantial truth both in the EA world and in almost all charitable movements. In the end, if the people with the money aren’t happy with the idea of random EAs spending their money, it just isn’t going to happen. And to the extent there is a hint of cutting off or rejecting donors, that would lead to a much smaller EA to the extent it was followed. In actuality, it wouldn’t be—someone is going to take the donor’s money in almost all cases, and there’s no EA High Council to somehow cast the rebel grantee from the movement.
Speaking as a moderate reform advocate, the flipside of this is that the EA community has to acknowledge the origin of power and not assume that the ecosystem is somehow immune to the Cynic’s Golden Rule. The people with power and influence in 2023 may (or may not) be wise and virtuous, but they are not in power (directly) because they are wise and virtuous. They have power and influence in large part because it has been granted to them by Moskovitz and Tuna (or their delegates, or by others with power to move funding and other resources). If Moskovitz and Tuna decided to fire Open Phil tomorrow and make all their spending decisions based on my personal recommendations, I would become immensely powerful and influential within EA irrespective of how wise and virtuous I may be. (If they are reading, this would be a terrible idea!!)