I appreciate you taking the effort to write this. However, like other commentators I feel that if these proposals were implemented, EA would just become another left wing social movement, essentially a standard form of left wing environmentalism, which is already a live option for people with this type of outlook and which gets far more resources than EA ever has. I also think many of the proposals here have been rejected for good reason, and that some of the key arguments are weak.
You begin by citing the Cowen quote that “EAs couldn’t see the existential risk to FTX even though they focus on existential risk”. I think this is one of the more daft points made by a serious person on the FTX crash. Although the words ‘existential risk’ are the same here, they have completely different meanings, one being about the extinction of all humanity or things roughly as bad as that, and the other being about risks to a particular organisation. The problem with FTX is that there wasn’t enough attention to existential risks to FTX and the implications this would have for EA. In contrast, EAs have put umpteen person hours into assessing existential risks to humanity and the epistemic standards used to do that are completely different to those used to assess FTX.
You cite research purporting to show that diversity of some form is good for collective epistemics and general performance. I haven't read the book that you cite, but I have looked into some of this literature, and as one might expect for a topic that is so politically charged, a lot of the literature is not good, and some of the literature actually points in the opposite direction, even though it is career suicide to criticise diversity, and there are likely personal costs even for me discussing counter-arguments here. For example, this paper suggests that group performance is mainly determined by the individual intelligence of the group members, not by things like gender diversity. This paper lists various costs of team diversity that are bad for collective dynamics.

You say that diversity "essentially along all dimensions" is good for epistemics. This is the sort of claim that sounds good, but also seems to be clearly false. I seldom see people who make this argument suggest that we need more Trump supporters, religious fundamentalists, homophobes or people without formal education in order to improve our performance as a community. These are all huge chunks of the national/global community but also massively underrepresented in EA. There are lots of communities that are much more diverse than EA but which also seem to have far worse epistemics than EA. Examples include Catholicism, Trumpism, environmentalism, support of Bolsonaro/Modi etc.
Relatedly, I think value alignment is very important. I have worked in organisations with a mix of EA and non-EA people and it definitely made things much harder than if everyone were aligned, holding other things equal. On one level, it is not surprising that a movement trying to achieve something would need to agree not just at a very abstract level, but also about many concrete things about the world. If I think that stopping AI progress is good and you think it is bad, it is going to be much harder (though not impossible, per moral trade) for us to achieve things in the world. The same goes for speeding up progress in virus synthesis. The 80,000 Hours articles on goal-directed groups are very good on this.
I don't agree that EA is hostile to criticism. In fact it seems unusually open to criticism and to rational discussion of ideas, rather than dismissing them on the basis of vibe/mood affiliation/political amenability. Aside from the controversial Cremer and Kemp case (who didn't publish pseudonymously), what are the major critiques that have been presented pseudonymously or have caused serious personal consequences for the critics? By your definition, I think my critique of GiveWell counts as deep, but I have been rewarded for this because people thought the arguments were good. To stress: the claim Hauke and I made was that most of the money EA has spent has been highly suboptimal.
You say "For instance, (intellectual) ability is implicitly assumed within much of EA to be a single variable[32], which is simply higher or lower for different people." This isn't just an assumption of EA, but a central finding of psychological science: things that are usually classed as intellectual abilities are strongly correlated, which is the g factor. E.g. maths ability is correlated with social science ability, English literature ability, and so on.
I just don't think it is true that we align well with the interests of tech billionaires. We've managed to persuade two billionaires of EA, and one believed in EA before he became a billionaire. The vast majority of billionaires evidently don't buy it and go off and do their own thing, mainly donating to things that sound good in their country, to climate change, or not donating at all. Longtermist EAs would like lots more money to be spent on AI alignment, on slowing down AI progress, on slowing down progress in virology or increasing spending on counter-measures, and on preventing major wars. I don't see how any of these things promise to benefit tech founders as a particular constituency in any meaningful way. That being said, I agree that there is a problem with rich people becoming spokespeople for the community or overly determining what gets done, and we need far better systems to protect against that in future. E.g. FTX suddenly deciding to do all this political stuff was a big break from previous wisdom and wasn't questioned enough.
On a personal note, I get that I am a non-expert in climate, and so am wide open to criticism as an interloper (though I have published a paper on climate change). But then it is also true that getting climate people to think in EA terms is very difficult. Also, the view I recently outlined is basically in line with all climate economics. In that sense the view I hold, and which I think is widely held in longtermist EA, is in line with one expert consensus. Indeed, it is striking that this is the one group that actually tries to quantify the aggregate costs of climate change. I also don't think there are any areas where I disagree with the line taken by the IPCC, which is supposed to express the expert consensus on climate. The view that 4ºC is going to kill everyone is one held by some activists and a small number of scientists. Either way, we need to explain why we are ignoring all the climate economists and listening to Rockstrom/Lenton instead. On planetary boundaries, as far as I know, I am the only EA to have criticised planetary boundaries, and I don't dismiss the framework in passing but in fact criticise it at considerable length. The reviewer I had for that section is a prof and strongly agreed with me.
Differential tech progress has been subject to peer review. The Bostrom articles on it are peer reviewed.
The implications of democratising EA are mindboggling. Suppose that Open Phil’s spending decisions are made democratically by EAs. This would put EAs in charge of ~$10bn. We’d then need to decide who counts as an EA. Because so much money would be on the table, lots of people who we wouldn’t class as EAs would want a say, and it would be undemocratic to exclude them (I assume). So, the ‘EA franchise’ would expand to anyone who wants a say (?) I don’t know where the money would end up after all this, but it’s fair to say that money spent on reducing engineered pandemics, AI and farm animal welfare would fall from the current pitiful sum to close to zero.
You say that worker self-management has been proven to be better for mission-oriented work than top-down rule. This is clearly false. There is a tiny pocket of worker cooperatives (e.g. in the Basque region) who have been fairly successful. But almost all companies are run oligarchically in a top-down fashion by boards or leadership groups.
Overall, we need to learn hard lessons from the FTX debacle. But thus far, the collapse has mainly been used to argue for things that are completely unrelated to FTX, and mainly to advance an agenda that has been disfavoured in EA so far, and with good reason. For Cowen, this was neoliberal progress; here it is left wing environmentalism.
I agree with most of your points, but I strongly disagree with number 1, and I am surprised to have heard over time that so many people thought this point was daft.
I don't disagree that "existential risk" is being employed in a very different sense in the two instances, so we agree there, but the broader point, which I think is valid, is this:
There is a certain hubris in claiming you are going to “build a flourishing future” and “support ambitious projects to improve humanity’s long-term prospects” (as the FFF did on its website) only to not exist 6 months later and for reasons of fraud to boot.
Of course, the people who sank untold hours into existential risk research aren't to blame, and it isn't an argument against x-risk/longtermist work, but it does show that EA, as a community, missed something dire and critical, and importantly something that couldn't be closer to home for the community. And in my opinion that does shed light on how successful one should expect the longer term endeavours of the community to be.
Scott Alexander, from “If The Media Reported On Other Things Like It Does Effective Altruism”:
The difference in that example is that Scholtz is one person, so the analogy doesn't hold. EA is a movement made up of many, many people with different strengths, roles, motives, etc, and CERTAINLY there are some people in the movement whose job it was (or at a minimum there are some people who thought long and hard about how) to mitigate PR/longterm risks to the movement.
I picture the criticism more like EA being a pyramid set in the ground, but upside down. At the top of the upside-down pyramid, where things are wide, there are people working to ensure the longterm future goes well on the object level, and this level would perhaps include Scholtz in your example.
At the bottom of the pyramid things come to a point, and that represents people on the lookout for x-risks to the endeavour itself, which is so small that it turned out to be the reason why things toppled, at least with respect to FTX. And that was indeed a problem. It says nothing about the value of doing x-risk work.
I think that is a charitable interpretation of Cowen’s statement: “Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.”
I think charitably, he isn’t saying that any given x-risk researcher should have seen an x-risk to the FTX project coming. Do you?
I think I just don’t agree with your charitable reading. The very next paragraph makes it very clear that Cowen means this to suggest that we should think less well of actual existential risk research:
I think that’s plain wrong, and Cowen actually is doing the cheap rhetorical trick of “existential risk in one context equals existential risk in another context”. I like Cowen normally, but IMO Scott’s parody is dead on.
“EA didn’t spot the risk of FTX and so they need better PR/management/whatever” would be fine, but I don’t think he was saying that.
Yeah, I suppose we just disagree then. I think such a big error and hit to the community should downgrade any rational person's belief in the output of what EA has to offer, and also downgrade their trust that EA is getting it right.
Another side point: many EAs like Cowen and think he is right most of the time. I think it is suspicious that when Cowen says something negative about EA, he gets labeled with words like "daft".
Hi Devon, FWIW I agree with John Halstead and Michael PJ re John’s point 1.
If you’re open to considering this question further, you may be interested in knowing my reasoning (note that I arrived at this opinion independently of John and Michael), which I share below.
Last November I commented on Tyler Cowen’s post to explain why I disagreed with his point:
You made a further point, Devon, that I want to respond to as well:
I agree with you here. However, I think the hubris was SBF’s hubris, not EAs’ or longtermists-in-general’s hubris.
I’d even go further to say that it wasn’t the Future Fund team’s hubris.
As John commented below, “EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees.”
But that’s a critique of the Future Fund’s (and others’) ability to think of all the right top priorities for their small team in their first 6 months (or however long it was), not a sign that the Future Fund had hubris.
Note, however, that I don't even consider the Future Fund team's failure to think of this to be a very big critique of them. Why? Because anyone (in the EA community or otherwise) could have entered The Future Fund's Project Ideas Competition and suggested the project of investigating the integrity of SBF and his businesses, and the risk that they might suddenly collapse, in order to ensure the stability of the funding source for the benefit of future Future Fund projects, and to protect EA's and longtermists' reputation from risks arising from associating with SBF should SBF become involved in a scandal. (Even Tyler Cowen could have done so and won some easy money.) But no one did (as far as I'm aware). So given that, I conclude that it was a hard risk to spot so early on, and consequently I don't fault the Future Fund team all that much for failing to spot this in their first 6 months.
There is a lesson to be learned from peoples’ failure to spot the risk, but that lesson is not that longtermists lack the ability to forecast existential risks well, or even that they lack the ability to build a flourishing future.
I disagreed with the Scott analogy at first, but after thinking it through, it changed my mind. Simply make the following modification:
"Leading UN climatologists are in serious condition after all being wounded in hurricane Smithfield, which also killed as many people as were harmed by the FTX scandal. These climatologists claim that their models can predict the temperature of the Earth from now until 2200 - but they couldn't even predict a hurricane in their own neighborhood. Why should we trust climatologists to protect us from some future catastrophe, when they can't even protect themselves or those nearby in the present?"
Now we are talking about a group rather than one person, and what they missed is much more directly within their domain expertise. I.e., much like with the FTX Future Fund team's domain expertise on EA money, it feels like something they shouldn't have been able to miss.
Would you say any rational person should downgrade their opinion of the climatology community and any output it has to offer, and downgrade their trust that it is getting its 2200 climate change models right?
I shared the modification with an EA who, like me, at first agreed with Cowen. Their response was something like: "OK, so the climatologists not seeing the existential neartermist threat to themselves appears to still be a serious failure (people they know died!) on their part that needs to be addressed, but I agree it would be a mistake on my part to downgrade my confidence in their 2200 climate change model because of it."
However, we conceded that there is a catch: if the climatology community persistently finds their top UN climatologists wounded in hurricanes to the point that they can't work on their models, then rationally we ought to update towards their productive output being lower than expected, because they seem to have this neartermist blind spot regarding their own wellbeing and that of those nearby. This concession comes with asterisks, though. If we assume, for the sake of argument, that climatology research benefits greatly from climatologists getting close to hurricanes, then we should expect climatologists, as a group, to suffer more hurricane wounds. In that case we should still update when climatologists get hurricane wounds, but not as strongly.
Ultimately I updated from agreeing with Cowen to disagreeing with him after thinking this through. I'd be curious whether and where you disagree with this.
Tbh I took the Gell-Mann amnesia interpretation and just concluded that he’s probably being daft more often in areas I don’t know so much about.
This is what Cowen was doing with his original remark.
This feels wrong to me? Gell-Mann amnesia is more about general competency, whereas I thought Cowen was referring specifically to the category of "existential risk" (which I think is a semantics game, but others disagree)?
Cowen is saying that he thinks EA is less generally competent because of not seeing the x-risk to the Future Fund.
Again, if this were true, he would not have specifically phrased it as existential risk (unless maybe he was actively trying to mislead).
Fair enough. The implication is there though.
Imagine a forecaster that you haven’t previously heard of told you that there’s a high probability of a new novel pandemic (“pigeon flu”) next month, and their technical arguments are too complicated for you to follow.[1]
Suppose you want to figure out how much you want to defer to them, and you dug through to find out the following facts:
a) The forecaster previously made consistently and egregiously bad forecasts about monkeypox, covid-19, Ebola, SARS, and 2009 H1N1.
b) The forecaster made several elementary mistakes in a theoretical paper on Bayesian statistics
c) The forecaster has a really bad record at videogames, like bronze tier at League of Legends.
I claim that the general competency argument technically goes through for a), b), and c). However, for a practical answer on deference, a) is much more damning than b) or especially c), as you might expect domain-specific ability on predicting pandemics to be much stronger evidence for whether the prediction of pigeon flu is reasonable than general competence as revealed by mathematical ability/conscientiousness or videogame ability.
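To make the deference point concrete, here is a minimal Bayesian sketch. The likelihood ratios are purely illustrative assumptions of mine (none of these numbers appear in the thread); the only point is that domain-relevant evidence like a) should move the posterior far more than b) or c).

```python
# A toy Bayesian update on "this forecaster is reliable about pandemics".
# All likelihood ratios below are illustrative assumptions, not estimates.

def posterior_reliable(prior: float, likelihood_ratio: float) -> float:
    """Update P(reliable) after one piece of evidence.

    likelihood_ratio = P(evidence | reliable) / P(evidence | unreliable).
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.5  # start agnostic about the forecaster

# Assumed likelihood ratios: how much more (or less) likely each observation
# is if the forecaster really is reliable at pandemic forecasting.
evidence = {
    "a) bad record on past pandemic forecasts": 0.05,  # strongly domain-relevant
    "b) elementary errors in a stats paper":    0.50,  # weak general-competence signal
    "c) bronze tier at League of Legends":      0.90,  # barely relevant
}

for label, lr in evidence.items():
    post = posterior_reliable(prior, lr)
    print(f"{label}: P(reliable) {prior:.2f} -> {post:.2f}")
```

On these made-up numbers, all three observations push the posterior down, so the "general competency" argument does technically go through; but a) drags it from 0.50 to roughly 0.05, while c) barely moves it.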
With a quote like
The natural interpretation to me is that Cowen (and by quoting him, by extension the authors of the post) is trying to say that FF not predicting the FTX fraud and thus “existential risk to FF” is akin to a). That is, a dispositive domain-specific bad forecast that should be indicative of their abilities to predict existential risk more generally. This is akin to how much you should trust someone predicting pigeon flu when they’ve been wrong on past pandemics and pandemic scares.
To me, however, this failure, while significant as evidence of general competency, is more similar to b). It’s embarrassing and evidence of poor competence to make elementary errors in math. Similarly, it’s embarrassing and evidence of poor competence to not successfully consider all the risks to your organization. But using the phrase “existential risk” is just a semantics game tying them together (in the same way that “why would I trust the Bayesian updates in your pigeon flu forecasting when you’ve made elementary math errors in a Bayesian statistics paper” is a bit of a semantics game).
EAs do not to my knowledge claim to be experts on all existential risks, broadly and colloquially defined. Some subset of EAs do claim to be experts on global-scale existential risks like dangerous AI or engineered pandemics, which is a very different proposition.
[1] Or, alternatively, you think their arguments are inside-view correct but you don’t have a good sense of the selection biases involved.
I agree that the focus on competency on existential risk research specifically is misplaced. But I still think the general competency argument goes through. And as I say elsewhere in the thread—tabooing “existential risk” and instead looking at Longtermism, it looks (and is) pretty bad that a flagship org branded as “longtermist” didn’t last a year!
Funnily enough, the "pigeon flu" example may cease to be a hypothetical. Pretty soon, we may need to look at the track record of various agencies and individuals to assess their predictions on H5N1.
I agree that is the other way out of the puzzle. I wonder whom to even trust if everyone is susceptible to this problem...
Thank you! I remember hearing about Bayesian updates, but rationalizations can wipe those away quickly. From the perspective of Popper, EAs should try "taking the hypothesis that EA..." and then try proving themselves wrong, instead of using a handful of data points to reach their preferred, statistically irrelevant conclusion, all the while feeling confident.
I don't think the parody works in its current form. The climate scientist claims expertise on climate science x-risk through being a climate-science expert, not through being an expert on x-risk more generally. So him being wrong on other x-risks doesn't update my assessment of his views on climate x-risk that much. In contrast, if the climate scientist's organization built its headquarters in a flood plain and didn't buy insurance, the resulting flood which destroyed the HQ would reduce my confidence in their ability to assess climate x-risk, because they have shown themselves incompetent at least once at assessing climate risks close to them.
In contrast, EA (and the FF in particular) asserts/asserted expertise in x-risk more generally. For someone claiming that kind of expertise, the events that would cause me to downgrade are different than for a subject-matter expert. Missing an x-risk under one's nose would count. While I don't think "existential risk in one context equals existential risk in another context," I don't think past performance has no bearing on estimates of future performance either.
I think assessing the extent to which the "miss" on FTX should cause a reasonable observer to downgrade EA's x-risk credentials has been made difficult by the silence-on-advice-of-legal-counsel approach. To the extent that the possibility of FTX drying up wasn't even on the radar of top leadership people, that would be a very serious downgrade for me. (Actually, it would be a significant downgrade in general confidence for any similarly-sized movement that lacked awareness that promised billions from a three-year-old crypto company had a good chance of not materializing.) A failure to specifically recognize the risk of very shady business practices (even if not Madoff 2.0) would be a significant demerit in light of the well-known history of such things in the crypto space. To the extent that there was clear awareness and the probabilities were just wrong in hindsight, that is only a minor demerit for me.
To perhaps make it clearer: I think EA is trying to be expert in “existential risks to humanity”, and that really does have almost no overlap with “existential risks to individual firms or organizations”.
Or to sharpen the parody: if it was a climate-risk org that had got in trouble because it was funded by FTX, would that downgrade your expectation of their ability to assess climate risks?
But on mainstream EA assumptions about x-risk, the failure of the Future Fund materially increased existential risk to humanity. You’d need to find a similar event that materially changed the risk of catastrophic climate change for the analogy to potentially hold—the death of a single researcher or the loss of a non-critical funding source for climate-mitigation efforts doesn’t work for me.
More generally, I think it's probably reasonable to downgrade for missing FTX on "general competence" and "ability to predict and manage risk" as well. I think both of those attributes are correlated with "ability to predict and manage existential risk," the latter more so than the former. Given that existential-risk expertise is a difficult attribute to measure, it's reasonable to downgrade when downgrading one's assessment of more measurable attributes. Although that effect would also apply to the climate-mitigation movement if it suffered an FTX-level setback event involving insiders, the justification for listening to climate scientists isn't nearly as heavily loaded on "ability to predict and manage existential risk." It's primarily loaded on domain-specific expertise in climate science, and missing FTX wouldn't make me think materially less of the relevant people as scientists.
To be clear, I’m not endorsing the narrative that EA is near-useless on x-risk because it missed FTX. My own assumption is that people recognized a risk that FTX funding wouldn’t come through, and that the leaders recognized a risk that SBF was doing shady stuff (cf. the leaked leader chat) although perhaps not a Madoff 2.0. I think those risks were likely underestimated, which leads me to a downgrade but not a massive one.
Alternatively, one could have said something like
This, too, would not have been a good argument.
Scott’s analogy is correct, in that the problem with the criticism is that the thing someone failed to predict was on a different topic. It’s not reasonable to conclude that a climate scientist is bad at predicting the climate because they are bad at predicting mass shootings. If it were a thousand climate scientists predicting the climate a hundred years from now, and they all died in an earthquake yesterday, it’s not reasonable to conclude that their climate models were wrong because they failed to predict something outside the scope of their models.
This. We can taboo the words “existential risk” and focus instead on Longtermism. It’s damning that the largest philanthropy focused on Longtermism—the very long term future of humanity—didn’t even last a year. A necessary part of any organisation focused on the long term is a security mindset. It seems that this was lacking in the Future Fund. In particular, nothing was done to secure funding.
Perhaps, you know, they were focused more on the long term and not the short term?
You can’t build a temple that lasts 1000 years without first ensuring that it’s on solid ground and has secure foundations. (Or even a house that lasts 10 years for that matter.)
Are we trying to build a temple?
My understanding of the thinking behind most longtermist causes and interventions is that they are mostly about slightly decreasing the probability of a catastrophic event; or to put it differently, the idea is that there is a high probability that the intervention does nothing and a small probability that it does something incredibly important.
From that perspective I'm not sure that institutional longevity is really a priority, and I certainly don't think that we can infer that longtermists aren't indeed focused on the long term.
Longtermism is wider than catastrophic risk reduction—e.g. it also encompasses “trajectory changes”. It’s about building a flourishing future over the very long term. (Personally I think x-risk from AGI is a short-term issue and should be prioritised, and Longtermism hasn’t done great as a brand so far.)
Hi John,
Thank you for your response, and more generally thank you for having been consistently willing to engage with criticism on the forum.
We’re going to respond to your points in the same format that you made them in for ease of comparison.
Should EA be distinctive for its own sake or should it seek to be as good as possible? If EA became more structurally similar to e.g. some environmentalist movements in some ways, e.g. democratic decision-making, would that actually be a bad thing in itself? What about standard-practice transparency measures? To what extent would you prefer EA to be suboptimal in exchange for retaining aspects that would otherwise make it distinctive?
In any case, we’re honestly a little unsure how you reached the conclusion that our reforms would lead EA to be “basically the same as standard forms of left-wing environmentalism”, and would be interested in you spelling this out a bit. We assume there are aspects of EA you value beyond what we have criticised, such as an obsessive focus on impact, our commitment to cause-prioritisation, and our willingness to quantify (which is often a good thing, as we say in the post), etc., all of which are frequently lacking in left-wing environmentalism.
But why, as you say, was so little attention paid to the risk FTX posed? One of the points we make in the post is that the artificial separation of individual “risks” like this is frequently counterproductive. A simple back-casting or systems-mapping exercise (foresight/systems-theoretical techniques) would easily have revealed EA’s significant exposure and vulnerability (disaster risk concepts) to a potential FTX crash. The overall level of x-risk is presumably tied to how much research it gets, and the FTX crash clearly reduced the amount of research that will get done on x-risk any time soon.
These things are related, and must be treated as such.
Complex patterns of causation like this are just the kind of thing we are advocating for exploring, and something you have confidently dismissed in the recent past, e.g. in the comments on your recent climate post.
We agree that the literature does not all point in one direction; we cited the two sources we cited because they act as recent summaries of the state of the literature as a whole, which includes findings in favour of the positive impacts of e.g. gender and age diversity.
We concede that "essentially all dimensions" was an overstatement: sloppy writing on our part, of which we are sure there is more in the manifesto, for which we apologise. Thank you for highlighting this.
On another note, equating “criticising diversity” in any form with “career suicide” seems like something of an overstatement.
We agree that there is a balance to be struck, and state this in the post. The issue is that EA uses seemingly neutral terms to hide orthodoxy, is far too far towards one end of the value-alignment spectrum, and actively excludes many valuable people and projects because they do not conform to said orthodoxy.
This is particularly visible in existential risk, where EA almost exclusively funds TUA-aligned projects despite the TUA’s surprisingly poor academic foundations (inappropriate usage of forecasting techniques, implicit commitment to outdated or poorly-supported theoretical frameworks, phil-of-sci considerations about methodological pluralism, etc.) as well as the generally perplexed and unenthusiastic reception it gets in non-EA Existential Risk Studies.
Unfortunately, you are not in the best position to judge whether EA is hostile to criticism. You are a highly orthodoxy-friendly researcher (this is not a criticism of you or your work, by the way!) at a core EA organisation with significant name-recognition and personal influence, and your critiques are naturally going to be more acceptable.
We concede that we may have neglected the role of the seniority of the author in the definition of “deep” critique: it surely plays a significant role, if only due to the hierarchy/deference factors we describe. On examples of chilled works, the very point we are making is the presence of the chilling effect: critiques are not published *because* of the chilling effect, so of course there are few examples to point to.
If you want one example in addition to Democratising Risk, consider our post? The comments also hold several examples of people who did not speak up on particular issues because they feared losing access to EA funding and spaces.
We are not arguing that general intelligence is completely nonexistent, but that the conception commonplace within EA is highly oversimplified: to say that factors in intelligence are correlated does not mean that everything can be boiled down to a single number. There are robust critiques of the g concept that are growing over time (e.g. here) as well as factors that are typically neglected (see the Emotional Intelligence paper we cited). Hence, calling monodimensional intelligence a "central finding of psychological science", implying it to be some kind of consensus position, is somewhat courageous.
In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise.
Our post also mentions other issues with intelligence-based deference: how being smart doesn't mean that someone should be deferred to on all topics, etc.
We are not arguing that every aspect of EA thought is determined by the preferences of EA donors, so the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal.
We concede that we may have neglected cultural factors: in addition to the “hard” money/power factors, there is also the “softer” fact that much of EA culture comes from upper-middle-class Bay Area tech culture, which indirectly causes EA to support things that are popular within that community, which naturally align with the interests of tech companies.*
We are glad that you agree on the spokesperson point: we were very concerned to see e.g. 80kH giving uncritical positive coverage to the crypto industry given the many harms it was already known to be doing prior to the FTX crash, and it is encouraging to hear signals that this sort of thing may be less common going forward.
We agree that getting climate people to think in EA terms can be difficult sometimes, but that is not necessarily a flaw on their part: they may just have different axioms to us. In other cases, we agree that there are serious problems (which we have also struggled with at times) but it is worth reminding ourselves that, as we note in the post, we too can be rather resistant to the inputs of domain-experts. Some of us, in particular, considered leaving EA at one point because it was so (at times, frustratingly) difficult to get other EAs to listen to us when we talked about our own areas of expertise. We’re not perfect either is all we’re saying.
Whilst we agree with you that we shouldn't only take Rockstrom etc. as "the experts", and do applaud your analysis that existential catastrophe from climate change is unlikely, we don't believe your analysis is particularly well-suited to the extremes we would expect for GCR/x-risk scenarios. It is precisely when such models fall down, when civilisational resilience is less than anticipated, when cascades like those in Richards et al. 2021 occur, etc., that the catastrophes we are worried about are most likely to happen. X-risk studies relatively low-probability, unprecedented scenarios that are captured badly by economic models etc. (as with TAI being captured badly by the markets), and we feel your analysis demands levels of likelihood and confidence from climate x-risk that are (rightfully, we think) not demanded of e.g. AI or biorisk.
We should expect IPCC consensus not to capture x-risk concerns, because (hopefully) the probabilities are low enough for it not to be something they majorly consider, and, as Climate Endgame points out, there has thus far not been lots of x-risk research on climate change.
Otherwise, there have been notable criticisms of much of the climate economics field, especially its more optimistic end (e.g. this paper), but we concur that it is not something that needs to be debated here.
We did not say that differential technological development had not been subjected to peer review; we said that it has not been subjected to "significant amounts of rigorous peer review and academic discussion", which is true; apologies if it implied something else. This may not be true forever: we are very excited about the discussion of the current Sandbrink et al 2022 pre-print, for instance. All we were noting here is that important concepts in EA are often in their academic infancy (as you might expect from a movement with new-ish concepts) and thus often haven't been put to the level of academic scrutiny that is often claimed internally.
You assume incorrectly, and apologies if this is also an issue with our communication. We never advocated for opening up the vote to anyone who asked, so fears in this vein are fortunately unsupported. We agree that defining “who gets a vote” is a major crux here, but we suggest that it is a question that we should try to answer rather than using it as justification for dismissing the entire concept of democratisation. In fact, it seems like something that might be suitable for consensus-building tools, e.g. pol.is.
Committing to and fulfilling the Giving Pledge for a certain length of time, working at an EA org, doing community-building work, donating a certain amount/fraction of your income, active participation at an EAG, as well as many others that EAs could think of if we put some serious thought into the problem as a community, are all factors that could be combined to define some sort of boundary.
Given a somewhat costly signal of alignment it becomes unlikely that someone would go “deep cover” in EA in order to have a very small chance of being randomly selected to become one among multiple people in a sortition assembly deliberating on broad strategic questions about the allocation of a certain proportion of one EA-related fund or another.
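To give a sense of scale, here is a rough back-of-the-envelope sketch; the electorate size, assembly size, and frequency are all hypothetical numbers chosen for illustration, not figures from the post.

```python
# Rough illustration of how unattractive "deep cover" infiltration would be
# under sortition. All numbers are hypothetical assumptions for illustration.

electorate_size = 10_000     # assumed number of people meeting the costly bar
assembly_size = 30           # assumed size of one sortition assembly
assemblies_per_year = 2      # assumed number of assemblies convened per year

p_one_assembly = assembly_size / electorate_size
p_any_in_a_year = 1 - (1 - p_one_assembly) ** assemblies_per_year

print(f"P(selected for a given assembly):   {p_one_assembly:.2%}")   # 0.30%
print(f"P(selected at least once per year): {p_any_in_a_year:.2%}")  # ~0.60%
```

Even if selected, the infiltrator would be one voice among thirty deliberating on broad strategic questions, so the expected influence gained per unit of costly signalling is tiny.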
We are puzzled as to how you took "collaborative, mission-oriented work" to refer exclusively to for-profit corporations. Naturally, e.g. Walmart could never function as a cooperative, because Walmart's business model relies on its ability to exploit and underpay its workers, which would not be possible if those workers ran the organisation. There are indeed corporations (most famously Mondragon) that function on co-operative lines, as well as the Free Open-Source Software movement, Wikipedia, and many other examples.
Of most obvious relevance, however, are social movements like EA. If one wants a movement to reliably and collaboratively push for certain types of socially beneficial changes in certain ways and avoid becoming a self-perpetuating bureaucracy, it should be run collaboratively by those pushing for those changes in those certain ways, and it should avoid cultivating a managerial elite – cf. the Iron Law of Institutions we mentioned, and more substantively the history of social movements; essentially every Leninist political party springs to mind.
As we say in the post, this was overwhelmingly written before the FTX crash, and the problems we describe existed long before it. The FTX case merely provides an excellent example of some of the things we were concerned about, and for many people shattered the perhaps idealistic view of EA that stopped so many of the problems we describe from being highlighted earlier.
Finally, we are not sure why you are so keen to repeatedly apply the term “left wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it.
* We actually touch on it a little: the mention of the Californian Ideology, which we recommend everyone in EA reads.
Thanks for the detailed response.
I agree that we don’t want EA to be distinctive just for the sake of it. My view is that many of the elements of EA that make it distinctive have good reasons behind them. I agree that some changes in governance of EA orgs, moving more in the direction of standard organisational governance, would be good, though probably I think they would be quite different to what you propose and certainly wouldn’t be ‘democratic’ in any meaningful sense.
I don’t have much to add to my first point and to the discussion below my comment by Michael PJ. Boiled down, I think the point that Cowen makes stripped of the rhetoric is just that EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees with. It simply has no bearing on whether EAs are assessing existential risk correctly, and enormous equivocation on the word ‘existential risk’ doesn’t change that fact.
Since you don't want diversity essentially along all dimensions, what sort of diversity would you like? You don't want Trump supporters; do you want more Marxists? You apparently don't want more right wingers even though most EAs already lean left. Am I right in thinking that you want diversity only insofar as it makes EA more left wing? What forms of right wing representation would you like to increase?
The problem you highlight here is not value alignment as such but value alignment on what you think are the wrong focus areas. Your argument implies that value alignment on non-TUA things would be good. Correspondingly, if what you call ‘TUA’ (which I think is a bit of a silly label—how is it techno-utopian to think we’re all going to be killed by technology?) is actually good, then value alignment on it seems good.
You argued in your post that people often have to publish pseudonymously for fear of censure or loss of funding and the examples you have given are (1) your own post, and (2) a forum post on conflicts of interest. It’s somewhat self-fulfilling to publish something pseudonymously and then use that as an argument that people have to publish things pseudonymously. I don’t think it was rational for you to publish the post pseudonymously—I don’t think you will face censure if you present rational arguments, and you will have to tell people what you actually think about the world eventually anyway. (btw I’m not a researcher at a core EA org any more.)
I don't think the seniority argument works here. A couple of examples spring to mind. Leopold Aschenbrenner wrote a critique of EA views on economic growth, for which he was richly rewarded despite being a teenager (or whatever). The recent post about AI timelines and interest rates got a lot of support, even though it criticises a lot of EA research on timelines. I hadn't heard of any of the authors of the interest rate piece before.
The main example you give is the reception to the Cremer and Kemp piece, but I haven't seen any evidence that they did actually get the reception they claimed.
I’m not sure whether intelligence can be boiled down to a single number if this claim is interpreted in the most extreme way. But at least the single number of the g factor conveys a lot of information about how intelligent people are and explains about 40-50% of the variation in individual performance on any given cognitive task, a large correlation for psychological science! This widely cited recent review states “There is new research on the psychometric structure of intelligence. The g factor from different test batteries ranks people in the same way. There is still debate about the number of levels at which the variations in intelligence is best described. There is still little empirical support for an account of intelligence differences that does not include g.”
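As a quick arithmetic aside (my own working, not the commenter's): a variance share of 40-50% corresponds to a correlation of roughly 0.63-0.71, since for a single predictor the correlation is the square root of the variance explained:

$$ r = \sqrt{R^2}, \qquad \sqrt{0.40} \approx 0.63, \qquad \sqrt{0.50} \approx 0.71. $$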
“In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise.” I don’t think this gambit is open to you—your post is so wide ranging that I think it unlikely that you all have expertise in all the topics covered in the post, ten authors notwithstanding.
Of course, there are more things to life and to performance at work than intelligence.
As I mentioned in my first comment, it's not true that the things that EAs are interested in are especially popular among tech types, nor are they aligned with the interests of tech types. The vast majority of tech philanthropists are not EA, and EA cause areas just don't help tech people, at least relative to everyone else in the world. In fact, I suspect the majority view among EAs is that progress in virology and AI should be slowed down if not stopped. This is actively bad for the interests of people invested in AI companies and biotech. "the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal." One of the headings in your article is "We align suspiciously well with the interests of tech billionaires (and ourselves)". I don't see how anything you have said here is a good defence against my criticism of that claim.
There are a few things to separate here. One worry is that EAs/I are neglecting the expert consensus on the aggregate costs of climate change: this is emphatically not true. The only models that actually try to quantify the costs of climate change all suggest that income per person will be higher in 2100 despite climate change. From memory, the most pessimistic study, which is a massive outlier (Burke et al), projects a median case of a ~750% increase in income per person by 2100, with a 5th-percentile lower bound of a ~400% increase, on a 5ºC scenario.
A lot of what you say in your response and in your article seems inconsistent: you make a point of saying that EAs ignore the experts, but then dismiss the experts yourself when their views happen to be inconsistent with your preferred opinions. Examples:
Defending postcolonialism in global development
Your explanation of why Walmart makes money vs mainstream economics.
Your dismissal of all climate economics and the IPCC
‘Standpoint theory’ vs analytical philosophy
Your dismissal of Bayesianism, which doesn’t seem to be aware of any of the main arguments for Bayesianism.
Your dismissal of the g factor, which doesn’t seem to be aware of the literature in psychology.
The claim that we need to take on board Kuhnian philosophy of science (Kuhn believed that there has been zero improvement in scientific knowledge over the last 500 years)
Your defence of critical realism
Similarly, Cremer (life science and psychology) and Kemp (international relations) take Ord, MacAskill and Bostrom to task for straying out of their epistemic lane and having poor epistemics, but then go on in the same paper to offer casual ~1 page refutations of (amongst other things) total utilitarianism, longtermism and expected utility theory.
Your discussion of why climate change is a serious catastrophic risk kind of illustrates the point. “For instance, recent work on catastrophic climate risk highlights the key role of cascading effects like societal collapses and resource conflicts. With as many as half of climate tipping points in play at 2.7°C − 3.4°C of warming and several at as low as 1.5°C, large areas of the Earth are likely to face prolonged lethal heat conditions, with innumerable knock-on effects. These could include increased interstate conflict, a far greater number of omnicidal actors, food-system strain or failure triggering societal collapses, and long-term degradation of the biosphere carrying unforeseen long-term damage e.g. through keystone species loss.”
Bressler et al (2021) model the effects of ~3ºC on mortality and find that it increases the global mortality rate by 1%, on some very pessimistic assumptions about socioeconomic development and adaptation. It’s kind of true but a bit misleading to say that this ‘could’ lead to interstate conflict or omnicidal actors. Maybe so, but how big a driver is it? I would have thought that more omnicidal actors will be created by the increasing popularity of environmentalism. The only people who I have heard say things like “humanity is a virus” are environmentalists.
Can you point me to the studies involving formal models that suggest that there will be global food system collapse at 3-4ºC of warming? I know that people like Lenton and Rockstrom say this will happen but they don’t actually produce any quantitative evidence and it’s completely implausible on its face if you just think about what a 3ºC world would be like. Economic models include effects on agriculture and they find a ~5% counterfactual reduction in GDP by 2100 for warming of 5ºC. There’s nothing missing in not modelling the tails here.
ok
What is the rationale for democratising? Is it for the sake of the intrinsic value of democracy or for producing better spending decisions? I agree it would be more democratic to have all EAs make the decision than the current system, but it’s still not very democratic—as you have pointed out, it would be a load of socially awkward anglophone white male nerds deciding on a lot of money. Why not go the whole hog and have everyone in the world decide on the money, which you could perhaps roughly approximate by giving it to the UN or something?
We could experiment with setting up one of the EA funds to be run democratically by all EAs (however we choose to assign EA status) and see whether people want to donate to it. Then we would get some sort of signal about how it performs and whether people think this is a good idea. I know I wouldn’t give it money, and I doubt Moskovitz would either. I’m not sure what your proposal is for what we’re supposed to do after this happens.
I actually think corporations are involved in collaborative mission-driven work, and your Mondragon example seems to grant this, though perhaps you are understanding 'mission' differently to me. The vast majority of organisations trying to achieve a particular goal are corporations, which are not run democratically. Most charities are also not run democratically. There is a reason for this. You explicitly said "Worker self-management has been shown to be effective, durable, and naturally better suited to collaborative, mission-oriented work than traditional top-down rule". The problems of worker self-management are well-documented, with one of the key downsides being that it creates a disincentive to expand, which would also be true if EA democratised: doing so would only dilute each person's influence over funding decisions. Another obvious downside is the loss of division of labour and specialisation, i.e. you would empower people without the time, inclination or ability to lead or make key decisions.
“Finally, we are not sure why you are so keen to repeatedly apply the term “left wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it.” Evidently from the comments I’m not the only one who picked up on this vibe. How many of the authors identify as right wing? In the post, you endorse a range of ideas associated with the left including: an emphasis on identity diversity; climate change and biodiversity loss as the primary risk to humanity; postcolonial theory; Marxist philosophy and its offshoots; postmodernist philosophy and related ideas; funding decisions should be democratised; and finally the need for EA to have more left wing people, which I take it was the implication of your response to my comment.
If you had spent the post talking about free markets, economic growth and admonishing the woke, I think people would have taken away a different message, but you didn't do that because I doubt you believe it. I think it is important to be clear and transparent about what your main aims are. As I have explained, I don't think you actually endorse some of the meta-level epistemic positions that you defend in the article. Even though the median EA is left wing, you don't want more right wing people. At bottom, I think what you are arguing for is for EA to take on a substantive left wing environmentalist position. One of the things that I like about EA is that it is focused on doing the most good without political bias. I worry that your proposals would destroy much of what makes EA good.
I don’t disagree with what is written here but the tone feels a bit aggressive/adversarial/non-collegial IMHO.
This is not the first time I’ve heard this sentiment and I don’t really understand it. If SBF had planned more carefully, if he’d been less risk-neutral, things could have been better. But it sounds like you think other people in EA should have somehow reduced EA’s exposure to FTX. In hindsight, that would have been good, for normative deontological reasons, but I don’t see how it would have preserved the amount of x-risk research EA can do. If EA didn’t get FTX money, it would simply have had no FTX money ever, instead of having FTX money for a very short time.
'it is career suicide to criticise diversity' This seems seriously hyperbolic to me, though I agree that if you're down on diversity, a non-negligible number of people will disapprove and assume you are right-wing/racist, and that could have career consequences. What's your best guess as to the proportion of academics who have had their careers seriously damaged for criticizing diversity in the fairly mild way you suggest here (i.e. that as a very generic thing, it does not improve accuracy of group decision-making), relative to those who have made such criticisms?
What percentage of Chinese people have ever been arrested for subversion?
Strong agree with most of these points; the OP seems to not… engage on the object level with some of its proposed changes. Like, not proportionally to how big the change is or how good the authors think it is or anything?
EDIT: Oh! It was Rockström, but the actual quote is: "The richest one percent must reduce emissions by a factor [of] 30, while the poorest 50% can actually increase emissions by a factor [of] 3" from Johan Rockström at #COP26: 10 New Insights in Climate Science | UN Climate Change. There he is talking about fair and just carbon emissions adjustments. The other insights he listed have economic implications as well, if you're interested. The accompanying report is available here.
The quote is:
"Action on climate change is a matter of intra- and intergenerational justice, because climate change impacts already have affected and continue to affect vulnerable people and countries who have least contributed to the problem (Taconet et al., 2020). Contribution to climate change is vastly skewed in terms of wealth: the richest 10% of the world population was responsible for 52% of cumulative carbon emissions based on all of the goods and services they consumed through the 1990–2015 period, while the poorest 50% accounted only for 7% (Gore, 2020; Oswald et al., 2020).
A just distribution of the global carbon budget (a conceptual tool used to guide policy) (Matthews et al., 2020) would require the richest 1% to reduce their current emissions by at least a factor of 30, while per capita emissions of the poorest 50% could increase by around three times their current levels on average (UNEP, 2020). Rich countries' current and promised action does not adequately respond to the climate crisis in general, and, in particular, does not take responsibility for the disparity of emissions and impacts (Zimm & Nakicenovic, 2020). For instance, commitments based on Nationally Determined Contributions under the Paris Agreement are insufficient for achieving net-zero reduction targets (United Nations Environment Programme, 2020)."
Whether 1.5 is really in reach anymore is debatable. We're approaching an El Niño year, and it could be a big one; we could see more heat in the atmosphere then, so let's see how close we get to 1.5 GAST. It won't be a true GAST value, I suppose, but there's no way we're stopping at 1.5, according to Peter Carter:
“This provides more conclusive evidence that limiting to 1.5C is impossible, and only immediate global emissions decline can possibly prevent a warming of 2C by 2050”
and he goes on from there… He prefers CO2e and radiative forcing rather than the carbon budget approach as mitigation assessment measures. It's worth a viewing as well.
There’s quite a lot to unpack in just these two sources, if you’re interested.
Then there’s Al Gore at the World Economic Forum, who drops some truth bombs: “Are we going to be able to discuss… or putting the oil industry in charge of the COP … we’re not going to disguise it anymore”
OLD: I believe it was Rockström, though I'm looking for the reference, who said that citizens of developed countries needed to cut their per capita carbon production by 30X, while in developing countries people could increase it by 3X. That's not a quote, but I think the numbers are right.
That is a counterpoint to the analysis made by some climate economists.
When I find the reference I’ll share it, because I think he was quoting an analysis from somewhere else, and that could be useful to your analysis given the sources you favor, even if you discount Rockstrom.
Overall this post seems like a grab-bag of not very closely connected suggestions. Many of them directly contradict each other. For example, you suggest that EA organizations should prefer to hire domain experts over EA-aligned individuals. And you also suggest that EA orgs should be run democratically. But if you hire a load of non-EAs and then you let them control the org… you don’t have an EA org any more. Similarly, you bemoan that people feel the need to use pseudonyms to express their opinions and a lack of diversity of political beliefs … and then criticize named individuals for being ‘worryingly close to racist, misogynistic, and even fascist ideas’ in essentially a classic example of the cancel culture that causes people to choose pseudonyms and causes the movement to be monolithically left wing.
I think this is in fact a common feature of many of the proposals: they generally seek to reduce what is differentiated about EA. If we adopted all these proposals, I am not sure there would be anything very distinctive remaining. We would simply be a tiny and interchangeable part of the amorphous blob of left wing organizations.
It is true this does not apply to all of the proposals. I agree that, for example, EAs should re-invent the wheel less and utilize domain expertise more. But I can’t say this post really caused me to update in their favour vs just randomly including some proposals I already agreed with. I think you would have been better off focusing on a smaller number of proposals and developing the arguments for them in more depth—and in particular considering counterarguments.
Well stated. This post’s heart is in the right place, and I think some of its proposals are non-accidentally correct. However, it seems that many of the post’s suggestions boil down to “dilute what it means to be EA to just being part of common left-wing thought”. Here’s a sampling of the post’s recommendations that prompt this reading:
EAs should increase their awareness of their own positionality and subjectivity, and pay far more attention to e.g. postcolonial critiques of western academia
EAs should study other ways of knowing, taking inspiration from a range of academic and professional communities as well as indigenous worldviews
EAs should not assume that we must attach a number to everything, and should be curious about why most academic and professional communities do not
EA institutions should select for diversity
Previous EA involvement should not be a necessary condition to apply for specific roles, and the job postings should not assume that all applicants will identify with the label “EA”
EA institutions should hire more people who have had little to no involvement with the EA community providing that they care about doing the most good
EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups
Speaker invitations for EA events should be broadened away from (high-ranking) EA insiders and towards, for instance:
Subject-matter experts from outside EA
Researchers, practitioners, and stakeholders from outside of our elite communities
For instance, we need a far greater input from people from Indigenous communities and the Global South
EAs should consider the impact of EA’s cultural, historical, and disciplinary roots on its paradigmatic methods, assumptions, and prioritisations
Funding bodies should within 6 months publish lists of sources they will not accept money from, regardless of legality
Tobacco?
Gambling?
Mass surveillance?
Arms manufacturing?
Cryptocurrency?
Fossil fuels?
Within 5 years, EA funding decisions should be made collectively
EA institutions should be democratised within 3 years, with strategic, funding, and hiring policy decisions being made via democratic processes rather than by the institute director or CEO
EAs should make an effort to become more aware of EA’s cultural links to eugenic, reactionary and right-wing accelerationist politics, and take steps to identify areas of overlap or inheritance in order to avoid indirectly supporting such views or inadvertently accepting their framings
I don’t think the point is that all of the proposals are inherently correct or should be implemented. I don’t agree with all of the suggestions (agree with quite a few, don’t agree with some others), but in the introduction to the ‘Suggested Reforms’ section they literally say:
Picking out only the parts you don’t agree with may come across almost like strawmanning in this case, especially since people might be reading the comments rather than the full post (I was very surprised by how long it was when I clicked on it; I don’t think I’ve seen an 84-minute forum post before). But I’m not claiming this was intentional on either of your parts.
If taking positions that are perceived as left wing makes EA more correct and more effective, then EA should still take up those positions. The OP has made a great effort to justify these points from a practical position of pursuing truth, effectiveness, and altruism, and they should not be dismissed just because they happen to fall on one side of the political spectrum. Similarly, just because an action makes EA less distinct, it doesn’t mean it’s not the correct thing to do.
This is true, but to the extent that these changes would make EA look/act like already-existing actors, I think it is fair to consider (1) how effective those similar actors are, and (2) the marginal benefit of having more muscle in or adjacent to the space those actors occupy.
Also, because I think a clear leftward drift would have significant costs, I think identifying the drift and those costs is a fair critique. As you move closer to a political pole, the range of people who will want to engage with your movement is likely to dwindle. Most people don’t want to work in, or donate to, a movement that doesn’t feel respectful toward them, and disrespect toward outsiders is a strong tendency of almost all political poles.
At present, I think you can be moderately conservative or at least centrist by US standards and find a role and a place where you feel like you fit in. I think giving that range up has significant costs.
I think a moderate leftward shift on certain issues would actually increase popularity. The current dominant politics of EA seems to be a kind of Steven Pinker-style techno-liberalism, with a free speech absolutist stance and a vague unease about social justice activism. Whether or not you agree with this position, I think its popularity among the general public is fairly low, and a shift to mainstream liberal (or mainstream conservative) opinions would make EA more appealing overall. For example, a policy of banning all discussion of “race science” would in the long term probably bring in many more people than it deterred, because almost everybody finds discussing that topic unpleasant.
If your response to this is “wait, there are other principles at play that we need to take into consideration here, not just chasing what is popular”, then you understand the reasons why I don’t find ” these positions would make EA more left wing” to be a very strong argument against them. If following principles pushes EA one way or the other, then so be it.
Fwiw, I think your view that a leftward shift in EA would increase popularity is probably Americocentric. I doubt it is true if you were to consider EA as a global movement rather than just a western one.
Also, fwiw, I’ve lost track of how many people I’ve seen dismiss EA as “dumb left-wing social justice”. EAs also tend to think the consequences of saying something are what matter. So we tend to be disliked both by free speech absolutists and by people who will never concede that properly discussing some controversial topics might be net positive despite the harm caused by talking about them. Some also see EA as tech-phobic. Steven Pinker famously dismissed EA concerns about AI alignment. If you spend time outside of EA in tech-optimism-liberal circles you see a clear divide; it isn’t culturally the same. Despite this, I think I’ve also lost count of how many people I’ve seen dismiss EA as “right-leaning libertarian tech-utopia make-billionaires-rich nonsense”.
We can’t please everyone and it is a fool’s errand to try.
One person’s “Steven Pinker-style techno-liberalism, with a free speech absolutist stance and a vague unease about social justice activism” is another person’s “Luddite free-speech-blocking SJW”.
If following principles does not clearly push EA one way or the other, then so be it as well.
My point was more that there’s a larger audience for picking one side of the political spectrum than there is for awkwardly positioning yourself in the middle in a way that annoys both sides. I think this holds for other countries as well, but of course the political battles are different. If you wanted to appeal more to Western Europe you’d go left, to Eastern Europe you’d go right, to China you’d go some weird combination of left and right, etc.
Really, I’m making the same point as you: chasing popularity at the expense of principles is a fool’s errand.
I think there’s a difference between “people in EA tend to have X, Y, and Z views” and those views being actively promoted by major orgs (which is the most natural reading of the proposal to me). Also, although free speech absolutism may not be popular in toto, most points on the US political spectrum at least find some common ground with that stance (they will agree on the outcome for certain forms of controversial speech).
I also think it likely that EA will need significant cooperation from the political system on certain things, particularly involving x-risk, and that becoming strongly left-identified sharply increases the risk you’ll be summarily dismissed by a house of Congress, the White House, or non-US equivalents.
I don’t think “race science” has any place in EA spaces, by the way.
Agree with this. We should de-politicize issues, if anything. Take climate change, for example: it is heavily politicized. But EA is not left wing just because 80,000 Hours acknowledges the severity and reality of climate change; that acknowledgment is simply very likely to be true. And if truth happens to be found more frequently in left-wing perspectives, then so be it.
I agree with you that EA shouldn’t be prevented from adopting effective positions just because of a perception of partisanship. However, there’s a nontrivial cost to doing so: the encouragement of political sameness within EA, and the discouragement of individuals or policymakers with political differences from joining EA or supporting EA objectives.
This cost, if realized, could fall against many of this post’s objectives:
We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – “Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?”
We must be willing to accept the possibility that “big” things may need to be fixed and that some of our closely-held beliefs are misguided
EAs should make a point of engaging with and listening to EAs from underrepresented disciplines and backgrounds, as well as those with heterodox/“heretical” views
EAs should consider how our shared modes of thought may subconsciously affect our views of the world – what blindspots and biases might we have created for ourselves?
EA institutions should select for diversity
Along lines of:
Philosophical and political beliefs
It also plausibly increases x-risk. If EA becomes known as an effectiveness-oriented wing of a particular political party, the perception of EA policies as partisan could embolden strong resistance from the other political party. Imagine how much progress we could have had on climate change if it wasn’t a partisan issue. Now imagine it’s 2040, the political party EA affiliates with is urgently pleading for AI safety legislation and a framework for working with China on reducing x-risk, and the other party stands firmly opposed because “these out-of-touch elitist San Francisco liberals think the world’s gonna end, and want to collaborate with the Chinese!”
I agree that EA should be accepting of a wide range of political opinions (although highly extreme and hateful views should still be shunned).
I don’t think the suggestions there are necessarily at odds with that, though. For example, increasing demographic diversity is probably going to increase political diversity as well, because people from extremely similar backgrounds tend to have fairly similar politics. If you expand to people from rural backgrounds, you’re more likely to get a country conservative; if you encourage more women, you’re more likely to get feminists; if you encourage people from Ghana, you’ll get whole new political ideologies nobody in Silicon Valley has even heard of. The politics of nerdy white men like me represent a very tiny fraction of the overall political beliefs that exist in the world.
When it comes to extreme views, it’s worth noting that what’s extreme depends a lot on the context.
A view like “homosexuality should be criminalized” is extreme in Silicon Valley but not in Uganda, where it’s a mainstream political opinion. In my time as a forum moderator, I had to deal with a user from Uganda voicing those views, and in cases like that you have to make a choice about how inclusive you want to be of people expressing very different political ideologies.
In many cases, where the political views of people in Ghana or Uganda substantially differ from those common in the US they are going to be perceived as highly extreme.
The idea that you can be accepting of the political ideologies of a place like Ghana, where the political debate runs from “Yes, we have already forbidden homosexuality, but the punishment seems too low to discourage that behavior” to “The current laws against homosexuality are enough”, while at the same time shunning highly extreme views, seems to me very unrealistic.
You might find people who are from Ghana and who have adopted woke values, but those people aren’t giving you deep diversity in political viewpoints.
For all the talk about decolonization, Silicon Valley liberals always seem very eager to deny people from Ghana or Uganda the chance to express mainstream political opinions from their home countries.
While on its face increasing demographic diversity seems like it would result in an increase in political diversity, I don’t think that is actually true.
This rests on several assumptions:
I am looking through the lens of U.S. domestic politics, and identifying political diversity as having representation from America’s two largest political parties.
Increases in diversity will not be evenly distributed across the American population. (White Evangelicals are not being targeted in a diversity push, and we would expect the addition of college grad+ women and BIPOC.)
Of all demographic groups, white college grad+ men, “Sams,” are the most politically diverse group, at 48 D, 46R. By contrast, the groups typically understood to be represented by increased diversity:
College Grad+ Women: 65 D, 30R
There is a difficulty in the lack of a BIPOC breakdown by education level, but assuming that the trend of increased education producing a greater Democratic lean holds, these are useful lower bounds:
Black: 83 D, 10R
Hispanic: 63 D, 29 R
Asian American: 72 D, 17R
While I would caution against partisanship in the evaluation of ideas and programs, I don’t think there’s anything inherently wrong in a movement having a partisan lean to its membership. A climate change activist group can work in a non-partisan manner, but the logical consequence of their membership will be primarily Democratic voters, because that party appeals to their important issue.
I think this aspect of diversity would offer real value in terms of political diversity, and could potentially add value to EA. I think clarification on what it means to “increase diversity” is required to assess the utility. I am biased by my experience, in which organizations become more “diverse” in skin color while becoming more culturally and politically homogenous.
https://www.pewresearch.org/politics/2020/06/02/democratic-edge-in-party-identification-narrows-slightly/
Reducing “political diversity” down to the 2-bit question of “which American political party do they vote for?” is a gross simplification. For example, while black people are more likely to vote Democrat, a black Democrat is half as likely as a white Democrat to identify as “liberal”. This is because there are multiple political axes and multiple political issues to consider, starting with the standard economic-vs-social political compass model.
This definitely becomes clearest when we escape from a narrow focus on elite college graduates in the US and look at people from different nations entirely. You will have an easier time finding a Maoist in China than in Texas, for example. They might vote D in the US as a result of perceiving the party as less anti-immigrant, but they’re not the same as a white D voter from the suburbs.
As for your experiences where political and ethnic diversity were anti-correlated: did the organisation make any effort on other aspects of diversity, other than skin colour, or did they just, say, swap out a couple of MIT grads of one race for a couple of MIT grads of a different race? Given that you say the culture didn’t change either, the latter seems likely.
I agree with you that many of the broad suggestions can be read that way. However, when the post suggests which concrete groups EA should target for the sake of philosophical and political diversity, they all seem to line up on one particular side of the aisle:
What politics are postcolonial critics of Western academia likely to have?
What politics are academics, professional communities, or indigenous Americans likely to have?
When the term “traditionally underrepresented groups” is used, does it typically refer to rural conservatives, or to other groups? What politics are these other groups likely to have?
As you pointed out, this post’s suggestions could be read as encouraging universal diversity, and I agree that the authors would likely endorse your explanation of the consequences of that. I also don’t think it’s unreasonable to say that this post is coded with a political lean, and that many of the post’s suggestions can be reasonably read as nudging EA towards that lean.
Hmmm, a few of these don’t sound like common left-wing thought (I hope democracy isn’t a left-wing value now), but I agree with the sentiment of your point.
I guess some of the co-writers lean towards identitarian left politics and they want EA to be more in line with this (edit: although this political leaning shouldn’t invalidate the criticisms in the piece). One of the footnotes would seem to signal their politics clearly, by linking to pieces with what I’d call a left-wing ‘hit piece’ framing:
“We should remember that EA is sometimes worryingly close to racist, misogynistic, and even fascist ideas. For instance, Scott Alexander, a blogger that is very popular within EA, and Caroline Ellison, a close associate of Sam Bankman-Fried, speak favourably about “human biodiversity”, which is the latest euphemism for “scientific” racism. ”
Believing that democracy is a good way to run a country is a different view than believing that it’s an effective way to run an NGO. The idea that NGOs whose main funding comes from donors as opposed to membership dues should be run democratically seems like a fringe political idea and one that’s found in certain left-wing circles.
(Edited.)
This seems to border on strawmanning. We should try to steelman their suggestions. It seems fine that some may be incompatible, or that taken all together they would make us indistinguishable from the left (which I wouldn’t expect to happen anyway; we’d probably still care far more about impact than the left does on average), since we wouldn’t necessarily implement them all, or all in the same places, and there can be other ways to prevent issues.
Furthermore, overly focusing on specific suggestions can derail conversations too much into the details of those suggestions and issues with them over the problems in EA highlighted in the post. It can also discourage others from generating and exploring other proposals. It may be better to separate these discussions, and this one seems the more natural one to start with. This is similar to early broad cause area research for a cause (like 80,000 Hours profiles), which can then be followed by narrow intervention (and crucial consideration) research in various directions.
As a more specific example where I think your response borders on a strawman: in hiring non-EA experts and democratizing orgs, non-EAs won’t necessarily make up most of the org, and they are joining an org with a specific mission and set of values, so will often self-select for at least some degree of alignment, and may also be explicitly filtered during hiring for some degree of alignment. This org can remain an EA org. There is a risk that it won’t, but there are ways to mitigate such risks, e.g. requiring supermajorities for certain things, limiting the number of non-EAs, ensuring the non-EAs aren’t disproportionately aligned in any particular non-EA directions by hiring them to be ideologically diverse, retaining (or increasing) the power of a fairly EA-aligned board over the org so that it can step in if it strays too far from EA. There are also other ways to involve non-EA experts so that they wouldn’t get voting rights, e.g. as contractors or collaborators.
Or, indeed, some orgs should be democratized and others should hire more non-EA experts, but none need do both.
In a post this long, most people are probably going to find at least one thing they don’t like about it. I’m trying to approach this post as constructively as I can, i.e. “what I do find helpful here” rather than “how I can most effectively poke holes in this?” I think there’s enough merit in this post that the constructive approach will likely yield something positive for most people as well.
I like this comment.
I feel that EAs often have isolated demands for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) when it comes to criticisms.
I think the ideal way to read criticisms is to steelman as you read.
I don’t think it’s very surprising that 80% of the value comes from 20% of the proposed solutions.
I think this post would have been significantly better as a series, partly so people could focus on/vote on the parts independently.
There’s a fairly even mix of good-faith and bad-faith criticism here.
A lot of the good-faith criticism is almost a carbon copy of the winners of last year’s EA criticism contest.
First off, thank you to everyone who worked on this post. Although I don’t agree with everything in it, I really admire the passion and dedication that went into this work—and I regret that the authors feel the need to remain anonymous for fear of adverse consequences.
For background: I consider myself a moderate EA reformer—I actually have a draft post I’ve been working on that argues that the community should democratically hire people to write moderately concrete reform proposals. I don’t have a ton of the “Sam” characteristics, and the only thing of value I’ve accepted from EA is one free book (so I feel free to say whatever I think). I am not a longtermist and know very little about AI alignment (there, I’ve made sure I’d never get hired if I wanted to leave my non-EA law career?).
Even though I agree with some of the suggested reforms here, my main reaction to this post is to affirm that my views are toward incremental/moderate—and not more rapid/extensive—reform. I’m firmly in the Global Health camp myself, and that probably colors my reaction to a proposal that may have been designed more with longtermism in mind. There is too much in the post for anyone to fully react to without several days of thinking and writing time, so I’ll hit on a few high points.
1. I think it’s critical to look at EA activity in the broader context of other actors working in the same cause area.[1] I suggest that much of EA’s value in spaces where it is a low-percentage funder is in taking a distinctive approach that zeroes in on the blind spots of bigger fish. [2] In other words, an approach that maximizes diversity of perspective within EA may not maximize the effective diversity of perspective in the cause area as a whole. Also, the important question is not necessarily how good EA’s epistemic tools would work in a vacuum with no one else in a given space.
In spaces where EA is a niche player, I am concerned that movements in the direction of looking like other actors may well be counterproductive. In addition to GH&D, I believe that EA remains a relatively small player in animal advocacy and even in fields like pandemic prevention compared to the total amount of resources in those areas.
2. I feel that the proposal holds EA at certain points to a much higher standard than comparable movements. World Vision doesn’t (to my knowledge) host a message board where everyone goes to discuss various decisions its leadership has made. I doubt the Gates Foundation devotes a percentage of its spend to hiring opposition researchers. And most major charitable funders don’t crowdsource major funding decisions. Other than for certain meta spending (which raises some conflict-of-interest flags for me), I don’t see anything that justifies making these demands of EA unless one is simultaneously making them of similar outfits.
Given that a large portion of the authors’ critique is about learning from others outside EA, I think that the lack of many of their proposed reforms in many similarly-sized, mature charitable movements is a significant data point. Although I believe in more process, consultation, and “bureaucracy” than I think the median EA does, I think there has to be a recognition that these things incur significant costs as well.
3. Portions of this reform package sound to my ears like the dismantling of EA and its replacement with a new movement, Democratic Altruism (“DA”). It seems unlikely that much of classic EA would be left after radical democratization, at least: there is likely to be a flood of incoming people, many with prior commitments, attracted by the ability to vote on how to spend $500MM of Uncle (Open) Phil’s money every year. Whoever controls the voter registration function would ultimately control the money.
Now, I think DA is a very interesting idea, and if I had a magic wand to slice off a tiny slice of each Western charitable spend and route it to a DA movement, I think that would more likely than not be net positive. I’m just not clear on why EA should feel obliged to be the ashes from which DA arises—or why EA’s funders should feel obliged to fund DA while all the other big-money interests get to keep their handpicked people making their funding-allocation decisions.
4. As I noted in a comment elsewhere on this thread, I don’t think the community has much leverage over its funders. Unfortunately, it is much easier to come up with interesting ideas than with people who want to and can fund them. That goes especially for Grade-A funders, given that the proposal suggests rejecting, or at least minimizing, various classes of less-desirable donors.
As a recent post here reminds us, “[o]nly the young and the saints are uncompromised.” There’s rarely a realistic, easy way to get large sums of money for one’s cause without becoming compromised to some extent in the process. There’s the traditional way of cultivating an army of small/mid-size donors, but that takes many years, and you end up spending lots of resources and energy on care and feeding of the donors instead of on getting stuff done. I suspect most movements will spend a lot of time waiting to launch, and seeking funding, if they will only take Grade-A donor money. That’s a massive tradeoff (I really value my bednets!), and it’s not one I am personally desirous of making.
One final, more broadly conciliatory point: EA can be too focused on what happens under the EA brand name and can seem relatively less interested in empowering people to do good effectively outside the brand. It doesn’t have a monopoly on either effectiveness or altruism, and I’ve questioned (without getting much in the way of upvotes) whether it makes sense to have a unified EA movement at this point.
I like the idea of providing different options for people where they can do good as effectively as possible in light of their unique skills, passions, and interests. For some people, that’s going to be classic GiveWell-style EA (my own likely best fit), for others it is going to be something like the current meta, for yet others it’s going to be something like what is in this proposal, and there are doubtless many other potential flavors I haven’t thought about. Some people in the community are happy with the status quo; some people are not. The ideal might be to have spaces where everyone would be locally happy and effective, rather than try to preserve or reform the entire ecosystem into something one personally likes (but isn’t conducive to others).
For example, in Global Health & Development, you have a number of NGOs [e.g., World Vision at $1.2B is several times EA’s entire spend on GH&D; see generally here for a list of big US charities] plus the truly big fish like the Gates Foundation [$6.7B, although not all GH&D] and various governments. So the vast majority of this money is being moved through democratic processes, Gates-type subject-matter experts, and traditional charities—not through EA.
It’s scandalous to me that some of the opportunities GiveWell has found were not quickly swallowed up by the big fish.
I like the choice to distill this into a specific cluster.
I think this full post definitely portrays a very different vision of EA than what we have, and than what I think many current EAs want. It seems like some particular cluster of this community might be in one camp, in favor of this vision.
If that were the case, I would also be interested in this being experimented with, by some cluster. Maybe even make a distinct tag, “Democratic Altruism” to help organize conversation on it. People in this camp might be most encouraged to directly try some of these proposals themselves.
I imagine there would be a lot of work to really put forward a strong idea of what a larger “Democratic Altruism” would look like, and also, there would be a lengthy debate on its strengths and weaknesses.
Right now I feel like I keep on seeing similar ideas here being argued again and again, without much organization.
(That said, I imagine any name should come from the group advocating this vision)
Yeah, I would love to see people go out and try this experiment and I like the tag “democratic altruism”. There’s a chance that if people with this vision were to have their own space, then these tensions might ultimately dissipate.
[For context, I’m definitely in the social cluster of powerful EAs, though don’t have much actual power myself except inside my org (and my ability to try to persuade other EAs by writing posts etc); I had more power when I was actively grantmaking on the EAIF but I no longer do this. In this comment I’m not speaking for anyone but myself.]
This post contains many suggestions for ways EA could be different. The fundamental reason that these things haven’t happened, and probably won’t happen, is that no-one who would be able to make them happen has decided to make them happen. I personally think this is because these proposals aren’t very good. And so:
people in EA roles where they could adopt these suggestions choose not to
and people who are capable/motivated enough that they could start new projects to execute on these ideas (including e.g. making competitors to core EA orgs) end up deciding not to.
And so I wish that posts like this were clearer about their theory of change (henceforth abbreviated ToC). You’ve laid out a long list of ways that you wish EA orgs behaved differently. You’ve also made the (IMO broadly correct) point that a lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies; I’ll refer to this group as “core org EAs” for brevity. But I don’t understand how you hope to cause EA orgs to change in these ways.
Maybe your ToC is that core org EAs read your post (and similar posts) and are intellectually persuaded by your suggestions, and adopt them.
If that’s your goal, I think you should try harder to understand why core org EAs currently don’t agree with your suggestions, and try to address their cruxes. For this ToC, “upvotes on the EA Forum” is a useless metric—all you should care about is persuading a few people who have already thought about this all a lot. I don’t think that your post here is very well optimized for this ToC.
(Note that this doesn’t mean that they think your suggestions aren’t net positive, but these people are extremely busy and have to choose to pursue only a tiny fraction of the good-seeming things (which are the best-seeming-to-them things) so demonstrating that something is net positive isn’t nearly enough.)
Also, if this is the ToC, I think your tone should be one of politely suggesting ways that someone might be able to do better work by their own lights. IMO, if some EA funder wants to fund an EA org to do things a particular way, you have no particular right to demand that they do things differently, you just have the ability to try to persuade them (and it’s their prerogative whether to listen).
For what it’s worth, I am very skeptical that this ToC will work. I personally think that this post is very unpersuasive, and I’d be very surprised if I changed my mind to agree with it in the next year, because I think the arguments it makes are weak (and I’ve been thinking about these arguments for years, so it would be a bit surprising if there was a big update from thinking about them more.)
Maybe your ToC is that other EAs read your post and are persuaded by your suggestions, and then pressure the core org EAs to adopt some of your suggestions even though they disagree with them.
If so, you need to think about which ways EAs can actually apply pressure to the core org EAs. For example, as someone who prioritizes short-AI-timelines longtermist work over global health and development, I am not very incentivized to care about whether random GWWC members will stop associating with EA if EA orgs don’t change in some particular way. In contrast, if you convinced all the longtermist EAs that they should be very skeptical of working on longtermism until there was a redteaming process like the one you described, I’d feel seriously incentivized to work on that redteaming process. Right now, the people I want to hire mostly don’t agree with you that the redteaming process you named would be very informative; I encourage you to try to persuade them otherwise.
Also, I think you should just generally be scared that this strategy won’t work? You want core org EAs to change a bunch of things in a bunch of really specific ways, and I don’t think that you’re actually going to be able to apply pressure very accurately (for similar reasons that it’s hard for leaders of the environmentalist movement to cause very specific regulations to get passed).
(Note that I don’t think you should engage in uncooperative behavior (e.g. trying to set things up specifically so that EA orgs will experience damage unless they do a particular thing). I think it’s totally fair game to try to persuade people of things that are true because you think that that will cause those people to do better things by their own lights; I think it’s not fair game to try to persuade people of things because you want to force someone’s hand by damaging them. Happy to try to explain more about what I mean here if necessary; for what it’s worth I don’t think that this post advocates being uncooperative.)
Perhaps you think that the core org EAs think of themselves as having a duty to defer to self-identified EAs, and so if you can just persuade a majority of self-identified EAs, the core org EAs will dutifully adopt all the suggestions those self-identified EAs want.
I don’t think this is realistic–I don’t think that core EA orgs mostly think of themselves as executing on the community’s wishes, I think they (as they IMO should) think of themselves as trying to do as much good as possible (subject to the constraint of being honest and reliable etc).
I am somewhat sympathetic to the perspective that EA orgs have implied that they do think of themselves as trying to represent the will of the community, rather than just viewing the community as a vehicle via which they might accomplish some of their altruistic goals. Inasmuch as this is true, I think it’s bad behavior from these orgs. I personally try to be clear about this when I’m talking to people.
Maybe your ToC is that you’re going to start up a new set of EA orgs/projects yourself, and compete with current EA orgs on the marketplace of ideas for funding, talent, etc? (Or perhaps you hope that some reader of this post will be inspired to do this?)
I think it would be great if you did this and succeeded. I think you will fail, but inasmuch as I’m wrong it would be great if you proved me wrong, and I’d respect you for actively trying much more than I respect you for complaining that other people disagree with you.
If you wrote a post trying to persuade EA donors that they should, instead of other options, donate to an org that you started that will do many of the research projects you suggested here, I would think that it was cool and admirable that you’d done that.
For many of these suggestions, you wouldn’t even need to start orgs. E.g. you could organize/fundraise for research into “the circumstances under which individual subjective Bayesian reasoning actually outperforms other modes of thought, by what criteria, and how this varies by subject/domain”.
I’ll generically note that if you want to make a project happen, coming up with the idea for the project is usually a tiny fraction of the effort required. Also, very few projects have been made to happen by someone having the idea for them and writing it up, and then some other person stumbling across that writeup and deciding to do the project.
Maybe you don’t have any hope that anything will change, but you heuristically believe that it’s good anyway to write up lists of ways that you think other people are behaving suboptimally. For example, I have some sympathy for people who write op-eds complaining about ways that their government is making poor choices, even if they don’t have a detailed theory of change.
I think this is a fine thing to do, when you don’t have more productive ways to channel your energy. In the case of this post in particular, I feel like there are many more promising theories of change available, and I think I want to urge people who agree with it to pursue those.
Overall my main complaint about this post is that it feels like it’s fundamentally taking an unproductive stance–I feel like it’s sort of acting as if its goal is to persuade core EAs, but actually it’s just using that as an excuse to ineffectually complain or socially pressure; if it were trying to persuade, more attention would be paid to tradeoffs and cruxes. People sympathetic to the perspective in this post should either seriously attempt to persuade, or they should resort to doing things themselves instead of complaining when others don’t do those things.
(Another caveat on this comment: there are probably some suggestions made in this post that I would overall agree should be prioritized if I spent more time thinking about them.)
(In general, I love competition. For example, when I was on the EAIF I explicitly told some grantees that I thought that their goal should be to outcompete CEA, and I’ve told at least one person that I’d love it if they started an org that directly competes with my org.)
I strongly downvoted this response.
The response says that EA will not change (“people in EA roles [will] … choose not to”), that making constructive critiques is not a good use of time (“[not a] productive way to channel your energy”), and that the critique should have been better (“I wish that posts like this were clearer”, “you should try harder”, “[maybe try] politely suggesting”).
This response seems to put all the burden of making progress in EA onto those trying to constructively critique the movement, people putting their limited spare time into trying to be helpful, and to lift that burden from those who are actively paid to work on improving this movement. I don’t think you understand how hard it is to write something like this, how much effort must have gone into making each of these critiques readable and understandable to the EA community. It is not their job to try harder, or to be more polite. It is our job, your job, my job, as people in EA orgs, to listen and to learn and to consider and, if we can, to do better.
Rather than saying the original post should be better maybe the response should be that those reading the original post should be better at considering the issues raised.
I cannot think of a more dismissive or disheartening response. I think this response will actively dissuade future critiques of EA (I feel less inclined to try my hand at critical writing after seeing this as the top response) and as such will make the community more insular and less epistemically robust. I also think this response will make the authors of this post feel like their efforts are wasted and unheard.
I think this is a weird response to what Buck wrote. Buck isn’t paid either to reform the EA movement or to respond to criticism on the EA Forum, and he decided to spend his limited time expressing how things realistically look from his perspective.
I think it is good if people write responses like that, and such responses should be upvoted, even if you disagree with the claims. Downvotes should not express ‘I disagree’, but ‘I don’t want to read this’.
Even if you believe EA orgs are horrible and should be completely reformed, in my view you should be glad that Buck wrote his comment, as you now have a better idea of what people like him may think.
It’s important to understand that the alternative to this comment is not Buck writing a 30-page detailed response. The alternative is, in my guess, just silence.
Thank you for the reply Jan. My comment was not about whether I disagree with any of the content of what Buck said. My comment was objecting to what came across to me as a dismissive, try harder, tone policing attitude (see the quotes I pulled out) that is ultimately antithetical to the kind, considerate and open to criticism community that I want to see in EA. Hopefully that explains where I’m coming from.
From my perspective, it feels like the burden of making progress in EA is substantially on the people who actually have jobs where they try to make EA go better; my take is that EA leaders are making the correct prioritization decision by spending their “time for contemplating ways to improve EA” budget mostly on other things than “reading anonymous critical EA Forum posts and engaging deeply”.
I think part of my model is that it’s totally normal for online critiques of things to not be very interesting or good, while you seem to have a strong prior that online critiques are worth engaging with in depth. Like, idk, did you know that literally anyone can make an EA Forum account and start commenting and voting? Public internet forums are famously bad; why do you believe that this one is worth engaging extensively with?
I consider this a good outcome—I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future (though I can imagine changing my mind on this).
I understand that my comment poses some risk of causing people who would have made useful criticisms feel discouraged from doing so. My current sense is that at the margin, this cost is smaller than the other benefits of my comment?
Remember that I thought that their efforts were already wasted and unheard (by anyone who will realistically do anything about them); don’t blame the messenger here. I recommend instead blaming all the people who upvoted this post and who could, if they wanted to, help to implement many of the shovel-ready suggestions in this post, but who will instead choose not to do that.
This was a disappointing comment to read from a well-respected researcher, and it negatively updates me against encouraging people to work and collaborate with you in the future, because I think it reflects a callousness as well as an insensitivity towards power dynamics which I would not want to see in a manager or someone running an AI alignment organization. In my opinion, it is fair game for me to make truthful comments that cause you to feel less incentivized to write comments like this in future (though I can imagine changing my mind on this).
I do not actually endorse this comment above. It is used as an illustration of why a true statement alone might not mean it is “fair game”, or a constructive way to approach what you want to say. Here is my real response:
In terms of whether it is “fair game” or not: consider some junior EA who made a comment to you, “I would prefer an EA forum without your critical writing on it”. This has basically zero implications for you. No one is going to take them seriously unless they provide receipts and point out what they disliked. But this isn’t the case in reverse. So I think if you are someone seen to be a “powerful EA”, or someone whose opinion is taken pretty seriously, you should take significant care when making statements like this, because some people might update simply based on your views. I haven’t engaged with much of weeatquince’s work, but EA is a sufficiently small community that these kinds of opinions can probably have a harmful impact on someone’s involvement in EA. I don’t think the disclaimers around “I no longer do grantmaking for the EAIF” are particularly reassuring on this front. For example, I imagine if Holden came and made a comment in response to someone, “I find your posts unhelpful, distracting, and unpleasant. I would prefer an EA forum without your critical writing on it”, this could lead to information cascades and reputational repercussions that don’t accurately reflect weeatquince’s actual quality of work. You are not Holden, but it would be reasonable for you to expect your opinions to have sway in the EA community.
FWIW, your comment will make people less inclined to post under their main accounts, and I think a forum environment where people feel even more compelled to make alt accounts, because they are worried about reputational repercussions from someone like you coming along with a comment like “I would prefer an EA Forum without your critical writing on it”, is intimidating and not ideal for community engagement. Because you haven’t provided any justification for your claim aside from Rohin’s comment, which points at strawmanning to some extent, I don’t know what this means for my work and whether my comments will pass your bar. Why not just let other users downvote low-quality comments, and if you have a particular quality bar for posts that you think the downvotes don’t capture, just filter your frontpage so you only see posts with >50 or >100 karma? If you disagree with the way the people running the forum are using the karma system, or with their idea of who should post and what the signal:noise ratio should be, you should take that up with the EA Forum folks. Because if I were a new EA member, I’d be deleting my draft posts after reading a comment like this, and I’d find it disconcerting that I’m being encouraged to post by the mods but might bump into senior EA members who say this about my good-faith contributions.
As a random aside, I thought that your first paragraph was totally fair and reasonable and I had no problem with you saying it.
Thanks for your comment. I think your comment seems to me like it’s equivocating between two things: whether I negatively judge people for writing certain things, and whether I publicly say that I think certain content makes the EA Forum worse. In particular, I did the latter, but you’re worrying about the former.
I do update on people when they say things on this forum that I think indicate bad things about their judgment or integrity, as I think I should, but for what it’s worth I am very quick to forgive and don’t hold long grudges. Also, it’s quite rare for me to update against someone substantially from a single piece of writing of theirs that I disliked. In general, I think people in EA worry too much about being judged negatively for saying things and underestimate how forgiving people are (especially if a year passes or if you say particularly reasonable things in the meantime).
@Buck – As a hopefully constructive point I think you could have written a comment that served the same function but was less potentially off-putting by clearly separating your critique between a general critique of critical writing on the EA Forum and critiques of specific people (me or the OP author).
I agree! But given this, I think the two things you mention often feel highly correlated, and it’s hard for people to actually know, when you make a statement like that, that there’s no negative judgement either from you or from other readers of your statement. It also feels a bit weird to suggest there’s no negative judgement if you also think the forum is a better place without their critical writing?
I also agree with this, which is why I wanted to push back on your comment, because I think it would be understandable for someone to read your comment and worry more about being judged negatively, and if you think people are poorly calibrated, you should err on the side of giving people reasons to update in the right direction, instead of potentially exacerbating the misconception.
I think you and Buck are saying different things:
you are saying “people in EA should worry less about being judged negatively, because they won’t be judged negatively”,
Buck is saying “people in EA should worry less about being judged negatively, because it’s not so bad to be judged negatively”.
I think these points have opposite implications about whether to post judgemental comments, and about what impact a judgemental comment should have on you.
Oh interesting, I hadn’t noticed that interpretation; thanks for pointing it out. That being said, I do think it’s much easier for someone in a more established senior position, who isn’t particularly at risk of bad outcomes from negative judgements, to suggest that negative judgements are not so bad, or to use that as a justification for making them.
I think this is somewhat unfair. I think it is unfair to describe the OP as “unpleasant”; it seems to be clearly and impartially written and to go out of its way to make it clear it is not picking on individuals. Also, I feel like you have cherry-picked a post from my post history that was less well written; some of my critical writing was better received (like this). If you do find engaging with me unpleasant, I am sorry; I am open to feedback, so feel free to send me a DM with constructive thoughts.
By “unpleasant” I don’t mean “the authors are behaving rudely”, I mean “the content/framing seems not very useful and I am sad about the effect it has on the discourse”.
I picked that post because it happened to have a good critical comment that I agreed with; I have analogous disagreements with some of your other posts (including the one you linked).
Thanks for your offer to receive critical feedback.
Thank you Buck that makes sense :-)
I think we very strongly disagree on this. I think critical posts like this have a very positive effect on discourse (in EA and elsewhere) and am happy with the framing of this post and a fair amount (although by no means all) of the content.
I think my belief here is rooted in quite strong lifetime experiences in favour of epistemic humility, in observing human overconfidence (especially in the domain of doing good), in positive experiences of learning from good-faith criticisms, and in academic evidence that more views in decision making lead to better decisions. (I also think there have been some positive changes made as a result of the recent criticism contests.)
I think it would be extremely hard to change my mind on this. I can think of a few specific cases (to support your views) where I am very glad criticisms were dismissed (e.g. the effective animal advocacy movement not truly engaging with abolitionist animal advocate arguments), but this seems to be more the exception than the norm. Maybe my mind could be changed through more such case studies of people doing good really effectively without investing in the kind of learning that comes from well-meaning criticisms.
Also, I wonder if we should try (if we can find the time) co-writing a post on giving and receiving critical feedback in EA. Maybe we diverge in views too much and it would be a train wreck of a post, but it could be an interesting exercise to try, and maybe we could try to pull out a theory of change. I do agree there are things that I, the OP authors, and those responding to the OP could all do better.
I think the criticism of the theory of change here is a good example of an isolated demand for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/), which I feel EAs often apply when it comes to criticisms.
It’s entirely reasonable to express your views on an issue on the EA forum for discussion and consideration, rather than immediately going directly to relevant stakeholders and lobbying for change. I think this is what almost every EA Forum post does and I have never before seen these posts criticised as ‘complaining’.
I thought Buck’s comment contained useful information, but was also impolite. I can see why people in favour of these proposals would find it frustrating to read.
One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy-in, which I assume is why they posted this publicly. Whilst they are busy, I’d be pretty disappointed if the core EAs didn’t read this and take the ideas seriously (I’ve tried tagging some on Twitter), and if you’re correct that presenting such a detailed set of ideas on the forum is not enough to get core EAs to take the ideas seriously, I’d be concerned about where there are places for people to get their ideas taken seriously. I’m lucky: I can walk into Trajan House and knock on people’s doors, but others presumably aren’t so lucky, and you would hope that a forum post that generated a lot of discussion would be taken seriously. Moreover, if you are concerned about the ideas presented here not getting a fair hearing, maybe you could try raising the salient ideas to core EAs in your social circles?
I think that the class of arguments in this post deserve to be considered carefully, but I’m personally fine with having considered them in the past and decided that I’m unpersuaded by them, and I don’t think that “there is an EA Forum post with a lot of discussion” is a strong enough signal that I should take the time to re-evaluate a bunch—the EA Forum is full of posts with huge numbers of upvotes and lots of discussion which are extremely uninteresting to me.
(In contrast, e.g. the FTX collapse did prompt me to take the time to re-evaluate a bunch of what I thought about e.g. what qualities we should encourage vs discourage in EAs.)
I’d be pretty interested in you laying out in depth why you have basically decided to dismiss this very varied and large set of arguments. (Full disclosure: I don’t agree with all of them, but in general I think they’re pretty good.) A self-admitted EA leader posting a response pooh-poohing a long, thought-out criticism with very little argumentation, and mostly criticising it on tangential ToC grounds (a ToC you don’t think will succeed, or want to succeed, anyway?), seems like it could be construed as pretty bad faith and problematic. I don’t normally reply like this, but I think your original reply has essentially tried to play the man and not the ball, and I would expect better from a self-identified ‘central EA’ (not saying this is some massive failing, and I’m sure I’ve done similar myself a few times).
I interpreted Buck’s comment differently. His comment reads to me, not so much like “playing the man,” and more like “telling the man that he might be better off playing a different game.” If someone doesn’t have the time to write out an in-depth response to a post that takes 84 minutes to read, but they take the time to (I’d guess largely correctly) suggest to the authors how they might better succeed at accomplishing their own goals, that seems to me like a helpful form of engagement.
Maybe you’re correct, and that’s definitely how I interpreted it initially, but Buck’s response to me gave a different impression. Maybe I’m wrong, but it just strikes me as a little strange that, if Buck feels he has considered these ideas and basically rejects them, he would want to suggest to this group of concerned EAs how to better push for the ideas he disagrees with. Maybe I’m wrong or have misinterpreted something, though; I wouldn’t be surprised.
My guess was that Buck was hopeful that, if the post authors focus their criticisms on the cruxes of disagreement, that would help reveal flaws in his and others’ thinking (“inasmuch as I’m wrong it would be great if you proved me wrong”). In other words, I’d guess he was like, “I think you’re probably mistaken, but in case you’re right, it’d be in both of our interests for you to convince me of that, and you’ll only be able to do that if you take a different approach.”
[Edit: This is less clear to me now—see Gideon’s reply pointing out a more recent comment.]
I guess I’m a bit skeptical of this, given that Buck has said this to weeatquince “I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future”.
This evidence doesn’t update me very much.
I interpret this quote to be saying, “this style of criticism — which seems to lack a ToC and especially fails to engage with the cruxes its critics have, which feels much closer to shouting into the void than making progress on existing disagreements — is bad for the forum discourse by my lights. And it’s fine for me to dissuade people from writing content which hurts discourse”
Buck’s top-level comment is gesturing at a “How to productively criticize EA via a forum post, according to Buck”, and I think it’s noble to explain this to somebody even if you don’t think their proposals are good. I think the discourse around the EA community and criticisms would be significantly better if everybody read Buck’s top level comment, and I plan on making it the reference I send to people on the topic.
Personally I disagree with many of the proposals in this post and I also wish the people writing it had a better ToC, especially one that helps make progress on the disagreement, e.g., by commissioning a research project to better understand a relevant consideration, or by steelmanning existing positions held by people like me, with the intent to identify the best arguments for both sides.
My interpretation of Buck’s comment is that he’s saying that, insofar as he’s read the post, he sees that it’s largely full of ideas that he’s specifically considered and dismissed in the past, although he is not confident that he’s correct in every particular.
You want him to explain why he dismissed them in the past
And are confused about why he’d encourage other people to champion the ideas he disagrees with
I think the explanation is that Buck is pretty pessimistic that these are by and large good ideas, enough not to commit more of his time to considering each one individually more than he has in the past. However, he sees that the authors are thinking about them a lot right now, and is inviting them to compete or collaborate effectively—to put these ideas to a real test of persuasion and execution. That seems far from “pooh-poohing” to me. It’s a piece of thoughtful corrective feedback.
You have asked Buck to “lay out in depth” his reasons for rejecting all the content in this post. That seems like a big ask to me, particularly given that he does not think they are good ideas. It would be like asking an evolutionary biologist to “lay out in depth” their reasons for rejecting all the arguments in Of Pandas and People. Or, for a personal example, I went to the AAAS conference right before COVID hit, and got to enjoy the spectacle of a climate change denier getting up in front of the ballroom and asking the geoengineering scientist who’d been speaking whether scientists had considered the possibility that the Earth is warming up because it’s getting closer to the sun. His response was “YES WE’VE CONSIDERED IT.”
If that question asker went home, wrote a whole book full of reasons why the Earth might be moving closer to the sun, posted it online, and it got a bunch of upvotes, I don’t think that means that suddenly the scientist needs to consider all of the arguments more closely, revisit the issue, or that rejecting the ideas gives one an obligation to explain all of one’s reasons.
One way you could address this problem is by choosing one specific argument from this post that you find most compelling, and seeing if you can invite Buck into a debate on that topic, or to explain his thinking on it. I often find that to be productive of good conversation. But your comment read to me as an attempt to both mischaracterize the tone of Buck’s comment and call into question the degree to which he’s thought about these issues. If you are accusing him of not actually having given these ideas as much thought as he claims, I think you should come right out and say it.
I agree with the text of your comment, but think it’d be better if you had chosen an analogy about things that are more contested (rather than clearly false, like creationism or AGW denial or whatever).
This avoids the connotation that Buck is clearly right to dismiss such criticisms.
One better analogy that comes to mind is asking Catholic theologians about the implausibility of a virgin birth, but unfortunately, I think religious connotations have their own problems.
I agree that this would have been better, but it was the example that came to mind and I’m going to trust readers to take it as a loose analogy, not a claim about which side is correct in the debate.
Fair! I think having maximally accurate analogies that help people be truth-seeking is hard, and of course the opportunity cost of maximally cooperative writing is high.
I’m sympathetic to the position that it’s bad for me to just post meta-level takes without defending my object-level position.
Thanks for this, and on reading other comments etc, I was probably overly harsh on you for doing so.
I took the time to read through and post where I agree and disagree. However, I understand why people might not have wanted to spend the time, given that the document didn’t really try to engage very hard with the reasons for not implementing these proposals. I feel bad saying that because the authors clearly put a lot of time and effort into it, but I honestly think it would have been better if the group had chosen a narrower scope and focused on making a persuasive argument for that. And then maybe worked on the next section after that.
But who knows? There seems to be a bit of energy around this post, so maybe something comes out of this regardless.
I think you’re right about this, and that my comment was somewhat unclearly equivocating between the suggestions aimed at the community and the suggestions aimed at orgs. (Though the suggestions aimed at the community also give me a vibe of “please, core EA orgs, start telling people that they should be different in these ways” rather than “here is my argument for why people should be different in these ways”.)
I think all your specific points are correct, and I also think you totally miss the point of the post.
You say you thought about these things a lot. Maybe lots of core EAs have thought about these things a lot. But what core EAs have or haven’t considered is completely opaque to us. Not so much because of secrecy, but because opaqueness is the natural state of things. So lots of non-core EAs are frustrated about lots of things. We don’t know how our community is run or why.
On top of that, there are actually consequences for complaining or disagreeing too much. Funders like to give money to people who think like them. Again, this requires no explanation. This is just the natural state of things.
So as non-core EAs, we notice things that seem wrong, and we’re afraid to speak up against them, and it sucks. That’s what this post is about.
And of course it’s naive and shallow and doesn’t add much for anyone who has already thought about this for years. For the authors this is the start of the conversation, because they, and most of the rest of us, were not invited to all the previous conversations like this.
I don’t agree with everything in the post. Lots of the suggestions seem nonsensical in the ways you point out. But I agree with the notion of “can we please talk about this”. Even just to acknowledge that some of these problems do exist.
It’s much easier to build support for a solution if there is common knowledge that the problem exists. When I started organising in the field of AI Safety, I was focusing on solving problems that weren’t on the map for most people. This caused lots of misunderstandings, which made it harder to get funded.
This is correct. But projects have happened because there was widespread knowledge of a specific problem, and then someone else decided to design their own project to solve that problem.
This is why it is valuable to have an open conversation to create a shared understanding of which problems EA should currently focus on. This includes discussions about cause prioritisation, but also discussions about meta/community issues.
In that spirit, I want to point out that it seems to me that core EAs have no understanding of what things look like (what information is available, etc.) from a non-core EA perspective, and vice versa.
I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact. I disagree with this kind of view.
Obviously it’s the responsibility of EAs and EA orgs to actively seek out ways that they could do things better. But I’m just noting that it seems unlikely to me that this post will actually persuade EA orgs to do things differently, and so if the authors had hoped to have impact via that route, they should try another plan instead.
If you and other core org EAs have thoroughly considered many of the issues the post raises, why isn’t there more reasoning transparency on this? Besides being a good practice in general (especially when the topic is how the EA ecosystem fundamentally operates), it would make it a lot easier for the authors and others on the forum to deliver more constructive critiques that target cruxes.
As far as I know, the cruxes of core org EAs are nowhere to be found for many of the topics this post covers.
I think Lark’s response is reasonably close to my object-level position.
My quick summary of a big part of my disagreement: a major theme of this post suggests that various powerful EAs hand over a bunch of power to people who disagree with them. The advantage of doing that is that it mitigates various echo chamber failure modes. The disadvantage of doing that is that now, people who you disagree with have a lot of your resources, and they might do stuff that you disagree with. For example, consider the proposal “OpenPhil should diversify its grantmaking by giving half its money to a randomly chosen Frenchman”. This probably reduces echo chamber problems in EA, but it also seems to me like a terrible idea.
I don’t think the post properly engages with the question “how ought various powerful people weigh the pros and cons of transferring their power to people they disagree with”. I think this question is very important, and I think about it a fair bit, but I think that this post is a pretty shallow discussion of it that doesn’t contribute much novel insight.
I encourage people to write posts on the topic of “how ought various powerful people weigh the pros and cons of transferring their power to people they disagree with”; perhaps such posts could look at historical examples, or mechanisms via which powerful people can get the echo-chamber-reduction effects without the random-people-now-use-your-resources-to-do-their-random-goals effect.
Some things that I might come to regret about my comment:
I think it’s plausible that it’s bad for me to refer to disagreeing with arguments without explaining why.
I’ve realized that some commenters might not have seen these arguments before, which makes me think that there is more of an opportunity for me to explain why I think these arguments are wrong. (EDIT I’m less worried about this now, because other commenters have weighed in making most of the object-level criticisms I would have made.)
I was not very transparent about my goal with this comment, which is generally a bad sign. My main goal was to argue that posts like this are a kind of unhealthy way of engaging with EA, and that readers should be more inclined to respond with “so why aren’t you doing anything” when they read such criticisms.
Fwiw, I think an acknowledgement of soft power was missing.
I strongly disagree with this response, and find it bizarre.
I think assessing this post according to a limited number of possible theories of change is incorrect, as influence is often diffuse and hard to predict or measure.
I agree with freedomandutility’s description of this as an “isolated demand for [something like] rigor”.
There seem to have been a lot of responses to your comment, but there are some points which I don’t see being addressed yet.
I would be very interested in seeing another similarly detailed response from an ‘EA leader’ whose work focuses on community building/community health. (Put on top as this got quite long; rationale below, but first:)
I think at least one goal of the post is to get community input (as I’ve seen in many previous forum posts) to determine the best suggestions, without claiming to have all the answers. Quoted from the original post (intro to ‘Suggested Reforms’):
This suggests to me that instead of trying to convince the ‘EA leadership’ of any one particular change, they want input from the rest of the community.
From a community building perspective, I can (epistemic status: brainstorming, but plausible) see that a comment like yours can be harmful, and create a more negative perception of EA than the post itself. Perhaps new/newer/potential (and even existing) EAs will read the original post; they may skim it, read parts, or even read the comments first (I don’t think very many people will have read all 84 minutes, and the comments on long posts sometimes point to key/interesting sections). And a top comment: yours, highly upvoted.
Impressions that they can potentially draw from your response (one or more of the below):
There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster)
If the authors of this post are asking for community opinion on which changes are good after laying out their concerns, the top comment (for a while at least) criticising this for a lack of theory of change suggests a low regard among the EA leadership for the opinions of the EA community overall (regardless of agreement with any specific element of the original post)
Unless I am very high up and in the core EA group, I am unlikely to be listened to
While EA is open to criticism in theory, it is not open to changing based on criticism as the leadership has already reasoned about this as much as they are going to
I am not saying that any of the above is true, or that it is absolute (i.e. someone would be led to believe one of these things absolutely, rather than on a sliding scale). But if I were new to EA, it is plausible that this comment would be far more likely to put me off continuing to engage than anything written in the actual post itself. Perhaps you can see how it may be perceived this way, even if it was not intended this way?
I also think some of the suggestions are likely more relevant and require more thought from people actively working in e.g. community building strategy, than someone who is CTO of an AI alignment research organisation (from your profile)/a technical role more generally, at least in terms of considerations that are required in order to have greatest impact in their work.
Thanks for your sincere reply (I’m not trying to say other people aren’t sincere, I just particularly felt like mentioning it here).
Here are my thoughts on the takeaways you thought people might have.
As I said in my comment, I think that it’s true that the actions of EA-branded orgs are largely influenced by a relatively small number of people who consider each other allies and (in many cases) friends. (Though these people don’t necessarily get along or agree on things—for example, I think William MacAskill is a well-intentioned guy but I disagree with him a bunch on important questions about the future and various short-term strategy things.)
Not speaking for anyone else here, but it’s totally true that I have a pretty low regard for the quality of the average EA Forum comment/post, and don’t think of the EA Forum as a place where I go to hear good ideas about ways EA could be different (though occasionally people post good content here).
For whatever it’s worth, in my experience, people who show up in EA and start making high-quality contributions quickly get a reputation among people I know for having useful things to say, even if they don’t have any social connection.
I gave a talk yesterday where someone I don’t know made some objections to an argument I made, and I provisionally changed my mind about that argument based on their objections.
I think “criticism” is too broad a category here. I think it’s helpful to provide novel arguments or evidence. I also think it’s helpful to provide overall high-level arguments where no part of the argument is novel, but it’s convenient to have all the pieces in one place (e.g. Katja Grace on slowing down AI). I (perhaps foolishly) check the EA Forum and read/skim potentially relevant/interesting articles, so it’s pretty likely that I end up reading your stuff and thinking about it at least a little.
You’re right that my actions are less influenced by my opinions on the topics raised in this post than community building people’s are (though questions about e.g. how much to value external experts are relevant to me). On the other hand, I am a stakeholder in EA culture, because capacity for object-level work is the motivation for community building.
I think that’s particularly true of some of the calls for democratization. The Cynic’s Golden Rule (“He who has the gold, makes the rules”) has substantial truth both in the EA world and in almost all charitable movements. In the end, if the people with the money aren’t happy with the idea of random EAs spending their money, it just isn’t going to happen. And to the extent there is a hint of cutting off or rejecting donors, that would lead to a much smaller EA to the extent it was followed. In actuality, it wouldn’t be—someone is going to take the donor’s money in almost all cases, and there’s no EA High Council to somehow cast the rebel grantee from the movement.
Speaking as a moderate reform advocate, the flipside of this is that the EA community has to acknowledge the origin of power and not assume that the ecosystem is somehow immune to the Cynic’s Golden Rule. The people with power and influence in 2023 may (or may not) be wise and virtuous, but they are not in power (directly) because they are wise and virtuous. They have power and influence in large part because it has been granted to them by Moskovitz and Tuna (or their delegates, or by others with power to move funding and other resources). If Moskovitz and Tuna decided to fire Open Phil tomorrow and make all their spending decisions based on my personal recommendations, I would become immensely powerful and influential within EA irrespective of how wise and virtuous I may be. (If they are reading, this would be a terrible idea!!)
“If elites haven’t already thought of/decided to implement these ideas, they’re probably not very good. I won’t explain why. ”
“Posting your thoughts on the EA Forum is complaining, but I think you will fail if you try to do anything different. I won’t explain why, but I will be patronising.”
“Meaningful organisational change comes from the top down, and you should be more polite in requesting it. I doubt it’ll do anything, though.”
Do you see any similarities between your response here and the problems highlighted by the original post, Buck?
The tone policing, dismissing criticism out of hand, lack of any real object-level engagement, pretending community responsibility doesn’t exist, and patronisingly trying to shut down others is exactly the kind of chilling effect that this post is drawing attention to.
The fact that a comment from a senior community member has led to deference from other community members, leading to it becoming the top-voted comment, is not a surprise. But my support for such weak critiques (vague dismissals that things are ‘likely net-negative’, or his own opinion stated with little to no justification) is pretty low nonetheless.
And the wording is so patronising and impolite, too. What a perfect case study in the kinds of behaviours EA should no longer tolerate.
Interesting that another commenter has the opposite view, and criticises this post for being persuasive instead of explanatory!
It may just be disagreement, but I think it might be a result of a bias among readers towards focusing on framing instead of engaging with object-level views when it comes to criticisms.
One irony is that it’s often not that hard to change EA orgs’ minds. E.g. on the forum suggestion, which is the one that most directly applies to me: you could look at the posts people found most valuable and see if a more democratic voting system better correlates with what people marked as valuable than our current system. I think you could probably do this in a weekend, it might even be faster than writing this article, and it would be substantially more compelling.[1]
(CEA is actually doing basically this experiment soon, and I’m >2/3 chance the results will change the front page somehow, though obviously it’s hard to predict the results of experiments in advance.)
If anyone reading this actually wants to do this experiment please DM me – I have various ideas for what might be useful and it’s probably good to coordinate so we don’t duplicate work
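For anyone considering taking this up, here is a minimal sketch of what the analysis could look like. It is not the experiment CEA is running; the file name, the column names (karma, unweighted_votes, valuable_marks) and the choice of Spearman rank correlation are all assumptions made purely for illustration.

```python
# Minimal sketch (not CEA's actual experiment): compare how well two scoring
# schemes track which posts readers marked as "most valuable".
# Assumed columns in a hypothetical export "forum_posts.csv":
#   karma            - score under the current strong-vote-weighted system
#   unweighted_votes - score if every voter counted equally (one person, one vote)
#   valuable_marks   - number of users who flagged the post as "most valuable"
import pandas as pd
from scipy.stats import spearmanr

posts = pd.read_csv("forum_posts.csv")

# Rank-correlate each scoring scheme with the "most valuable" signal.
current_rho, _ = spearmanr(posts["karma"], posts["valuable_marks"])
democratic_rho, _ = spearmanr(posts["unweighted_votes"], posts["valuable_marks"])

print(f"Current karma system vs. 'most valuable' marks: rho = {current_rho:.2f}")
print(f"One-person-one-vote vs. 'most valuable' marks:  rho = {democratic_rho:.2f}")
```

Whichever scheme correlates better with the “most valuable” marks would be the more compelling evidence, though a real version would also need to handle confounders like post age and visibility.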
Relatedly, I think a short follow-up piece listing 5-10 proposed specific action items tailored to people in different roles in the community would be helpful. For example, I have the roles of (1) low-five-figure donor, and (2) active forum participant. Other people have roles like student, worker in an object-level organization, worker in a meta organization, object-level org leader, meta org leader, larger donor, etc. People in different roles have different abilities (and limitations) in moving a reform effort forward.
I think “I didn’t walk away with a clear sense of what someone like me should do if I agree with much/all of your critique” is helpful/friendly feedback. I hesitate to even mention it because the authors have put so much (unpaid!) work into this post already, and I don’t want to burden them with what could feel like the expectation of even more work. But I think it’s still worth making the point for future reference, if for no other reason.
I think it’s fairly easy for readers to place ideas on a spectrum and identify trade offs when reading criticisms, if they choose to engage properly.
I think the best way to read criticisms is to steelman as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test.
Can you clarify this statement? I’m confused about a couple of things:
Why is it only “arguable” that you had more power when you were an active grantmaker?
Do you mean you don’t have much power, or that you don’t use much power?
I removed “arguable” from my comment. I intended to communicate that even when I was an EAIF grantmaker, that didn’t clearly mean I had “that much” power—e.g. other fund managers reviewed my recommended grant decisions, and I moved less than a million dollars, which is a very small fraction of total EA spending.
I mean that I don’t have much discretionary power (except inside Redwood). I can’t unilaterally make many choices about e.g. EA resource allocation. Most of my influence comes via arguing that other people should do things with discretionary power that they have. If other people decided to stop listening to me or funding me, I wouldn’t have much recourse.
I appreciate the clarification!
It sounds to me that what you’re saying is that you don’t have any formal power over non-Redwood decisions, and most of your power comes from your ability to influence people. Furthermore, this power can be taken away from you without you having any choice in the matter. That seems fair enough. But then you seem to believe that this means you don’t actually have much power? That seems wrong to me. Am I misunderstanding something?
I agree that we ignore experts in favour of people who are more value-aligned. Seems like a mistake.
As a weak counter-point to this, I have found in the past that experts who are not value-aligned can find EA ways of thinking almost incomprehensible, such that it can be very difficult to extract useful information from them. I have experienced talking to a whole series of non-EA experts and really struggling to get them to even engage with the questions I was asking (a lot of “I just haven’t thought about that”), whereas I got a ton of value very quickly from talking to an EA grad student in the area.
I empathise with this from my own experience. Having been quite actively involved in EA for 10 years, and speaking from my own area of expertise, which is finance and investment, risk management and, to a lesser extent, governance (as a senior partner and risk committee member of one of the largest hedge funds in Europe), I agree that sometimes we ignore ‘experts’ in favour of people who are more value-aligned.
It doesn’t mean I believe we should always defer to ‘experts’. Sometimes a fresh perspective is useful to explore and maximise potential upside, but sometimes ‘experts’ are useful in minimising downside risks that people with less experience may not be aware of, and also in saving the time and effort of reinventing existing best practices upon which improvements could be made.
I guess it is a balance between the two which varies with the context, but ‘experts’ are perhaps more likely to be useful in areas such as operations, legal and compliance, financial risk management, and probably others.
I’m doing a project on how we should study xrisk, and I’d love to talk to you about your risk management work etc. Would you be up for a call?
Hi Gideon, do you mean me? I have very, very little detailed knowledge of xrisk and do not believe my risk management expertise would be relevant. But happy to chat. Maybe you can PM me?
Sure!
More broadly I often think a good way to test if we are right is if we can convince others. If we can’t that’s kind of a red flag in itself.
This is valuable, but at a certain point the market of ideas relies on people actually engaging in object level reasoning. There’s an obvious failure mode in rejecting adopting new ideas on the sole meta-level basis that if they were good they would already be popular. Kind of like the old joke of the economist who refuses to pick up hundred-dollar bills off the ground because of the Efficient Market Hypothesis.
EA & Aspiring Rationalism have grown fairly rapidly, all told! But they’re also fairly new. “Experts in related fields haven’t thought much about EA approaches” is more promising than “experts in related fields have thought a lot about EA approaches and have standard reasons to reject them.”
(Although “most experts have clear reasons to reject EA thinking on their subject matter” is closer to being the case in AI … but that’s probably also the field with the most support for longtermist & x-risk type thinking & where it’s seen the fastest growth, IDK.)
We sort of seem to be doing the opposite to me—see for example some of the logic behind this post and some of the comments on it (though I like the post and think it’s useful).
Agree that it is a red flag. However, I also think that sometimes we have to bite the bullet on this.
Only a small red flag, IMO, because it’s rather easy to convince people of alluring falsehoods, and not so easy to convince people of uncomfortable truths.
This seems quite hand-wavy and I’m skeptical of it. Could you give an example where “we” have ignored the experts? And when you say experts, you probably refer to expert reasoning or scientific consensus and not appeals to authority.
Your statement gained a lot of upvotes, but “EA ignores experts” just fits the prevailing narrative too well, and I haven’t seen any examples of it. Happy to update if I find one.
For some related context: In the past GiveWell used to solicit external reviews by experts of their work, but has since discontinued the practice. Some of their reasons are (I can imagine similar reasons applying to other orgs):
“There is a question around who counts as a “qualified” individual for conducting such an evaluation, since we believe that there are no other organizations whose work is highly similar to GiveWell’s.”
“Given the time investment these sorts of activities require on our part, we’re hesitant to go forward with one until we feel confident that we are working with the right person in the right way and that the research they’re evaluating will be representative of our work for some time to come.”
I’ve hypothesized that one potential failure mode is that experts are not used to communicating with EA audiences, and EA audiences tend to be more critical/skeptical of ideas (on a rational level). Thus, it may be the case that experts aren’t always as explicit about some of the concerns or issues, perhaps because they expect their audiences to defer to them or they have a model of what things people will be skeptical of and thus that they need to defend/explain, but that audience model doesn’t apply well to EA. I think there may be a case/example to highlight with regards to nuclear weapons or international relations, but then again it is also possible that the EA skepticism in some of these cases is valid due to higher emphasis on existential risks rather than smaller risks.
I generally/directionally agree, and also wrote about a closely related concern previously: https://forum.effectivealtruism.org/posts/tdaoybbjvEAXukiaW/what-are-your-main-reservations-about-identifying-as-an?commentId=GB8yfzi8ztvr3c6DC
Nice. Thanks. Really well written, very clear language, and I think this is pointed in a pretty good direction. Overall I learned a lot.
I do have the sense it maybe proves too much—i.e. if these critiques are all correct then I think it’s surprising that EA is as successful as it is, and that raises alarm bells for me about the overall writeup.
I don’t see you doing much acknowledging what might be good about the stuff that you critique—for example, you critique the focus on individual rationality over e.g. deferring to external consensus. But it seems possible to me that the movement’s early focus on individual rationality was the cause of attracting great people into the movement, and that without that focus EA might not be anything at all! If I’m right about that then are we ready to give up on whatever power we gained from making that choice early on?
Or, as a metaphor, you might be saying something like “EA needs to ‘grow up’ now” but I am wondering if EA’s childlike nature is part of its success and ‘growing up’ would actually have a chance to kill the movement.
“I don’t see you doing much acknowledging what might be good about the stuff that you critique”
I don’t think it’s important for criticisms to do this.
I think it’s fair to expect readers to view things on a spectrum, and interpret critiques as an argument in favour of moving in a certain direction along a spectrum, rather than going to the other extreme.
Criticisms don’t have to do this, but they would be more persuasive if they did.
I agree but having written long criticisms of EA, doing this consistently can make the writing annoyingly long-winded.
I think it’s better for EAs to be steelmanning criticisms as they read, especially via “would I agree with a weaker version of this claim” and via the reversal test, than for writers to explore trade-offs for every proposed imperfection in EA.
Agreed. When people require literally everything to be written in the same place by the same author/small group, it disincentivises writing potentially important posts.
“I do have the sense it maybe proves too much—i.e. if these critiques are all correct then I think it’s surprising that EA is as successful as it is, and that raises alarm bells for me about the overall writeup”
Agreed. Chesterton’s fence applies here.
In what ways is EA very successful? Especially if you go outside the area of global health?
hm, at a minimum: moving lots of money, making a big impact on the discussion around AI risk, and probably also making a pretty big impact on animal welfare advocacy.
My loose understanding of farmed animal advocacy is that something like half the money, and most of the leaders, are EA-aligned or EA-adjacent. And the moral value of their $s is very high. Like you just see wins after wins every year, on a total budget across the entire field on the order of tens of millions.
I’m glad to hear that. I’ve been very happy about the successes of animal advocacy, but hadn’t imagined EA had such a counterfactual impact in it.
To be clear, from my perspective what I said is moderate but not strong evidence that EA is counterfactual for said wins. I don’t know enough about the details to be particularly confident.
A lot of organisations with totally awful ideas and norms have nonetheless ended up moving lots of money and persuading a lot of people. You can insert your favourite punching-bag pseudoscience movement or bad political party here. The OP is not saying that the norms of EA are worse than those organisations’, just that they’re not as good as they could be.
Are we at all sure that these have had, or will have, a positive impact?
We should absolutely not be sure, for example because the discussion around AI risk up to date has probably accelerated rather than decelerated AI timelines. I’m most keen on seeing empirical work around figuring out whether longtermist EA has been net positive so far (and a bird’s eye, outside view, analysis of whether we’re expected to be positive in the future). Most of the procedural criticisms and scandals are less important in comparison.
Relevant thoughts here include self-effacing ethical theories and Nuño’s comment here.
Where I agree:
Experimentation with decentralised funding is good. I feel it’s a real shame that EA may not end up learning very much from the FTX regrant program because all the staff at the foundation quit (for extremely good reasons!) before many of the grants were evaluated.
More engagement with experts. Obviously, this trades off against other things and it’s easier to engage with experts when you have money to pay them for consultations, but I’m sure there are opportunities to engage with them more. I suspect that a lot of the time the limiting factor may simply be people not knowing who to reach out to, so perhaps one way to make progress on this would be to make a list of experts who are willing for people at EA orgs to reach out to them, subject to availability?
I would love to see more engagement from Disaster Risk Reduction, Future Studies, Science and Technology Studies, etc. I would encourage anyone with such experience to consider posting on the EA Forum. You may want to consider extracting this section into a separate forum post for greater visibility.
I would be keen to see experiments where people vote on funding decisions (although I would be surprised if this were the right funding mechanism for the vast majority of funds rather than a supplement).
Where I disagree:
I suspect it would be a mistake for EA to shift too much towards always just adopting the expert consensus. As EAs we need to back ourselves, but without becoming overconfident. If EAs had just deferred to the consensus of development studies experts, EA wouldn’t have gotten off the ground. If EAs had just deferred to the most experienced animal advocates, that would have biased us towards the wrong interventions. If EAs had just deferred to ML researchers, we would have skipped over AI Safety as a cause area.
I don’t think EA is too focused on AI safety. In fact, I suspect that in a few years, we’ll probably feel that we underinvested in it given how fast it’s developing.
I see value-alignment as incredibly important for a movement that actually wants to get things done, rather than being pulled in several different directions. I agree that it comes with significant risks, such as those you’ve identified, however, I think that we just have to trust in our ability to navigate those risks.
I agree that we need to seek critiques beyond what the existing red-teaming competition and cause exploration prizes have produced, although I’m less of a fan of your specific proposals. My ideal proposal would be to get a few teams of smart, young EAs who already have a strong understanding of why things are the way they are in EA, and give them a grant to spend time thinking about how they would construct the norms and institutions of EA if they were building them from the ground up. Movements tend to advance by having the youth break with tradition, so I would favour accelerating this natural process over the suggestions presented.
While I would love to see EA institutions being able to achieve a broader base of funding, this feels more like something that would be nice to have, rather than something that you should risk disrupting your operations over.
Voting isn’t a panacea. Countries have a natural answer to who gets to vote—every citizen. I can’t see open internet polls as a good idea due to how easily they can be manipulated, so we’d then require a definition of a member. This would require either membership fees or recording attendance at EA events, so there would be a lot of complexity in making this work.
I think I basically agree here, and I think it’s mostly about a balance; criticism should, I think, be seen as pulling in a direction rather than wanting to go all the way to an extreme (although there definitely are people who want that extreme, who I strongly disagree with!). On AI safety, I think in a few years it will look like EA overinvested in the wrong approaches (i.e. helping OpenAI and not opposing capabilities). I think I agree the post sees voting/epistemic democracy in too rosy-eyed a way. On the other hand, a philosopher of science I know told me that xrisk was the most hierarchical field they’d seen. Moreover, I think democracy can come in gradations, and I don’t think EA will ever be perfect. On your point about youth, I think that’s interesting. I’m not sure the current culture would necessarily allow this, though, with many critical EAs I know essentially scared to share criticism, or having been sidelined by people with more power who disagree, or having had the credit for their achievements taken by more senior people, making it harder for them to have the legitimacy to push for change, etc. This is why I like the cultural points this post makes, as it does seem we need a better culture to achieve our ideals.
“On AI safety, I think in a few years it will look like EA overinvested in the wrong approaches (ie helping OpenAI and not opposing capabilities)”—I agree that this was a mistake.
“I’m not sure the current culture would necessarily allow this though, with many critical EAs I know essentially scared to share criticism”—that’s worrying. Hopefully seeing this post be highly upvoted makes people feel less scared.
Given that EA Global already has an application process that does some filtering, you could likely use the attendance lists.
Lot of good points here. One slight critique and one suggestion to build on the above. If I seem at all confrontational in tone, please note that this is not my aim—I think you made a solid comment.
Critique: I have a great sense of caution around the belief that “smart, young EAs”, and giving them grants to think about stuff, are the best solution to something, no matter how well they understand the community. In my mind, one of the most powerful messages of the OP is the one regarding a preference for orthodox yet inexperienced people over those with demonstrable experience but little value alignment. Youth breaking from tradition doesn’t seem a promising hope when a very large portion of this community is, and always has been, in their youth. Indeed, EA was built from the ground up by almost the same people in your proposed teams. I’m sure there are smart, young EAs readily available in our labour force to accept these grants, far more readily than people who also deeply understand the community but do not consider themselves EAs (whose takes should be most challenging) or have substantial experience in setting good norms and cultural traits (whose insights will surely be wiser than ours). I worry the availability and/or orthodoxy of the former is making them seem more ideal than the latter.
Suggestion: I absolutely share your concerns about how the EA electorate would be decided upon. As an initial starting point, I would suggest that voting power be given to people who take the Giving What We Can pledge and uphold it for a stated minimum time. It serves the costly signalling function without expecting people to simply buy “membership”. My suggestion has very significant problems, that many will see at first glance, but I share it in case others can find a way to make it work. Edit: It seems others have thought about this a lot more than I have, and it seems intractable.
I don’t see my suggestion of getting a few groups of smart, young EAs as exclusive of engaging with experts.
Obviously they trade off in terms of funds and organiser effort, but it wouldn’t actually be that expensive to pay the basic living expenses of a few young people.
If 100% of these suggestions were implemented, I would expect EA in 5 years’ time to look significantly worse (less effective, helping fewer people/animals, and possibly having more FTX-type scandals).
If the best 10% were implemented I could imagine that being an improvement.
Possibly high effort, but what do you see as the best 10% (and worst 10%)?
I like this comment and I think this is the best way to be reading EA criticisms—essentially steelmanning as you read and not rejecting the whole critique because parts seem wrong.
Especially because bad-faith actors in EA have a documented history of spending large amounts of time and effort posing as good-faith actors, including heavy use of anonymous sockpuppet accounts.
I appreciate the large effort put into this post! But I wanted to throw out one small part that made me distrust it as a whole. I’m a US PhD in cognitive science, and I think it’d be hard to find a top cognitive scientist in the country (e.g., say, who regularly gets large science grants from governmental science funding, gives keynote talks at top conferences, publishes in top journals, etc.) who takes Iain McGilchrist seriously as a scientist, at least in the “The Master & His Emissary” book. So citing him as an example of an expert whose findings are not being taken seriously makes me worry that you handpicked a person you like, without evaluating the science behind his claims (or without checking “expert consensus”). Which I think reflects the problems that arise when you start trying to be like “we need to weigh together different perspectives”. There’s no easy heuristics for differentiating good science/reasoning from pseudoscience, without incisive, personal inquiry—which is, as far as I’ve seen, what EA culture earnestly tries to do. (Like, do we give weight to the perspective of ESP people? If not, how do we differentiate them from the types of “domain experts” we should take seriously?)
I know this was only one small part of the post, and doesn’t necessarily reflect the other parts—but to avoid a kind of Gell-Mann Amnesia, I wanted to comment on the one part I could contribute to.
I think to some extent this is fair. This strikes me as a post put together by non-experts, so I wouldn’t be surprised if there are aspects of the post that are wrong. The approach I’ve taken is to treat this as a list of possible criticisms, which probably contains a number of issues. The idea is to steelman the important ones and reject the ones we have reason to reject, rather than reject the whole. I think it’s fair to have more scepticism, though, and I certainly would have liked a fuller bibliography, with experts on every area weighing in, but I suspect that the ‘ConcernedEAs’ probably didn’t have the capacity for this.
I agree with all that! I think my worry is that this one issue reflects the deep, general problem that it’s extremely hard to figure out what’s true, and relatively simple and commonly-suggested approaches like ‘read more people who have studied this issue’, ‘defer more to domain-experts’, ‘be more intellectually humble and incorporate a broader range of perspectives’ don’t actually solve this deep problem (all those approaches will lead you to cite people like McGilchrist).
Yes I think this is somewhat true, but I think that this is better than the status quo of EA at the moment.
One thing to do, which I am trying to do, is actually get more domain experts involved in things around EA and talk to them more about how this stuff works. Rather than deferring to anonymous ConcernedEAs or to a small group of very powerful EAs on this, we should actually try to build a diverse epistemic community with many perspectives involved, which is what I interpret as the core claim of this manifesto.
Thanks a lot for writing this detailed and thoughtful post, I really appreciate the time you spent on putting this information and thinking together.
So, let’s assume I am a ‘leader’ in the EA community being involved in some of the centralised decision-making you are talking about (which might or might not be true). I’m very busy but came across this post, it seemed relevant enough and I spent maybe a bit less than an hour skim-ish reading it. I agree with the vast majority of the object-level points you make. I didn’t really have time to think about any of the concrete proposals you are making and there are a lot of them, so it seems unlikely I will be able to find the time. However, since—as I said—I broadly agree with lots of what you’re saying I might be interested in supporting your ideas. What, concretely do you want me to do tomorrow? Next week?
Thank you so much for your response, DM’d!
It would have been nice to see a public response here!
Especially given all the stuff you just wrote about how EA is too opaque, insular, unaccountable etc. But mainly just because I, as a random observer, am extremely curious what your object-level answer to the question they posed is.
Very fair: DMing for the sake of the anonymity of both parties.
I did a close read of “Epistemic health is a community issue.” The part I think is most important that you’re underemphasizing is that, according to the source you cite, “The diversity referred to here is diversity in knowledge and cognitive models,” not, as you have written, diversity “across essentially all dimensions.” In other words, for collective intelligence, we need to pick people with diverse knowledge and cognitive models relevant to the task at hand, such as having relevant but distinct professional backgrounds. For example, if you’re designing a better malaria net, you might want both a materials scientist and an epidemiologist, not two materials scientists.
Age and cultural background might be relevant in some cases, but that really depends on what you’re working on and why these demographic categories seem especially pertinent to the task at hand. If I were designing a nursing home in a team composed of young entrepreneurs, I would want old people either to be on the team, or to be consulted routinely as the project evolved, because adding that component of diversity would be relevant to the project. If I were developing a team to deploy bed nets in Africa, I might want to work with people from the specific villages where they will be distributed.
As your own source says:
EA institutions are struggling to find people who are both value-aligned and have relevant diverse cognitive models and knowledge, both of which are prerequisites for collective intelligence. This is a natural problem for EA institutions to have, so you need to explain why EA is making this tradeoff of value alignment vs. cognitive/knowledge diversity suboptimally for your critique to bite.
Where your post and the collective intelligence research you’re basing it off seem to diverge is that while you want EA to select for diversity across all dimensions, perhaps for its own sake, the CI research you cited argues that you need to select for forms of cognitive models and knowledge relevant to the task at hand. 80,000 Hours might be wrong in ignoring humanities and social science backgrounds, or those without a university education, but I think your argument would be much stronger if you articulated something specific about what those disciplines would bring to the table.
Whole disciplines exist that are overwhelmingly not value-aligned with EA. Making an effort to include them seems to me like it would add a skillset of questionable task-relevant value while creating fundamental and destructive internal conflict. Because EA does have a specific set of theses on what constitutes “effectiveness” and “altruism,” which we can try to define in the abstract but which can perhaps be better articulated by the concrete types of interventions we tend to support, such as global health and X risk issues. Not everybody supports those kinds of projects, or at least not prioritizing them as highly as we do, or using the kinds of underlying models we use to prioritize them (i.e. the ITN framework), and if they are that far removed from the mission of our movement, then we should probably not try to include them in it.
And if you’re trying to run a movement dedicated to improving the entire world? Which is what we are doing?
That is a fair rebuttal.
I would come back to the model of a value-aligned group with a specific set of tasks seeking to maximize its effectiveness at achieving the objective. This is the basis for the collective intelligence research that is cited here as the basis for their recommendations for greater diversity.
If you frame EA as a single group trying to achieve the task of “make the entire world better for all human beings by implementing high-leverage interventions” then it does seem relevant to get input from a diverse cross-section of humanity about what they consider to be their biggest problems and how proposed solutions would play out.
One way to get that feedback is to directly include a demographically representative sample of humanity in EA as active participants. I have no problem with that outcome. I just think we can 80/20 it by seeking feedback on specific proposals.
I also think that basing our decisions about what to pursue based on the personal opinions of a representative sample of humanity will lead us to prioritize the selfish small issues of a powerful majority over the enormous issues faced by underrepresented minorities, such as animals, the global poor, and the denizens of the far future. I think this because I think that the vast majority of humanity is not value-aligned with the principle of altruistic utility maximization.
For these two main reasons—the ability to seek feedback from relevant demographics when necessary, and the value mismatch between EA and humanity in general—I do not see the case for us being unable to operate effectively given our current demographic makeup. I do think that additional diversity might help. I just think that it is one of a range of interventions, it’s not obvious to me that it’s the most pressing priority, and broadening EA to pursue diversity purely for its own sake risks value misalignment with newcomers. Please interpret this as a moderate stance along the lines of “I invite diversity, I just think it’s not the magic solution to fix all of EA’s problems with effectiveness, and the important thing is ‘who does EA talk to’ more than ‘who calls themselves an EA’.”
Hi AllAmericanBreakfast,
The other points (age, cultural background, etc.) are in the Critchlow book, linked just after the paper you mention.
Where exactly is that link? I looked at the rest of the links in the section and don’t see it.
The word before the Yang & Sandberg link
This is the phrase where you introduce the Yang & Sandberg link:
The word before the link is “community,” which does not contain a link.
“For”
OOOOHHHHHHHH
Yeah, this kind of multiple-links approach doesn’t work well in this forum, since there’s no way to see that the links are separate.
I’d recommend separating links that are in neighbouring words (e.g. see here and here).
As other comments have noted, a lot of the proposals seem to be bottlenecked by funding and/or people leading them.
I would recommend people interested in these things to strongly consider Earning to Give or fundraising, or even just actually donating much more of their existing income or wealth.
If the 10 authors of this post can find another 10 people sympathetic to their causes, and each donate or fundraise on average $50k/year, they would have $1M/year of funding for causes that they think are even better than the ones currently funded by EA! If they get better results than existing EA funds, people and resources would flock to them!
If you think the current funding allocation is bad, the value of extra funding that you would be able to allocate better becomes much higher.
Especially if you want to work on climate change, I suspect fundraising would be easier than for any other global cause area. Instead of asking Moskovitz/Openphil for funding, it might be even higher EV to ask Gates, Bezos, or other billionaires. Anecdotally, when I talk to high net-worth people about EA, the first comment is almost always “but what about climate change, which clearly is the most important thing to fund?”
I disagree with a lot of the content and the general vagueness/“EA should” framing of this post, but I appreciate the huge effort that went into it.
After reading most of the post, as a person who is in EA to give money/support, it’s not clear to me how I can help you.
This piece is pretty long and I didn’t think I’d like it, but I put that aside. I think it’s pretty good with many suggestions I agree with.
Thanks for writing it. I guess it wasn’t easy, I know it’s hard to wrestle with communities you both love and disagree with. Thanks for taking the time and energy to write this.
On democratic control:
Any kind of democratic control that tries to have “EAs at large” make decisions will need to decide on who will get to vote. None of the ways I can think of for deciding seem very good to me (donating a certain amount? having engaged a certain amount in a visible way?). I think they’re both bad as methods to choose a group of decisionmakers and more broadly harmful. “You have done X so now you are A Real EA” is the message that will be sent to some and “Sorry, you haven’t done X, so you’re not A Real EA” to others, regardless of the method used for voter selection. I expect that it will become a distraction or discouragement from the actual real work of altruism.
I also worry that this discussion is importing too much of our intuitions about political control of countries. Like most people who live in democracies, I have a lot of intuitions about why democracy is good for me. I’d put them into two categories:
Democracy is good for me because I am a better decisionmaker about myself than other people are about me
Most of this is a feeling that I know best about myself: I have the local knowledge that I need to make decisions about how I am ruled
But other parts of it are procedural: I think that when other people decide on my behalf, they’ll arrange things in their own favor
Democracy is good for me because it’s deontologically wrong for other people to rule me
I don’t think either of those categories really apply here. EA is not about me, is not for me, does not rule me, and should not take my own desires as a Sam into account any more than it does, probably.
I wasn’t planning on commenting, but since you addressed me by name, I felt compelled to respond.
Democratization changes the relative power distribution within EA. The people proposing it are usually power-seeking in some way and already have plans to capitalize off of a democratic shift.
Strongly agree with the idea that we should stop saying “EA loves criticism”.
I think everyone should have a very strong prior that they are bad at accepting criticism, and everyone should have a very strong prior that they overestimate how good they are at accepting criticism.
I think a better way of looking at this is that EA is very inviting of criticism but not necessarily that responsive to it. There are like 10 million critiques on the EA Forum, most with serious discussion and replies. Probably very few elicit actual change in EA. (I am of the opinion that most criticism just isn’t very good, and that there is a reason it hasn’t been adopted, but obviously this is debatable).
I don’t think I like this framing, because being responsive to criticism isn’t inherently good, because criticism isn’t always correct. I think EA is bad at the important middle step between inviting criticism and being responsive to it, which is seriously engaging with criticism.
Agree, I don’t see many “top-ranking” or “core” EAs writing exhaustive critiques (posts, not just comments!) of these critiques. (OK, they would likely complain that they have better things to do with their time, and they often do, but I have trouble recalling any aside from (debatably) some of the responses to AGI Ruins / Death With Dignity.)
As was said elsewhere, I think Holden’s is an example. And I think Will questioning the hinge of history would qualify as a deep critique of the prevailing view in X risk. There are also examples of the orthodoxy changing due to core EAs changing their minds, like switching to the high fidelity model, away from earning to give, towards longtermism, towards more policy.
Yessssssssss
Uh, do we? My sense is that democracies are often slow and that EA expert consensus has led rather than followed the democratic consensus over time. I might instead say "democratic decisions avoid awful outcomes". I'm not a big reader of papers, but my sense is that democracies avoid wars and famines but also waste a lot of time debating tax policy. I might suggest that EA should feel obliged to explain things to members, but that members shouldn't vote.
This, on the other hand, I do agree with—we could easily have a better sense of what the community thinks, what research could be done, etc. I guess I've run more polis polls than anyone else, but it's clunky and doesn't present a clear "what next".
I talked to the builders of pol.is over the weekend about this, because we were at a conference on collective decisionmaking, and Vitalik Buterin funded my dev agency to try to build pol.is for Manifold Markets and ForumMagnum (if the forum team accept). So it's early days, but we could see easier use here.
On Democratic Proposals—I think that more “Decision making based on democratic principles” is a good way of managing situations where power is distributed. In general, I think of democracy as “how to distribute power among a bunch of people”.
I’m much less convinced about it as a straightforward tool of better decision making.
I think things like Deliberative Democracy are interesting, but I don’t feel like I’ve seen many successes.
I know of very little use of these methods in startups, hedge funds, and other organizations that are generally incentivized to use the best decision making techniques.
To be clear, I’d still be interested in more experimentation around Deliberative Democracy methods for decision quality, it’s just that the area still seems very young and experimental to me.
Hi Ozzie, while I agree that there aren't many high-performing organizations which use democratic decision making, I believe Bridgewater Associates, the largest hedge fund in the world, does use such a system. They use a tool called the Dot Collector to gather real-time input from a wide base of employees and use that to come up with a "believability-weighted majority". The founder of the company, Ray Dalio, has said that he will generally defer to this vote even when he himself does not agree with the result. https://www.principles.com/principles/3290232e-6bca-4585-a4f6-66874aefce30/
So not as democratic as 1 person 1 vote but far more egalitarian than the average company (or EA for that matter).
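For concreteness, here is a minimal sketch of what a believability-weighted vote could look like. The voters, weights, and options are made up, and this only illustrates the general idea, not Bridgewater's actual Dot Collector logic:

```python
# Minimal sketch of a believability-weighted vote.
# All voters, weights, and options below are hypothetical.

def believability_weighted_majority(votes, believability):
    """votes: dict voter -> option; believability: dict voter -> weight."""
    totals = {}
    for voter, option in votes.items():
        totals[option] = totals.get(option, 0.0) + believability.get(voter, 0.0)
    winner = max(totals, key=totals.get)
    return winner, totals

votes = {"alice": "fund", "bob": "fund", "carol": "don't fund", "dan": "don't fund"}
believability = {"alice": 0.9, "bob": 0.3, "carol": 0.8, "dan": 0.7}

winner, totals = believability_weighted_majority(votes, believability)
print(winner, totals)  # "don't fund" wins roughly 1.5 to 1.2, despite a 2-2 headcount split
```

The interesting design choice is that the outcome can differ from a simple headcount, because the weights encode how much each person's judgement is trusted on the topic.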
Hi Ozzie,
Participedia is a great starting point for examples/success stories, as well as the RSA speech we linked.
Also this: https://direct.mit.edu/daed/article/146/3/28/27148/Twelve-Key-Findings-in-Deliberative-Democracy
And this: https://forum.effectivealtruism.org/posts/kCkd9Mia2EmbZ3A9c/deliberation-may-improve-decision-making
Thanks!
Hi Nathan,
If you’re interested in the performance of democratic decision-making methods then the Democratic Reason book is probably the best place to start!
Without wanting to play this entire post out in miniature, you’re telling me something I think probably isn’t true and then suggesting I read an entire book. I doubt I’m gonna do that.
https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#We_should_probably_read_more_widely
You generally read books to understand a thesis in more detail. If there were a few examples of notable organizations that used democratic decision-making to great effect, and someone wanted to learn from them, reading a book that gives more details would be a great idea. Reading a book just to see whether or not a thesis deserves more attention, on the other hand, makes less sense.
Just a small point here—quite a few of the links/citations in this post are to academic texts which are very expensive [1] (or cumulatively expensive, if you want to read more than a couple) unless you have access through a university/institution. While blogposts/google docs may have less rigour and review than academic papers, their comparative advantage is the speed with which they can be produced and iterated on.
If anything, developing some of the critiques above in more accessible blogposts would probably give more ‘social proof’ that EA views are more heterodox than it might seem at first (David Thorstad’s blog is a great example you link in this post). Though I do accept that current community culture may mean many people are, sadly but understandably, reluctant to do so openly.
This is just my impression after a quick first read, and could be unrepresentative. I definitely intend to read this post again in a lot more detail, and thanks again for the effort that you put into this.
Putting numbers on things is good.
We already do it internally, and surfacing that allows us to see what we already think. Though I agree that we can treat made-up numbers too seriously. A number isn't more accurate than "highly likely"; it's more precise. They can both just as easily be mistaken.
I have time for people who say that quantifying feels exhausting or arrogant, but I think those are the costs, to be weighed against the precision of using numbers.
As I wrote in another post, I support the use of numbers, but it’s clear to me that some EAs think that quantifying something automatically reduces uncertainty / bias / motivated reasoning.
Agree that the main benefit of quantification is precision, not accuracy.
Precision is only sometimes warranted, though. For the same reason that in science we never report numbers to a higher precision than we can actually measure, it is misleading to quantify things when you actually have no idea what the numbers are.
I disagree. I think words are often just as bad for this. So it’s not the fault of quantification but an issue with communication in general.
Good point!
In the self-evaluation of its mistakes, the US intelligence community came to the conclusion that the lack of quantification of the likelihood that Saddam didn't have WMDs was one of the reasons it messed up.
This led to forecasting tournaments, which in turn led to Tetlock's superforecasting work. I think the orthodox view in EA is that Tetlock's work is valuable and we should apply its insights.
Precisely!
The downvotes to comments like this are also bad practice IMO, separate from every other cultural practice raised in the discussion.
You can only numerically compare things that are linearly ordered. 2 is obviously bigger than 1. But you cannot say which of two lists of random numbers is bigger than the other; you would have to define your own set of comparison rules, out of a possibly infinite set of rules, to compare them and say which is bigger.
Suppose you have two apples: one is bigger and has more calories, but the other has higher nutrient density. How do you declare which is the better apple? At best the answer is contextual and at worst it's impossible to solve. You cannot recommend someone eat one apple over another without making qualitative decisions about their health. Even if you tried to put a number on their health, the problem would recursively cascade down to infinity. They may need more energy to get through the day and so should go for the bigger apple, but why prefer short-term energy over long-term nutritional health? Why prefer the short term over the long term in general? These are necessarily qualitative decisions that utilitarianism cannot solve, because you mathematically cannot decide between non-linearly ordered objects without forming some subjective, qualitative philosophy first.
So when we say ‘you can’t put a number on everything’, it isn’t just a platitude, it’s a fact of the universe, and denying that is like denying gravity.
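To make the incomparability point concrete, here is a minimal sketch with made-up attribute values: componentwise comparison of the two apples yields no winner, and a total ordering only appears once you commit to a weighting, which is exactly the subjective, qualitative step being described.

```python
# Minimal sketch of the incomparability point; the attribute values are invented.
apple_a = {"calories": 95, "nutrient_density": 0.6}   # bigger, more energy
apple_b = {"calories": 70, "nutrient_density": 0.9}   # denser in nutrients

def pareto_compare(x, y):
    """Return 'x', 'y', or 'incomparable' under componentwise dominance."""
    x_better = all(x[k] >= y[k] for k in x) and any(x[k] > y[k] for k in x)
    y_better = all(y[k] >= x[k] for k in x) and any(y[k] > x[k] for k in x)
    return "x" if x_better else "y" if y_better else "incomparable"

print(pareto_compare(apple_a, apple_b))  # incomparable: each wins on one attribute

# A total order only appears once you commit to weights, e.g. "I care a lot more
# about nutrient density today" -- a qualitative judgement in disguise.
weights = {"calories": 0.01, "nutrient_density": 2.0}
score = lambda apple: sum(weights[k] * apple[k] for k in apple)
print(score(apple_a), score(apple_b))  # roughly 2.15 vs 2.5
```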
I don’t understand this comment. People assign a number to consumer choices all the time, for example via the process of buying and selling things.
Now you can say prices are imperfect because of distributional concerns. But that is a specific tactical issue. Even after complete wealth redistribution, I expect market allocation of apples to be better than qualitative-philosophy-based allocation. But maybe this is a strawman and you're only comparing two forms of technocratic distribution from on high (which is closer to EA decisions, chickens don't participate in markets about their welfare)? But even then, numerical reasoning just seems much better for allocation than non-numerical reasoning. Specifically I would guess the distribution to look like markets > technocratic shadow markets > AI technocracy with ML optimized for preference elicitation > humans trying to do technocracy with numbers > humans trying to do technocracy without numbers.
This might just be my lack of physics knowledge speaking, but I think the ability to quantify the world is much more native to my experience than gravity is. Certainly it’s easier to imagine a universe without gravity than a universe where it’s impossible to assign numbers to some things.
(I think it’s reasonably likely I’m missing something, since this comment has upvotes and agreement and after several rereadings I still don’t get it).
I think this is a nitpick. In context, it’s not like I am arguing for saying “GiveWell is 6” I am arguing that “I’m 90% sure that GiveWell does fantastic work” is a reasonable thing to say. That provides room for a roughly linear ordering.
Wait… don't all consequentialist normative ethical theories have a subjective qualitative philosophy to them, namely what they hold to be valuable? (Otherwise I'm not sure what you mean by "subjective qualitative" here at all; a Google search gives me nothing for "subjective qualitative philosophy".)
Utilitarianism values happiness, so whichever apple consumption leads to more happiness and well-being is recommended.
Mohism values state welfare, so whichever apple consumption leads to better state welfare is recommended.
Christian situational ethics values love, so whichever apple consumption leads to more love in the world is recommended.
Intellectualism values knowledge, so whichever apple consumption leads to more knowledge is recommended.
Welfarism values economic well-being, so whichever apple consumption leads to more economic well-being or welfare is recommended.
Preference utilitarianism values preference satisfaction, so whichever apple consumption leads to the most overall preference satisfaction is recommended.
Utilitarianism is not and never has been just putting numbers on things. The numbers used are just instrumental to the end-goal of increasing whatever is valued. You might say "you can't put a number on happiness", to which I say we have proxies, and the numerical values of said proxies (e.g. calories and nutrient density[1]), when clearly reasoned on with the available evidence, are useful in giving us a clearer picture of which actions lead more to the end-goal of happiness maximization.
I kinda wanna push back here against what feels like a bizarre caricature stereotype of what it must mean to be a Utilitarian. You can be a diehard Utilitarian—live and abide by it—and do zero math, zero scary numbers, zero quantitative reasoning your whole life. All you do is vigorously try to increase happiness based on whatever qualitative reasoning you have to the best of your abilities. That and I suppose iterate on empirical evidence—which doesn’t have to include using numbers.
Useful numbers are placed on virtually all food items—and like most EA numbers they are estimations. But they are nonetheless useful if said numbers can be reasonably interpreted as good proxies for, or correlates of, what you value; i.e. they imperfectly provide us with a roughly linear ordering.
You should probably take out the claim that FLI offered $100k to a neo-Nazi group, as it doesn't seem to be true.
Thank you so much for writing this! I don’t have much of substance to add, but this is a great post and I agree with pretty much everything.
So I think this is true of Sam, and of ourselves, but I'm really convinced that Dustin defers to OpenPhil more than the other way around (see below). I guess I like Dustin, so I feel a loyalty to him that biases me.
I guess I am wary that I work on things I think are cool and interesting to me. Seems convenient.
I guess my main disagreement with this piece is that I think core EAs do a pretty good job:
Decisions generally seem pretty good to me (though I guess I am pretty close to the white-male-etc. archetype, but still). I think that community decisionmaking wouldn't have fixed FTX, the FLI issue, or the Bostrom email. In fact, the criticism of the CEA statement comes more from white-male EAs, I guess.
You want people empowered to take quick decisions who can be trusted based on their track record.
I wish there were more understanding of community opinion and more discussion with the community. I have long argued for a kind of virtual copy of many EA orgs that the community can discuss and criticise.
For the FLI issue, I think we can confidently say more democratic decision making would have helped. Most EAs would probably have thought we should avoid touching a neo-Nazi newspaper with a ten-foot pole.
Oh okay, but grant level democratisation is a huge step.
On the other hand, more decentralized grantmaking, i.e. giving money to more individuals to regrant, increases the risk of individuals funding really bad things (the unilateralist's curse). I suppose we could do something like give regranters money to regrant and allow a group of people (maybe all grantmakers and regranters together) to vote to veto grants, say with a limited number of veto votes per round, and possibly after flagging specific grants and debating them. However, this would increase the required grantmaking-related work, possibly significantly.
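As a rough illustration of the "limited veto votes per round" idea, here is a minimal sketch. The voters, grants, veto budget, and blocking threshold are all hypothetical, and a real process would obviously involve flagging and debate rather than just counting:

```python
# Minimal sketch of a veto round with a limited per-voter veto budget.
# All names, thresholds, and grants are invented for illustration only.

VETOES_PER_VOTER_PER_ROUND = 2   # assumed budget
VETOES_NEEDED_TO_BLOCK = 3       # assumed threshold

def run_veto_round(proposed_grants, veto_ballots):
    """proposed_grants: list of grant ids; veto_ballots: dict voter -> list of grant ids."""
    veto_counts = {g: 0 for g in proposed_grants}
    for voter, targets in veto_ballots.items():
        # Enforce the per-round veto budget by ignoring excess vetoes.
        for g in targets[:VETOES_PER_VOTER_PER_ROUND]:
            if g in veto_counts:
                veto_counts[g] += 1
    approved = [g for g in proposed_grants if veto_counts[g] < VETOES_NEEDED_TO_BLOCK]
    blocked = [g for g in proposed_grants if veto_counts[g] >= VETOES_NEEDED_TO_BLOCK]
    return approved, blocked

grants = ["grant_A", "grant_B", "grant_C"]
ballots = {"v1": ["grant_B"], "v2": ["grant_B", "grant_C"], "v3": ["grant_B"]}
print(run_veto_round(grants, ballots))  # (['grant_A', 'grant_C'], ['grant_B'])
```

The point of the limited budget is to blunt the unilateralist's curse in the other direction: no single person can block everything, but a grant that worries several people still gets stopped.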
I think some sort of community-involved decisionmaking could have reduced the risk of FLI. The community involvement could be on the process side instead of, or in addition to, the substantive side. Although there hasn’t been any answer on how much of a role the family member of FLI’s president played in the grant, the community could have pushed for adoption of strong rules surrounding conflict of interest.
Another model would be a veto jury empowered to nix tentatively approved grants that seemed poorly justified, too risky for expected benefit, or otherwise problematic. Even if the veto jury had missed the whole neo-nazi business, it very likely would have thrown this grant out for being terrible for other reasons.
I think that would hugely slow down the process.
I don’t think that is necessarily correct. We know the FLI grant was “approved” by September 7 and walked back sometime in November, so there was time for a veto-jury process without delaying FLI’s normal business process.
I am ordinarily envisioning a fairly limited role for a veto jury—the review would basically be for what US lawyers call “abuse of discretion” and might be called in less technical jargon a somewhat enhanced sanity check. Cf. Harman v. Apfel, 211 F.3d 1172, 1175 (9th Cir. 2000) (noting reversal under abuse of discretion standard is possible only “when the appellate court is convinced firmly that the reviewed decision lies beyond the pale of reasonable justification under the circumstances”).
It would not be an opportunity for the jury to comprehensively re-balance the case for and against funding the grant, or merely substitute its judgment for that of the grantmaker. Perhaps there would be defined circumstances in which veto juries would work with a less deferential standard of review, but it ordinarily should not take much time to determine that there was no abuse of discretion.
Why’s that bad?
Is it really that hard to think of reasons why a faster process may be better, ceteris paribus, than a slower process?
The “ceteris paribus” is the key part here, and I think in real life fast processes for deciding on huge sums of money tend to do much worse than slower ones.
If someone says that A is worse than B because it has a certain property C, you shouldn’t ask “Why is C bad?” if you are not disputing the badness of C. It would be much clearer to say, “I agree C is bad, but A has other properties that make it better than B on balance.”
Re: FTX—I'm not sure what would have "fixed FTX", but I did think there were decisions here that made an impact. I don't think EA is to blame for the criminal activities, but we did platform SBF a lot (and listened to him talk about things he had no idea about), and sort of had a personality cult around him (like we still do around Will, for example). You can see this from comments of people saying how disappointed they were and how much they had looked up to him.
So there are some cultural changes that would’ve had a chance of making it better—like putting less emphasis on individuals and more on communities; like leaving hard domain-knowledge questions to experts; like being wary of crypto, as a technology that involves a lot of immoral actions in practice from all big actors.
Re: Bostrom—Part of the reason that EA was so susceptible to backlash from his writings was that the movement made him a central figure despite his already fringe and alarming views, which IMO should have been a red flag even before we knew about the outright racism. And this, I think, is a result of lack of diversity in the movement, and even more so in decisionmaking circles.
It will take a while to break all of this down, but in the meantime, thank you so much for posting this. This level of introspection is much appreciated.
Most of the content here is amalgamated from winning entries in the EA criticism contest last year, and it rarely cites the original authors.
I strongly agree with this particular statement from the post, but have refrained from stating it publicly before out of concern that it would reduce my access to EA funding and spaces.
I’ve been surprised how many researchers, grant-makers, and community organizers around me do seem to interchange these things. For example, I recently was surprised to hear someone who controls relevant funding and community space access remark to a group “I rank [Researcher X] as an A-Tier researcher. I don’t actually know what they work on, but they just seem really smart.” I found this very epistemically concerning, but other people didn’t seem to.
I’d like to understand this reasoning better. Is there anyone who disagrees with the statement (aka, disagrees that these factors should be consciously separated) who could help me to understand their position?
I agree that it’s important to separate out all of these factors, but I think it’s totally reasonable for your assessment of some of these factors to update your assessment of others.
For example:
People who are “highly intelligent” are generally more suitable for projects/jobs/roles.
People who agree with the foundational claims underlying a theory of change are more suitable for projects/jobs/roles that are based on that theory of change.
I agree that this feels somewhat concerning; I’m not sure it’s an example of people failing to consciously separate these things though. Here’s how I feel about this kind of thing:
It’s totally reasonable to be more optimistic about someone’s research because they seem smart (even if you don’t know anything about the research).
In my experience, smart people have a pretty high rate of failing to do useful research (by researching in an IMO useless direction, or being unproductive), so I’d never be that confident in someone’s research direction just based on them seeming really smart, even if they were famously smart. (E.g. Scott Aaronson is famously brilliant and when I talk to him it’s obvious to me that he knows way more theoretical computer science than I do, but I definitely wouldn’t feel optimistic about his alignment research directions without knowing more about the situation.)
I think there is some risk of falling into echo chambers where lots of people say really positive things about someone's research without knowing anything about it. To prevent this, I think that when people are optimistic about someone's research because the person seems smart rather than because they've specifically evaluated the research, they should clearly say "I'm provisionally optimistic here because the person seems smart, but FWIW I have not actually looked at the research".
Thanks for the nuanced response. FWIW, this seems reasonable to me as well:
Separately, I think that people are sometimes overconfident in their assessment of some of these factors (e.g. intelligence), because they over-update on signals that seem particularly legible to them (e.g. math accolades), and that this can cause cascading issues with this line of reasoning. But that’s a distinct concern from the one I quoted from the post.
I’ve personally observed this as well; I’m glad to hear that other people have also come to this conclusion.
I think the key distinction here is between necessity and sufficiency. Intelligence is (at least above a certain threshold) necessary to do good technical research, but it isn't sufficient. Impressive quantitative achievements, like competing in the International Math Olympiad, are sufficient to demonstrate intelligence (again, above a certain threshold), but not necessary (most smart people don't compete in the IMO and, outside of specific prestigious academic institutions, haven't even heard of it). But mixing this up can lead to poor conclusions, like one I heard the other night: "Doing better technical research is easy; we just have to recruit the IMO winners!"
To strengthen your point—as an IMO medalist: IMO participation signifies some kind of intelligence for sure, and maybe even ability to do research in math (although I’ve had a professor in my math degree, also an IMO medalist, who disagreed), but I’m not convinced a lot of it transfers to any other kind of research.
Yeah, IMO medals definitely don't suffice, as far as I can tell, for me to think it's extremely likely someone will be good at doing research.
Thanks for posting this. I have lots of thoughts about lots of things that will take longer to think about, so I'll start with one of the easier questions.
Regarding peer review, you suggest
Have you had any interaction with the academic peer review system? Have you seen some of the stuff that passes through peer review? I'm in favour of scientific rigour, but I don't think peer review solves that. In reality, my impression is that academia relies as much on name recognition and informal consensus mechanisms as we (the blogpost community) do. The only reason academia has higher standards (in some fields) is that these fields are older and have developed a consensus around which methods are good enough and which are not.
I think peer review has the potential to be good. My impression is that it does really work in math, and that this has a lot to do with the fact that reviewers receive recognition. But in many other fields it mainly serves to slow down research publication, and to gatekeep papers that are not written in the right format or are not sufficiently interesting.
I did a PhD in theoretical physics and I was not impressed by the peer review responses I got on my paper. They were almost always very shallow comments, which is not surprising given that, at least in that corner of physics, peer review was unpaid and unrecognised work.
Can one just do that? Isn't it very hard to find a person who has the right expertise and who you can verify has the right expertise?
If you already did the work to make a high quality paper, then peer review probably won’t add much. But the point is actually to prevent poor quality, incorrect research from getting through, and to raise the quality of publications as a whole.
My PhD was in computational physics, and yeah, the peer review didn't add much to my papers. But because I knew it was there, I made sure to put in a ton of work to make sure they were error-free and every part of them was high quality. If I knew I could get the same reward of publication by being sloppy or lazy, I might be tempted to do that. I certainly put orders of magnitude more effort into my papers than I do into my blog posts.
I certainly don’t think peer review is perfect, or that every post should be peer reviewed or anything. But I think that research that has to pass that bar tends to be superior to work that doesn’t.
This comment is pretty long, but TLDR: peer review and academia have their own problems, some similar to EA, some not. Maybe a hybrid approach works, and maybe we should consult with people with expertise in social organisation of science.
To some extent I agree with this. Whilst I've been wanting more academic rigour in X-risk for a while, peer review is certainly no panacea, although I think it is probably better than the current culture of deferring to blog posts as much as we do.
I think you are right that traditional academia really has its problems, and name recognition is also still an issue (e.g. Nobel Prize winners are 70% more likely to get through peer review, etc.). Nonetheless, certainly in the field I have been in (solar geoengineering), name recognition and agreement with the "thought leaders" are definitely less incentivised than in EA.
One potential response is to think that a mix of peer review, the current EA culture, and commissioned reports is a good balance. We could set up an X-risk journal with editors and reviewers who are a) dedicated to pluralism and b) willing to publish things that are methodologically sound irrespective of result. Alternatively, a sort of open peer review system where pre-prints are published publicly, with reviewers' comments and then responses to these as well. However, for major decisions we could rely on reports written and investigated by a number of people. Open Phil have done this to an extent, but having broader panels etc. to do these reports may be much more useful. Certainly it's something to try.
I do think it's really difficult, though, but the current EA status quo is not working. Perhaps it would be good for EA to consult some thinkers on the social organisation of science to better design how we do these things, as there certainly are people with this expertise.
And it is definitely possible to commission structured expert elicitations, and is possible to directly fund specific bits of research.
Moreover, another thing about peer review is that it can sometimes be pretty important for policy. This is certainly the case in the climate change space, where you won't be incorporated into UN decision making and IPCC reports unless you're peer reviewed.
Finally, I think your point about "agreed-upon methods" is really good, and this is something I'm trying to work on in X-risk. I talk about this a little in my "Beyond Simple Existential Risk" talk, and am writing a paper with Anders Sandberg, SJ Beard and Adrian Currie on this at present. I'd be keen to hear your thoughts on this if you're interested!
Rethink Priorities occasionally pays non-EA subject matter experts (usually academics that are formally recognized by other academics as the relevant and authoritative subject-matter experts, but not always) to review some of our work. I think this is a good way of creating a peer review process without having to publish formally in journals. Though Rethink Priorities also occasionally publishes formally in journals.
Maybe more orgs should try to do that? (I think Open Phil and GiveWell do this as well.)
Since you liked that, though, let me think out loud a bit more.
I think it’s practically impossible to be rigorous without a paradigm.
Old sciences have paradigms and mostly work well, but the culture is not nice to people trying to form ideas outside the paradigm, because that is necessarily less rigorous. I remember some academic complaining about this on a podcast. They were taking a different approach within cognitive science and had problems with peer review because they were not focused enough on measuring the standard things.
On the other hand, there is EA/LW-style AI safety research, where everyone talks about how preparadigmatic we are. Vague speculative ideas, without inferential depth, get more appreciation and attention. By now there are a few paradigms, the clearest case being Vanessa's research, which almost no one understands. I think part of the reason her work is hard to understand is exactly because it is rigorous within-paradigm research. It's specific proofs within a specific framework. It has both more details and more prerequisites. While reading preparadigmatic blogposts is like reading the first intro chapter of a textbook (which is always less technical), the within-paradigm stuff is more like reading chapter 11, and you really have to have read the previous chapters, which makes it less accessible. Especially since no one has collected the previous chapters for you, and the person writing it is not selected for their pedagogical skills.
Research has to start as preparadigmatic. But I think that the dynamic described above makes it hard to move on, to pick some paradigm to explore and start working out the details. Maybe a field at some point needs to develop a culture of looking down on less rigorous work, for any rigorous work to really take hold? I'm really not sure. And I don't want to lose the explorative part of EA/LW-style AI safety research either. Possibly rigour will just develop naturally over time?
End of speculation
I think this is pretty interesting, and thanks for sharing your thoughts! There are things here I agree with, things I disagree with, and I might say more when I'm on my computer, not my phone! However, I'd love to call about this to talk more, and see
Is there a recording?
I’m always happy to offer my opinions.
here’s my email: linda.linsefors@gmail.com
There is; it should be on the CEA YouTube channel at some point. It is also a forum post: https://forum.effectivealtruism.org/posts/cXH2sG3taM5hKbiva/beyond-simple-existential-risk-survival-in-a-complex#:~:text=It sees the future as,perhaps at least as important.
FWIW I don’t think it’s usually very hard to find people with the right expertise. You’d just need to look for author names on a peer reviewed paper / look at who they cite / look at a university’s website / email a prof and ask them to refer you to an expert.
I liked this post, but I’m interested in hearing from someone who disagrees with the authors: Do you think it would be a bad idea to try these ideas, period? Or do you just object to overhauling existing EA institutions?
There were a few points at which the authors said “EA should...” and I’m wondering if it would be productive to replace that with “We, the authors of this post, are planning to… (and we want your feedback on our plans)”
I suppose the place to start would be with some sort of giving circle that operates according to the decisionmaking processes the authors advocate. I think this could generate a friendly and productive rivalry with EA Funds.
Implementing a lot of these ideas comes down to funding. The suggestions are either about distributing money (in which case you need money to hand out) or about things that will take a lot of work, in which case someone needs to be paid a salary.
I also noticed that one of the suggestions was to get funding from outside EA. I have no idea how to fundraise. But anyone who knows how to fundraise can just do that, and then use that money to start working down the list.
I don’t think any suggestion to democratise OpenPhil’s money will have any traction.
I think this hits the nail on the head. Funding is the issue, it always is.
One thing I’ve been thinking about recently is maybe we should break up OpenPhil, particularly the XRisk side (as they are basically the sole XRisk funder) . This is not because I think OpenPhil is not great (afaik they are one of the best philanthropic funds out there), but because having essentially a single funder dictate everything that gets funded in a field isn’t good, whether that funder is good or not. I wouldn’t trust myself to run such a funding boday either.
What would this mean exactly? I assume OpenPhil have already split up different types of funding between different teams of people. So what would it mean in practice to split up OpenPhil itself?
Making it into two legal entities? I don't think the number of legal entities matters.
Moving the teams working on different problems to different offices?
So OpenPhil is split into different teams, but I'll focus specifically on their grants in XRisk/Longtermism. OpenPhil, either directly or indirectly, is essentially the only major funder of XRisk. Most other funders essentially follow OpenPhil. Even though I think they are very competent, the fact that the field has one monolithic funder isn't great for diversity and creativity; certainly I've heard a philosopher of science describe XRisk as one of the most hierarchical fields they have seen, in large part due to this. OpenPhil/Dustin Moskovitz have assets. They could break up into a number of legal entities with their own assets, some overlapping on cause area (e.g. 2 or 3 XRisk funders). You would want them to be culturally different: work from different offices, have people with different approaches to XRisk, etc. This could really help reduce the hierarchy and lack of creativity in this field. Some other funding ideas/structures are discussed here: https://www.sciencedirect.com/science/article/abs/pii/S0039368117303278
Yep, that’s why I suggested starting with a giving circle :-)
Lots of people upvoted this post. Presumably some of them would be interested in joining.
My guess would be that if the authors start a giving circle and it acquires a strong reputation within the community for giving good grants, OpenPhil/Dustin Moskovitz will become interested.
Along the same lines: The authors recommend giving every user equal voting weight on the EA Forum. There is a subreddit for Effective Altruism which has this property. I’ll bet some of the authors of this post could become mods there if they wanted. Also, people could make posts on the subreddit and cross-post them here.
I agree that I would be massively more in favour of basically all of these proposals if they were proposed to be tried in parallel with, rather than instead of/"fixing", current EA approaches. Even the worst of them I'd very much welcome seeing tried.
Thanks for the time you’ve put into trying to improve EA, and it’s unfortunate that you feel the need to do so anonymously!
Below are some reactions, focused on points that you highlighted to me over email as sections you’d particularly appreciate my thoughts on.
On anonymity—as a funder, we need to make judgments about potential grantees, but want to do so in a way that doesn’t create perverse incentives. This section of an old Forum post summarizes how I try to reconcile these goals, and how I encourage others to. When evaluating potential grantees, we try to focus on what they’ve accomplished and what they’re proposing, without penalizing them for holding beliefs we don’t agree with.
I understand that it’s hard to trust someone to operate this way and not hold your beliefs against you; generally, if one wants to do work that’s only a fit for one source of funds (even if those funds run through a variety of mechanisms!), I’m (regretfully) sympathetic to feeling like the situation is quite fragile and calls for a lot of carefulness.
That said, for whatever it’s worth, I believe this sort of thing shouldn’t be a major concern w/r/t Open Philanthropy funding; “lack of output or proposals that fit our goals” seems like a much more likely reason not to be funded than “expressed opinions we disagree with.”
On conflicts of interest: with a relatively small number of people interested in EA overall, it doesn’t feel particularly surprising to me that there are a relatively small number of particularly prominent folks who are or have been involved in multiple of the top organizations. More specifically:
Since Open Philanthropy funds a large % of the orgs focused on our priority issues, it doesn’t seem surprising or concerning that many of the people who’ve spent some time working for Open Philanthropy have also spent some time working for Open Philanthropy grantees. I think it is generally common for funders to hire people who previously worked at their grantees, and in turn for ex-employees of funders to leave for jobs at grantees.
It doesn’t seem surprising or concerning that people who have written prominent books on EA-connected ideas have also helped build community infrastructure organizations such as the Centre for Effective Altruism.
To be clear, I think it’s important for conflicts of interest to be disclosed and handled appropriately, and there are some conflicts of interest that concern me for sure—I don’t at all mean to minimize the importance of conflicts of interest or potential concerns around them. I still thought it was worth sharing those reactions to the specific takes given in that section of the post.
On our concentration on a couple of existential risks: here I think we disagree. OP works on a wide variety of causes, but I don’t think we should be diversifying more than we are *within* existential risk given our picture of the size and neglectedness of the different risks.
On being in line with the interests of billionaires: I understand the misgivings that people have about EA being so reliant on a small number of funders, and address that point below. And I understand skepticism that funders who have made their wealth in the technology industry have only global impact in mind when they focus their philanthropy on technology issues. For what it’s worth, in the case of the particular billionaires I know best, Cari and Dustin were pretty emotionally reluctant to work on x-risks (and I was as well) - this felt at least to me like a case of them reluctantly concluding that these are important issues rather than coming in with pet causes.
On centralization of funding: I’m having trouble operationalizing the calls for less centralization of funding decision-making, which seems to be the main driver of much of your concerns. I agree that heavy concentration of funding for a given area brings some concerns that would be reduced if the same amount of funding were more spread out among funders; but I haven’t seen an alternative funding mechanism proposed that seems terribly promising.
I was broadly in sync with Dustin’s thoughts here, though not saying I’d endorse every word. I don’t see a good way to define the “members” of EA without keeping a lot of judgment/discretion over what counts (and thus keeping the concentration of power around), or eroding the line between EA and the broader world with its very different priorities. To me, it looks like EA is fundamentally a self-applied label for a bunch of individuals making decisions using an intellectual framework that’s both unusual and highly judgment-laden; I think there are good and bad things about this, but I haven’t seen a way to translate it into more systematic or democratic formal structures without losing those qualities.
I’m not confident here and don’t pretend to have thought it fully through. I remain interested in suggestions for approaches to spending Cari’s and Dustin’s capital that could improve how it’s spent—the more specific and mechanistic, the better.
What’s best for spending Cari and Dustin’s financial capital may not be what’s best for the human community made up of EAs. One could even argue that the human capital in the EA community is roughly on par with or even exceeds the value of Good Ventures’ capital. Just something to think about.
Do you have specific concerns about how the capital is spent? That is, are you dissatisfied and looking to address concerns that you have or to solve problems that you have identified?
I’m wondering about any overlap between your concerns and the OP’s.
I’d be glad for an answer or just a link to something written, if you have time.
Hi Holden, thanks for writing this up, but would it be possible for you to say something with a little bit more substance? At present it seems rather perfunctory and potentially a little insulting.
I’ve attempted to translate the comment above into a series of plain-English bottom-lines.
I apologise if the tone is a little forthright: a trade-off with clarity and intellectual honesty.
On anonymity
“Yeah I can see why there might seem to be a problem, and I promise that I am truly very sorry that you’re facing its consequences. In any case, I promise that everything is actually completely fine and you don’t need to worry! I acknowledge that (as you have already said) my promises don’t count for much here, but… trust me anyway! No, I will not take any notice of the specific issues you describe, nor the specific solutions that you propose.”
On conflicts of interest
“Here I will briefly describe some of the original causes of the problem. I personally think that it’s no big deal, and will not engage with any of the arguments or examples you provide. I promise we’re taking it really seriously, though.”
On focusing on a couple of existential risks (which is a gross simplification of the section I presume you’re responding to?)
“I personally think everything is fine, no I will not engage with any of the arguments or examples you provide.”
On being in line with the interests of billionaires
“I understand your concerns, but most of our tech billionaire donors changed their minds to fit the techno-political culture of Silicon Valley rather than starting off that way, and thus all incentive structures and cultural factors are completely irrelevant.”
On centralization of funding
“I perfunctorily agree that there is a problem, but I’m having trouble operationalizing the operational proposals you made. I will provide no specifics. I think membership-demarcation may be a problem, and will ignore your proposals for solving it.”
“By the way, would you mind doing even more unpaid work to flesh out specific mechanistic proposals, even though I, the person with the power to implement such proposals, just completely ignored them all in the sections I responded to?”
Despite my pre-existing intellectual respect for you, Holden, I really can’t escape reading this as a somewhat-more-socially-competent version of Buck’s response:
“We bosses know what we’re doing, you’re welcome to disagree if you want, but if you want to be listened to you need to do a bunch of unpaid work that we will probably completely ignore, and we most likely won’t listen to you at all anyway.”
This is what power does to your brain: you are only able to countenance posting empty EA-ified PR-speak like this because you are accountable only to a few personal friends that basically agree with you, and can thus get away with more or less ignoring external inputs.
Writing like this really reminds me of the bit about Interpretive Labour in Dead Zones of the Imagination:
Overwhelmingly one-sided social arrangements breed stupidity: by being in a position where you’re powerful enough to ignore people with other points of view, you become extremely bad at understanding them.
Thus, the oblivious bosses (egged on by mixed teams of true sycophants and power/money-seeking yes-men) continue doing whatever they want to do, and an invisible army of exhausted, exasperated, and powerless subordinates scramble to semi-successfully translate the whims of the bosses into bureaucratic justifications for doing the things that actually need to be done.
The bosses can always cook up some justification for why them being in charge is always the best way forward, and either never hear critiques because critics fear for their careers or, as seen here, lazily dismiss them without consequence.
Speaking as someone with a little experience in similar organisations and movements to this one that slowly lost their principles as they calcified into self-serving bureaucracies:
This is what it looks like.
We have warned A.C. Skraeling before about their behavior. “I will rephrase your statement as (insulting thing the person clearly didn’t say)” violates our norms. We are therefore issuing them a one-month ban.
Looping back some months later, FWIW while I disagree with most of the rest of the comment (and can see a case for a ban as a result), I quite appreciate the point about "interpretive labor", and I've found it an interesting/useful conceptual handle in my toolkit since reading it.
(This is a high bar as most EA Forum comments do not update me nearly as much).
This post has convinced me to stay in the EA community. If I could give all the votes I have given to my own writings to this post, I would. Many of the things in this post I’ve been saying for a long time (and have been downvoted for) so I’m happy to see that this post has at least a somewhat positive reaction.
To add to what this post outlines: while the social sciences are often ignored in the EA community, one notable exception to that is (orthodox) economics. I find it ironic that one of the few fields where EAs are willing to look outside their own insular culture is itself extremely insular. Other social studies like philosophy, political science, history, sociology, and gender studies all make a lot of attempts to integrate themselves with the other social sciences. This makes it so that learning about one discipline also teaches you about the other disciplines. Economists, meanwhile, have a tendency to see their discipline as better than the others, starting papers with things like:
In the paper "The Superiority of Economists" by Fourcade et al., economists were found to be the only group that thought interdisciplinary research was worse than research from a single field. Furthermore, they looked at top papers from political science, economics and sociology, and found that political science and sociology cited economics papers many times more than the other way around:
This lack of citing other social sciences was later confirmed by Angrist et al.:
Given the complex interdisciplinary nature of societal issues, studying the basics of economics might make you overconfident that you can solve societal problems.
Take, for example, supply and demand. The standard supply-and-demand model will tell you that having/increasing the minimum wage will automatically increase unemployment. But if we look at the actual empirical evidence, it shows us that it doesn't. Learning the basics of economics might mislead people about which policies will actually help people, and a more holistic look at the social sciences as a whole may counter that.
By focussing our recruitment on only a couple of disciplines, we inherit the problems of those disciplines. It is no wonder, then, that just like the EA community, economics is also very homogeneous. Bayer and Rouse show us that women are awarded fewer degrees in economics than in other disciplines:
Women also get only 13.7% of the authorship of economics papers. They are also less likely to get tenure in their first academic job compared to men, and face a lot of discrimination in general. An AEA survey found that half of women say they were treated unfairly because of their sex, and almost half say they've avoided conferences/seminars because of fear of harassment.
One of the only other social studies that is popular in EA is philosophy, which has similar issues with underrepresentation and discrimination.
For the longest time the EA topics page had a segment with ‘key figures’. This is obviously bad if you want to combat ‘hero worship’. My memory might be off but I seem to remember that the only social studies that were present were philosophy and economics, and practically everyone on that page was a man. [EDIT: Found an image, my memory was pretty close and in fact that page still exists! AND SBF WAS STILL ON IT!]
Even now we have a tag for philosophy and economics, but not for sociology, anthropology, political science, gender studies or many other social studies.
These are also the two social studies that the rationalists are the most interested in. The accusation of "bad epistemics" seems to coincide a lot with "non-rationalist epistemics". I've wanted to push back on the dominant rationalist framework for quite a while now, and one post in particular I wanted to write is an attack on scientific realism and a defense of social constructivism. I never wrote it because I feared it would get downvoted, nominally because it's not directly "effective altruism". But the hidden premises in how we research, how we categorize the world, and what gets to count as "genuine science" have a huge effect on what gets to count as "effective altruism".
Personally I think this problem boils down to how effective altruism represents itself, and how it is actually governed.
For instance, in my own case, I became interested in effective altruism and started getting more involved in the space with the idea that it was a loose collection of aligned, intelligent people who want to take on the world's most pressing problems. Over time, I've realized that, like the post mentions, effective altruism is in fact quite hierarchical and not disposed to giving people a voice based solely on the amount of time and effort they put into the movement.
Admittedly, this is a pretty naive view to take when going into any social movement.
While I am sympathetic to the arguments that a small, tightly knit group can get things done more quickly and more efficiently, there is a bit of a motte and bailey going on between the leadership of effective altruism and the recruiting efforts. From my perspective, a lot of new folks who join the movement are implicitly sold on a dream, commit time and energy, and then are not given the voice they feel they've earned.
Whether or not a more democratic system would be more effective, I still think many of the internal problems that have been surfacing recently would be fixed with better communication within effective altruism about how we make decisions and who has influence.
I can see why investing time and effort and then not receiving as much influence as you would like could be frustrating. At the same time, I guess I've always taken it for granted that I would need to be persuasive too. And sometimes I write things and people like it; other times I write things and people really don't. I sometimes feel that my ideas don't get as much attention as they should, but I imagine that most people think their ideas are great as well, so I guess I accept that if I had a biased view of how good my ideas are, it wouldn't necessarily feel that way. So I guess I'm suggesting that it might make sense to temper your expectations somewhat.
I definitely think that we should experiment with more ways of ensuring that the best ideas float to the top. I really appreciate the recent red-teaming competition and cause exploration competition; I think the AI worldviews competition is great as well. Obviously, these aren’t perfect, but we’re doing better than we were before and I expect we’ll do better as we iterate on these.
> I guess I’ve always taken it for granted that I would need to be persuasive too
I don’t mind having to be persuasive, my problem is that EA leadership is not available or open to hearing arguments. It doesn’t matter how persuasive one is if you can’t get into EAG, or break into the narrow social circles that the high-powered EAs hang out in.
Looking at Buck’s comment above, he makes it clear that the leadership doesn’t take EA forum comments or arguments here seriously, which is fair as they are busy.
I think we need better mechanisms to surface criticisms to decision makers overall.
I’m not sure I agree with this? I think “EA leadership” probably isn’t that open to arguments from unknown people. But if you show up and say sensible things you pretty quickly get listened to. I think that’s about as good as we can hope for: we can’t expect busy people to listen to every random piece of input; and it’s not that unreasonable to expect people to show up and do some good work before they get listened to.
I didn’t quite read him as saying that he didn’t take forum posts seriously, just that it wasn’t really written to engage people who disagreed with these ideas.
But we should definitely figure out if there are any other, better mechanisms for floating ideas to the top.
My take on Buck’s comment is that he didn’t update from this post because it’s too high level and doesn’t actually argue for most of its object level proposals. I have a similar reaction to Buck where I evaluate a lot of the proposals to be pretty bad, and since they haven’t argued much for them and I don’t feel much like arguing against them.
I think Buck was pretty helpful in saying (what I interpret to mean) “I would be able to reply more if you argued for object level suggestions and engaged more deeply with the suggestions you’re bringing”
EA seems to have a bit of a “not invented here” problem, of not taking onboard tried and tested mechanisms from other areas. E.g. with the boring standard conflict of interest and transparency mechanisms that are used by charitable organisations in developed countries.
Part of this seems to come from only accepting ideas framed in certain ways, and fitting cultural norms of existing members. (To frame it flippantly, if you proposed a decentralised blockchain based system for judging the competence of EA leaders you’d get lots of interest, but not if you suggested appointing non-EA external people to audit.)
There might be some value to posts taking existing good practices in other domains and presenting them in ways that are more palatable to the EA audience, though ideally you wouldn’t need to.
I agree, but I think this is a hard problem for everyone, right? I don't know that any community can just fix it.
Background
First, I want to say that I really like seeing criticism that’s well organized and presented like this. It’s often not fun to be criticized, but the much scarier thing is for no one to care in the first place.
This post was clearly a great deal of work, and I’m happy to see so many points organized and cited.
I obviously feel pretty bad about this situation, where several people all felt like they had to do this in secret in order to feel safe. I think tensions around these issues feel much more heated than I'd like them to. Most of the specific points and proposals seem like things that, in a slightly different world, all sides could feel much more chill discussing.
I’m personally in a weird position, where I don’t feel like one of the main EAs who make decisions (outside of maybe RP), but I’ve been around for a while and know some of them. I did some grantmaking, and now am working on an org that tries to help figure out how to improve community epistemics (QURI).
Some Quick Impressions
I think one big division I see in discussions like this is the one between:
What’s in the best interest of EA leadership/funding, conditional on them not dramatically changing their beliefs about key things (this might be very unlikely).
What’s an ~irreconcilable difference of opinion (even after a reasonable period of debate/investigation, say, a few days of solid reading).
Bucket 1 is more about convincing and informing each other. The way to make progress there is by deeply understanding those with power, and explaining how it helps their goals.
Bucket 2 is more about relative power. No two people are perfectly aligned, even after years of deliberation. Frustratingly, the main ways to make progress here are either to move power from some players to others, or to just make power moves (taking actions that help your interests relative to other stakeholders).
Right now, in EA, the vast majority of funding (and thus control) ultimately comes from one source. This is a really uncomfortable position, in many ways.
However, other members of the community clearly have some power. They could do some nice things like write friendly posts, or some not so nice things (think of strikes) like leaking information or complaining/ranting to antagonistic journalists.
I imagine that eventually we could find better ways to do group bargaining, like some sort of voting system (similar to what you recommend).
Back to this post: some of the way it is written reminds me of the “lists of demands” I’m used to seeing in fairly antagonistic negotiations, in the style of Bucket 2.
My guess is that this wasn’t your intention. Given that it’s so long (and must have involved a lot of coordination to write), I could definitely sympathize with “let’s just get it out there” instead of making sure its style is optimized for Bucket 1 (if that was your intention). That said, if I were a grantmaker now, I could easily see myself putting this in my “some PR fire to deal with” bucket rather than “some useful information for me to eventually spend time with”.
Do you think that group bargaining/voting in EA would be a good thing for funding/prioritization?
I personally like the current approach that has individual EAs and orgs make their own decisions on what is the best thing to do in the world.
For example, I would be unlikely to fund an organization that the majority of EAs in a vote believed should be funded, but I personally believed to be net harmful. Although if this situation were to occur, I would try to have some conversations about where the wild disagreement was stemming from.
I think there’s probably a bunch of different ways to incorporate voting. Many would be bad, some good.
Some types of things I could see being interesting:
Many EAs vote on “Community delegates” that have certain privileges around EA community decisions.
There could be certain funding groups that incorporate voting, roughly in proportion to the amounts donated. This would probably need some inside group to clear funding targets (making sure they don’t have any confidential baggage/risks) before getting proposed.
EAs vote directly on new potential EA Forum features / changes.
We focus more on community polling, and EA leaders pay attention to these. This is very soft, but could still be useful.
EAs vote on questions for EA leaders to answer, in yearly/regular events.
I’d be interested to see some of those tried for sure!
I imagine you’d also likely agree that these proposals trade off against everything else the EA orgs could be doing, and it’s not super clear any of them are the best option to pursue relative to other goals right now.
Of course. Very few proposals I come up with are a good idea for myself, let alone others, to really pursue.
I think I’m probably sympathetic to your claims in “EA is open to some kinds of critique, but not to others”, but I think it would be helpful for there to be some discussion around Scott Alexander’s post on EA criticism. In it, he argued that “EA is open to some kinds of critique, but not to others” was an inevitable “narrative beat”, and that “shallow” criticisms which actually focus on the more actionable implications hit closer to home and are more valuable.
I was primed to dismiss your claims on the basis of Scott Alexander’s arguments, but on closer consideration I suspect that might be too quick.
I feel it would be easier for me to judge this if someone (not necessarily the authors of this post) provided some examples of the sorts of deep critiques (e.g. by pointing to examples of deep critiques made of things other than EA). The examples of deep critiques given in the post did help with this, but it’s easier to triangulate what’s really meant when there are more examples.
I also remember Scott’s post, and already when reading it I thought the “next narrative beat” argument was bad.
The reason why it is the next narrative beat is that it is almost always true.
If I say that the sun will rise tomorrow, and you respond, “but you expect the sun to rise every day, you have to give a specific argument for this day in particular”, that doesn’t make sense.
I think it’s more or less true that “EA is open to some kinds of critique, but not to others”, but I don’t think the two categories exactly line up with deep vs shallow critique.
My current model is that powerful EAs are mostly not open to critique at all: they pretend to welcome it for PR reasons but mainly ignore it. As long as your critique is polite enough, everyone involved will pretend to appreciate it, but if you cross the line into hurting anyone’s feelings (which is individual and hard to predict) then there will be social and professional consequences.
My model might be completely wrong. It’s hard to know given the opaqueness around EA power. I have offered critique, and there has never been any dialogue or noticeable effect.
My own observation has been that people are open to intellectual discussion (your discounting formula is off for x reasons) but not to more concrete practical criticism, or criticism that talks about specific individuals.
That was also Scott Alexander’s point if I understood it correctly.
I don’t think that is correct, because the orthodoxy has changed when powerful EAs changed their minds: switching to the high-fidelity model, moving away from earning to give, towards longtermism, and towards more policy.
I think he’s arguing that you should have a little “fire alarm” in your head for when you’re regurgitating a narrative. Even if it’s 95% correct, that act of regurgitation is a time when you’re thinking less critically and it’s a perfect opportunity for error to slip through. Catching those errors has sufficiently high value that it’s worth taking the time to stop and assess, even if 19 out of 20 times you decide your first thought was correct.
As I and another said elsewhere, I think Holden’s is an example. And I think Will questioning the hinge of history would qualify as a deep critique of the prevailing view in X risk.
I think one of the reasons I loved this post is that my experience of reading it echoed, in an odd way, my own personal journey within EA. I remember thinking even at the start of EA that there was a lack of diversity and a struggle to accept “deep critiques”. Mostly this did not affect me – until I moved into an EA longtermist role a few years ago. Finding existing longtermist research to be lacking for the kind of work I was doing, I turned to the existing disciplines on risk (risk management, deep uncertainty, futures tools, etc). Next thing I knew, a disproportionately large amount of my time was being sunk into trying and failing to get EA thinkers and funders to take governance issues and those aforementioned risk disciplines seriously. Ultimately I gave up and ended up partly switching away from that kind of work. Yet despite all this I still find the EA community to be the best place for helping me mend the world.
I loved your post, but I want to push back on one thing – these problems are not only on the longtermist side of EA. Yes, neartermist EA is epistemically healthier (or at minimum currently having fewer scandals), but there are still problems, and we should still be self-reflective, looking to learn from posts like this and considering whether there are issues around: diversity of views, limited funding to high-impact areas due to over-centralisation, rejection of deep critiques, bad actors, and so on. As one example, consider the (extremely laudable) criticism contest from GiveWell, which focused heavily on looking at where their quantitative analyses might be 10% inaccurate, but not on finding ways to highlight where their approach might be fundamentally failing to make good decisions. [section edited]
PS. One extra idea for the idea list: run CEA (or other EA orgs) on a cooperative model where every donor/member gets a vote on key issues or leadership decisions.
Thanks for this thoughtful and excellently written post. I agree with the large majority of what you had to say, especially regarding collective vs. individual epistemics (and more generally on the importance of good institutions vs. individual behavior), as well as concerns about insularity, conflicts of interest, and underrating expertise and overrating “value alignment”. I have similarly been concerned about these issues for a long time, but especially concerned over the past year.
I am personally fairly disappointed by the extent to which many commenters seem to be dismissing the claims or disagreeing with them in broad strokes, as they generally seem true and important to me. I would value the opportunity to convince anyone in a position of authority in EA that these critiques are both correct and critical to address. I don’t read this forum often (was linked to this thread by a friend), but feel free to e-mail me (jacob.steinhardt@gmail.com) if you’re in this position and want to chat.
Also, to the anonymous authors, if there is some way I can support you please feel free to reach out (also via e-mail). I promise to preserve your anonymity.
Without defending all of the comments, I think some amount of “disagreeing . . . in broad strokes” is an inevitable consequence of publishing all of this at once. The post was the careful work of ten people over an extended period of time (most was written pre-FTX collapse). For individuals seeking to write something timely in response, broad strokes are unavoidable if one wants to address key themes instead of just one or two specific subsections.
I hope that, when ConcernedEAs re-post this in smaller chunks, there will be more specific responses from the community in at least some places.
Another option I like is to have the wiki include pages on “feminism”, “psychology” etc with summaries written in EA language of things people have found most valuable. I would read those.
You argue that funding is centralised much more than it appears. I find myself learning that this is the case more and more over time.
I suspect it probably is good to decentralise to some degree, however there is a very real downside to this:
some projects are dangerous and probably shouldn’t happen
the most dangerous of those are ones run by a charismatic leader and appear very good
if we have multiple funders who are not “informally centralised” (i.e. talking to each other) then there’s a risk that dangerous projects will have multiple bites at the cherry, and with enough different funders, someone will fund them
I appreciate that there are counters to this, and I’m not saying this is a slam-dunk argument against decentralisation.
One thing I think is that decentralised funding will probably make things like the FLI affair more likely. On the other hand, if this is happening already, and there are systematic biases anyway, and there is a reduction in creativity, it’s a risk I’m willing to take. Lottery funding and breaking up funders into a few more bodies (eg 5-10 rather than the same roughly 2 or so?) is what I’m most excited for, as they seem to reduce some of the risk whilst keeping a lot of the benefits.
As always, I’d say we should view things on a spectrum, and criticism of centralisation should be viewed as advocacy for less centralisation rather than rejecting centralisation entirely.
It seems that the weight of that downside would vary significantly by cause area.
I think this is a real problem, and I think the solution is more open discussion. Encourage people to publicise what projects they plan to do, and let anyone critique them in an open discussion. This will catch more problems and help improve projects.
Over-centralised funding has too many bad side effects. It’s not worth it.
I appreciate the extent of thoughtful consideration that has been put into this post. I looked at the list of proposed reforms to consider which ones I should implement in my (much smaller) organisation.
I currently find it very difficult to weigh the benefits and costs of this entire list. I understand that a post shouldn’t be expected to do everything at once. But I would really appreciate it if someone explained which of these specific policies are standard practices in other contexts like universities/political parties/other NGOs.
Small point:
> Finally, we ask that people upvote or downvote this post on the basis of whether they believe it to have made a useful contribution to the conversation, rather than whether they agree with all of our critiques.
I think this presents a false dilemma, and recommends what seems like an unusual standard that other posts aren’t held to.
“believe it to have made a useful contribution to the conversation” → This seems like arguably a really low bar to me. I think that many posts, even bad ones, did something useful to the conversation.
“whether they agree with all of our critiques.” → I never agree with all of basically any post.
I think that more fair standards of voting would be things more like:
“Do I generally agree with these arguments?”
“Do I think that this post, as a whole, is something I want community members to pay attention to, relative to other posts?”
Sadly we don’t yet have separate “vote vs. agreement” markers for posts, but I think those would be really useful here.
This is my first time seeing the “climate change and longtermism” report at that last link. Before having read it, I imagined the point of having a non-expert “value-aligned” longtermist applying their framework to climate change would be things like
a focus on the long-run effects of climate change
a focus on catastrophic scenarios that may be very unlikely but difficult to model or quantify
Instead, the report spends a lot of time on
recapitulation of consensus modeling (to be clear, this is a good thing that’s surprisingly hard to come by), which mainly goes out to 2100
plausible reasons models may be biased towards negative outcomes, particularly in the most likely scenarios
The two are interwoven, which weakens the report even as a critical literature review. When it comes to particular avenues for catastrophe, the analysis is often perfunctory and dismissive. It comes off less as a longtermist perspective on climate change than as having an insider evaluate the literature because only “we” can be trusted to reason well.
I don’t know how canonical that report has become. The reception in the thread where it was posted looks pretty critical, and I don’t mean to pile on. I’m commenting because this post links the report in a way that looks like a backhanded swipe, so once I read it myself I felt it was worth sketching out my reaction a bit further.
Context: I’ve worked in various roles at 80,000 Hours since 2014, and continue to support the team in a fairly minimal advisory role.
Views my own.
I agree that the heavy use of a poorly defined concept of “value alignment” has some major costs.
I’ve been moderately on the receiving end of this one. I think it’s due to some combination of:
I take Nietzsche seriously (as Derek Parfit did).
I have a strong intellectual immune system. This means it took me several years to get enthusiastically on board with utilitarianism, longtermism and AI safety as focus areas. There’s quite some variance on the speed with which key figures decide to take an argument at face value and deeply integrate it into their decision-making. I think variance on this dimension is good—as in any complex ecosystem, pace layers are important.
I insisted on working mostly remotely.
I’ve made a big effort to maintain an “FU money” relationship to the EA community, including a mostly non-EA friendship group.
I am more interested in “deep” criticism of EA than some of my peers. E.g. I tweet about Peter Thiel on death with dignity, Nietzsche on EA, and I think Derek Parfit made valuable contributions but was not one of the Greats.
Some of my object-level views have been quite different to those of my peers over the years. E.g. I’ve had reservations about the maximisation meme ever since I got involved, along the lines of those recently expressed by Holden.
I mostly quit 80,000 Hours in autumn 2015, mainly due to concerns about strategy, messaging and fit with my colleagues. I took on a 90% time role again in 2017, for roughly 4 years.
I have some beliefs and traits that some people find suggestive of a lack of moral seriousness (e.g. I’m into two-thirds utilitarianism; I don’t lose sleep over EA/LT concerns; I’m fairly normie by EA standards).
There are some advantages to the status quo, and I don’t have a positive proposal for improving this.
If someone at 80K or CEA wants to pay me to think about it for a day, I’d be up for that. Maybe I’ll do it anyway. I hesitate because I’m not sure how tractable this is, and I would not be surprised if, on further reflection, I came to think the status quo is roughly optimal at current margins.
Reminder for many people in this thread:
“Having a small clique of young white STEM grads creates tons of obvious blindspots and groupthink in EA, which is bad.”
is not the same belief as
“The STEM/techie/quantitative/utilitarian/Pareto’s-rule/Bayesian/”cold” cluster-of-approaches to EA, is bad.”
You can believe both. You can believe neither. You can believe just the first one. You can believe just the second one. They’re not the same belief.
I think the first one is probably true, but the second one is probably false.
Thinking the first belief is true, is nowhere near strong enough evidence to think the second one is also true.
(I responded to… a couple similar ideas here.)
This post is much too long and we’re all going to have trouble following the comments.
It would be much better to split this up and post as a series. Maybe do that, and replace this post with links to the series?
Yup, we’re going to split it into a sequence (I think it should be mentioned in the preamble?)
Thanks for all the care and effort which went into writing this!
At the same time, while reading, my reactions were most of the time “this seems a bit confused”, “this likely won’t help” or “this seems to miss the fact that there is someone somewhere close to the core EA orgs who understands the topic pretty well, and has a different opinion”.
Unfortunately, to illustrate this in detail for the whole post would be a project for …multiple weeks.
At the same time, I thought it could be useful to discuss at least one small part in detail, to illustrate what the actual in-the-details disagreement could look like.
I’ve decided to write a detailed response to a few paragraphs about rationality and Bayesianism. From my perspective this is not a cherry-picked part of the original text that is particularly wrong, but a part which seems representatively wrong/confused. I picked it for convenience, because I can argue for and reference this part particularly easily.
This seems like a pretty strange characterization. Even though I have participated in multiple CFAR events, teach various ‘rationality techniques’, and know a decent amount about Bayesian inference, I think this is misleading/confused.
What’s called in rationalist circles “Bayesian epistemology” is basically what’s the common understanding of the term:
- you don’t hold beliefs to be true or false, but have credences in them
- normatively, you should update those credences based on evidence; the proper rule for that is the Bayes rule (stated just below for reference); this is intractable in practice, so you do various types of approximations
- you should strive for coherent beliefs if you don’t want to be Dutch-booked
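For reference, the update rule being invoked is just standard probability theory (nothing specific to EA or to this post), in its plain form and in the equivalent odds form:

```latex
% Bayes' rule, plain form and odds form
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
\qquad\qquad
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
```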
It’s important to understand that in this frame it is a normative theory. Bayes’ theorem, in this perspective, is not some sort of “minor aid for doing some sort of likelihood calculation” but a formal foundation for a large part of epistemology.
The view that you believe different things to different degrees, and these credences basically are Bayesian probabilities and are normatively governed by the same theory isn’t an ‘EA’ or ‘rationalist’ thing but a standard Bayesian take. (cf Probability Theory: The Logic of Science, E. T. Jaynes)
Part of what Eliezer’s approach to ‘applied rationality’ aimed for is taking Bayesian epistemology seriously and applying this frame to improve everyday reasoning.
But this is almost never done by converting your implicit probability distributions to numerical credences, doing the explicit numerical math, and blindly trusting the result!
What’s done instead is
- noticing that your brain already uses credences and probabilities internally all the time. You can easily access your “internal” (/S1/...) probabilities in an intuitive way by asking yourself questions like “how surprised would you be to see [a pink car today | Donald Trump’s reelection | a SpaceX rocket landing in your backyard]” (the idea that brains do this is pretty mainstream, e.g. Confidence as Bayesian Probability: From Neural Origins to Behavior, Meyniel et al.)
- noticing your brain often clearly does something a bit similar to what Bayesians suggest as the normative idea - e.g. if two SpaceX rockets had already landed in your backyard today, you would be way less surprised by a third one
- noticing there is often a disconnect between this intuitive / internal / informal calculation, and the explicit, verbal reasoning (cf alief/belief)
- …and using all of that to improve both the “implicit” and “explicit” reasoning!
The actual ‘techniques’ derived from this are often implicit. For example, one actual technique is: imagine you are an alien who has landed in one of two worlds. They differ in that in one a proposition is true, and in the other the opposite is true. You ask yourself what the world would look like in each case, and then look at the actual world.
For example, consider the proposition “democratic voting is the optimal way to make decisions in organizations”. What would the world look like if this were true? There are parts of the world with intense competition between organizations, e.g. companies in highly competitive industries, optimizing for hard and measurable things. In the world where the proposition is true, I’d expect a lot of voting in these companies. We don’t see that, which decreases my credence in the proposition.
It is relatively easy to see how this is both connected to Bayes and not asking people to do any explicit odds multiplications.
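To make the connection explicit, here is a minimal numeric sketch of that move as an odds-form Bayes update. The prior and likelihood ratio are purely illustrative assumptions of mine, not numbers from the comment:

```python
# A minimal sketch of the "two worlds" move as an odds-form Bayes update.
# All numbers are illustrative assumptions, not taken from the comment above.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds * P(evidence | H) / P(evidence | not-H)."""
    return prior_odds * likelihood_ratio

# H: "democratic voting is the optimal way to make decisions in organizations"
prior_odds = 1.0           # start indifferent (1:1), purely for illustration

# Evidence: highly competitive firms almost never run on internal voting.
# Suppose this observation is five times likelier if H is false than if H is true.
likelihood_ratio = 1 / 5

posterior_odds = update_odds(prior_odds, likelihood_ratio)
posterior_probability = posterior_odds / (1 + posterior_odds)
print(f"posterior P(H) ≈ {posterior_probability:.2f}")  # ≈ 0.17
```

The point of the sketch is just that the “how would each world look?” question is doing the work of a likelihood ratio; no explicit multiplication is required in everyday use.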
While some people make the error of trying to replace complex implicit calculations with over-simplified spreadsheets of explicit numbers, this paragraph seems to conflate multiple things together as “numerical” or “quantitative”.
Assuming fairly standard cognitive science and neuroscience, at some level all thinking is “numerical”, including thinking that feels intuitive or qualitative. People usually express such thinking in words like “I strongly feel” or “I’m pretty confident”.
A classical rationalist move in such cases is to try to make the implicit explicit. E.g., if you are fairly confident, at what odds would you be willing to bet on it?
When done correctly, the ability and willingness to do that mostly exposes what’s already there. People already act on implicit credences and likelihoods, even if they don’t try to express them as probability distributions and you don’t have access to them.
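As a concrete illustration of the “what odds would you bet at” move (a sketch; the 80% figure is an arbitrary assumption of mine):

```python
# A sketch of turning a verbal confidence level into fair betting odds.
# The 0.8 figure is an arbitrary assumption for illustration.

def fair_odds(p: float) -> float:
    """Fair odds in favour of an event to which you assign probability p."""
    return p / (1 - p)

print(fair_odds(0.8))  # 4.0, i.e. "fairly confident" cashes out as roughly 4:1
```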
E.g., when some famous experts recommended a ‘herd immunity’ strategy to deal with covid, using strong and confident words, such recommendations were actually subjective best guesses with little empirical basis. The same is true for many expert opinions on policy topics!
The rationalist habit of reporting credences and predictions using numbers basically exposes many things to the possibility of being proved wrong, and exposes many personal best guesses for what they are.
Yes, for someone who isn’t used to this at all, it may create a fake aura of ‘certainty’, because in common communication the use of numbers often signals ‘this is more clear’ and the use of words signals ‘this is more slippery’. But this is just a communication protocol.
Yes, as I wrote before, some people may make the mistake of trying to convert some basic things to numbers and, in the next step, replace their brains with spreadsheets of Bayes formulas, but this does not seem common, at least in my social neighborhood.
I would be curious how the authors imagine non-Bayesian thinking that does not depend on any priors would internally work.
This seems confused (the “common response” mentioned below applies here exactly). How do you imagine, for example, that a group of people looking at a tree manages to agree on seeing a tree? The process of converting raw sensory data to the tree-hypothesis is way more complicated than a typical careful and rigorous scientific study, and also way more reliable than a typical published scientific study.
Again: correctly understood, the applied rationalist idea is not to replace our mind’s natural ways of recognizing a tree with a process where you would assign numbers to statements like “green in upper left part of visual field” and do an explicit Bayesian calculation in an S2 way, but just to be …less wrong.
I think the “common response” is partially misunderstood here. The common response does not imply you can consciously and explicitly multiply the large matrices or do the exact Bayesian inferences, any more than someone catching a ball would be consciously and explicitly solving the equations of motion.
The correct ideas here are:
- you can often make some parts or results of the implicit subconscious calculations explicit and numeric (cf forecasting, betting, …)
- the implicit reasoning is often biased and influenced by wishes and wants
- explicitly stating things or betting on things sometimes exposes the problems
- explicit reasoning can be good for that
- explicit reasoning is also good for understanding what the normatively good move is in simple or idealized cases
- on the other hand, explicit reasoning alone is computationally underpowered for almost anything beyond very simple models (compare how many FLOPs your brain uses vs. how fast you can explicitly multiply numbers)
- what you usually need to do is use both, and watch for flaws
Personally I don’t know anyone who would propose people do the “Individual Bayesian Thinking” mode of thought in the way you describe, and I don’t see much reason to run a study on this. Also, while a lot of people in EA orgs subscribe to basically Bayesian epistemology, I don’t know anyone who tries to live by “IBT”, so you should probably be less worried about the risks from its use.
So, to me, this is characteristic of the whole text, and frankly annoying. I don’t think you have properly engaged with Bayesian epistemology, state-of-the-art applied rationality practice, or the relevant cognitive science. “Critiqued on scientific grounds” sounds serious and authoritative … but where is the science?
This is both sad and funny. One of the good things about rationalist habits and techniques is that stating explicit numbers often allows one to spot and correct motivated reasoning. In relation to existential risk and similar domains, the hope is often that by practicing this in domains with good feedback, on bets which are possible to empirically evaluate, you get better at thinking clearly… and this will at least partially generalize to epistemically more challenging domains.
Yes, you can overdo it, or do stupid or straw versions of this. Yes, it is not perfect.
But what’s the alternative? Honestly, in my view, in many areas of expertise the alternative is to state views, claims and predictions in a sufficiently slippery and non-quantitative way that it is very difficult to clearly disprove them.
Take, for example, your text and claims about diversity. Given the way you are using the research, anyone trying to refute the advice on empirical grounds would have a really hard time, and you would always be able to write some story about why some dimension of diversity is not important, or why some other piece of research states something else. (It seems a common occurrence in the humanities that some confused ideas basically never die unless they lose support at the level of the ‘sociology of science’.)
Bottom line:
- these 8 paragraphs did not convince me about any mistake people at e.g. FHI may be making
- suggestion “Bayes’ theorem should be applied where it works” is pretty funny; I guess Bayesians wholeheartedly agree with this!
- suggestions like “studies of circumstances under which individual subjective Bayesian reasoning actually outperforms other modes of thought” seems irrelevant given the lack of understanding of actual ‘rationality techniques’
- we have real world evidence that getting better at some of the traditional “rationalist” skills makes you better at least at some measurable things, e.g. forecasting
I suspect that even what I see as a wrong model of the ‘errors of EA’ may point to some interesting evidence. For example, maybe some EA community builders are actually teaching “individual Bayesian thinking” as a technique you should use in the way described?
I agree that we shouldn’t pretend to be particularly good at self-criticism. I don’t think we are. We are good at updating numbers, but I have given criticism to orgs that wasn’t acted on for years before someone said it was a great idea. Honestly, I’d have preferred if they just told me they didn’t want criticism, rather than saying they did and then ignoring it.
I think EA is better than most movements at self criticism and engaging with criticism.
I think many EAs mistake this for meaning that EA is “good” at engaging with criticism.
I think EA is still very bad at engaging with criticism, but other movements are just worse.
I’ll add that EAs seem particularly bad at steelmanning criticisms.
(e.g. if a criticism doesn’t explicitly frame ideas on a spectrum and discuss trade-offs, the comments tend to view the ideas as black and white and reject the criticisms because they don’t like the other extreme of the spectrum)
By chance, can you suggest any communities that you think do a good job here?
I’m curious who we could learn from.
Or is it like, “EAs are bad, but so are most communities.” (This is my current guess at what I believe)
Good question.
The only other communities I know well are socialist + centre left political communities, who I think are worse than EA at engaging with criticism.
So I’d say EA is better than all communities that I know of at engaging with criticism, and is still pretty bad at it.
In terms of actionable suggestions, I’d say tone police a bit less, make sure you’re not making isolated demands for rigour, and make sure you’re steelmanning criticisms as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test.
Sorry yes, essentially “EAs are bad, but so are most communities.” But importantly we shouldn’t just settle for being bad, if we want to approximately do the most good possible, we should aim to be approximately perfect at things, not just better than others.
Thanks! I definitely agree that improvement would be really great.
If others reading this have suggestions of other community examples, that would also be appreciated!
It’s a shame that you feel you weren’t listened to.
However, in general I think we should be wary of concluding that the criticism was “ignored” just because people didn’t immediately do what you suggested.
If you ask for criticism, you’ll probably get dozens of pieces of criticism. It’s not possible to act on all of it, especially since some pieces contradict each other. Furthermore, it often takes time to process criticism. For example, someone criticises you because they think the world is like X and you think the world is like Y. You continue on your path, but over time you start to realise the evidence is pointing more towards the world being like X, and eventually you update to that model. Maybe that update would have taken longer if you hadn’t received the earlier feedback that someone thought the world looked like X, along with what to do if it did.
I think criticism is really complicated and multifaceted, and we have yet to develop nuanced takes on how it works and how to best use it. (I’ve been doing some thinking here).
I know that orgs do take some criticism/feedback very seriously (some get a lot of this!), and also get annoyed or ignore a lot of other criticism. (There’s a lot of bad stuff, and it’s hard to tell what has truth behind it).
One big challenge is that it’s pretty hard to do things. It’s easy to suggest “this org should do this neat project”, but orgs are often very limited in what they can do at all, let alone unusual things, or things they aren’t already thinking about and good at.
There’s definitely more learning to do here.
I find it very concerning/disappointing that this post has so many downvotes. Like the authors said, don’t use upvotes/downvotes to indicate agreement/disagreement!
I strongly upvoted this post because I think it’s very valuable and I highly appreciate the months of effort put into it. Thanks for writing it! I don’t know whether I agree or not on most proposals, but I think they’re interesting nonetheless.
Personally I’m most skeptical of some of the democratization proposals. Like how would you decide who can vote? And I think it would drastically slow down grant making etc., making us less effective. Others have already better worded these concerns elsewhere.
In general I would love to see most of these ideas being tried out, either incrementally so we can easily revert, or as small experiments - even ideas I’m only, say, 30% sure will work. If the ideas fail, then at least we’ll have that information.
I very much agree that we should rely more on other experts and try to reinvent the wheel less.
I had indeed never heard of the fields you mentioned before, which is sad.
I didn’t know your votes on the forum had more power the more karma you had! I’m really surprised; I was already wondering whether my strong votes had increased recently. I’m not sure I agree that strong votes should be removed: I do think they’re really useful for expressing how much I appreciate a certain post. But the fact that your strong votes keep becoming more powerful the more karma you have is weird. It does make sense to give longtime/experienced EAs more voting power to make sure the forum doesn’t become low-quality. But perhaps a new user’s strong vote could equal two votes, and the moment you get, say, 50-100 karma it equals 4-5 votes, without increasing further after that.
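A minimal sketch of the capped weighting being proposed here; the thresholds and weights come from the comment, while the exact cut-off and cap are my own illustrative guesses within the ranges mentioned:

```python
# A rough sketch of the capped strong-vote weighting proposed above.
# The 50-karma cut-off and the weights 2 and 5 are illustrative choices
# within the ranges the comment mentions (50-100 karma, 4-5 votes).

def strong_vote_weight(karma: int) -> int:
    if karma < 50:
        return 2  # new users
    return 5      # established users; no further growth beyond this cap

print([strong_vote_weight(k) for k in (0, 49, 50, 10_000)])  # [2, 2, 5, 5]
```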
I’m wondering how true the ‘heretics/critics don’t get funding’ claim is. Personally I aim to be honest/vocal about concerns I have, but since I don’t have highly valuable skills (according to the economic market), perhaps I should be more careful? I don’t want to be, and don’t feel like I should be, more careful! And I hope I can trust EA leaders when they say they want more diversity of thought.
I didn’t downvote this post (nor did I upvote it), but I can understand why people might have downvoted. I can also understand people upvoting it given the clear effort involved.
I imagine that the people who downvoted it dislike proposals that don’t engage very strongly with the reasons why the proposals could actually be bad. I see this as reasonable, although I do expect most people have some bias here where they apply this expectation more strongly to posts they disagree with. I expect that I probably have this bias as well.
This reads more like a list of suggestions than an argument. You’re making twenty or thirty points, but as far as I’ve read (I admit I didn’t get thru the whole thing) not giving any one of them the level of argumentation that would unpack the issue for a skeptic. There’s a lot that I could dispute, but I feel like arguing things here is a waste of time; it’s just not a good format to discuss so many issues in a single comment section.
I will mention one thing:
It’s one thing to argue that organizations function more efficiently when they are more diverse, or to say that it’s disproportionately important that we recruit different types of people, but rattling off a list of attributes to make a negative stereotype with a dumb joke at their expense makes it extremely difficult to take you seriously. When you write something like this, it is not obvious that you are doing so with inoffensive intentions, and even if you don’t have offensive intentions the propagation of negative stereotypes may cause harm all the same. If you actually believe in tolerance and equality then you would do well to take five minutes out of your day to clarify that there is nothing wrong with the things on your list, in individual or in aggregate, and that you value such people equally within the EA movement.
Also, given the current and irreversible trend towards capitalization of Black, it is prudent to capitalize White as well.
I would appreciate a TL;DR of this article, and I am sure many others would too! It helps me to decide if it’s worth spending more time digging into the content.
It was even too long for chatGPT to summarize 🫠
I second this.
FWIW I read from the beginning through What actually is “value-alignment”? then decided it wasn’t worth reading further and just skimmed a few more points and the conclusion section. I then read some comments.
IMO the parts of the post I did read weren’t worth reading for me, and I doubt they’re worth reading for most other Forum users as well. (I strong-downvoted the post to reflect this, though I’m late to the party, so my vote probably won’t have the same effect on readership as it would have if I had voted on it 13 days ago).
Thank you so much for writing this, it feels like a breath of fresh air. There are a huge number of points in here that I strongly agree with. A lot of them I was thinking of writing up into full length posts eventually (I still might for a few), but I was unsure if anyone would even listen. This must have taken an immense amount of effort and even emotional stress, and I think you should be proud.
I think if EA ever does break out of its current mess and fulfils its full potential as a broad, diverse movement, then posts like this are going to be the reason why.
(It feels important to disclaim before commenting that I’m not an EA, but am very interested in EA’s goal of doing good well.)
Thank you!! This post is a breath of fresh air and makes me feel hopeful about the movement (and I’m not just saying that because of the Chumbawamba break in the middle). In particular I appreciated the confirmation that I’m not the only person who has read some of the official EA writings regarding climate change and thought (paraphrasing here) ”?!!!!!!!!!”
I know this cannot have been trivial or relaxing to write. A huge thank you to the authors. I really hope that your suggestions are engaged with by the community with the respect that they deserve.
Bold type mine. No, I think peer review is cumbersome and that using it would slow work down a lot. Are there never mistakes in peer-reviewed science? No, there are. I think we should aim to build better systems of peer review.
My take is different. As a working scientist/engineer in the hard sciences, I use peer-reviewed research when possible, but I temper that with phone calls to companies, emails to other labs, posts on ResearchGate, informal conversations with colleagues, and of course my own experiments, mechanistic models, and critical thinking skills. Peer-reviewed research is nearly always my starting point because it’s typically more information and data-rich and specific than blog posts, and because the information I need is more often contained within peer-reviewed research than in blog posts.
That said, there are a lot of issues and concerns raised when blog posts are too heavily a source (although here that’s very much the pot calling the kettle black, with most of the footnotes being the author’s own unsourced personal opinions). When people lean too heavily on blog posts, it may illustrate that they’re unfamiliar with the scientific literature relevant to the issue, and that they themselves have mostly learned about the information by consuming other blog posts. Also, a compelling post that’s full of blog post links (or worse, unsourced claims) gives the interested reader little opportunity to check the underpinnings of the argument or get connected with working scientists in the field.
I’m fine with using the medium of blog posts to convey an idea, or of citing blog posts in specific circumstances. Where a peer-reviewed source is available, I think it’s better to either use that, or to cite it and give the blog post as an accessible alternative.
The question isn’t “are there zero mistakes”, the question is, “is peer reviewed research generally of higher quality than blogposts?”. To which the answer is obviously yes (at least in my opinion), although the peer review process is cumbersome and slow, and so will have less output and cover less area.
When there are both peer reviewed research and blogposts on a subject matter, I think the peer-reviewed research will be of higher quality and more correct a vast majority of the time.
Compared to EA blog posts weighted by karma? The answer is not obviously yes, in my opinion. I think we’ll fare better in the replication crisis.
Upvotes on an internet forum are not a good replacement for peer review. I’m surprised I even have to argue for this, but here goes:
The vast majority of people upvoting/downvoting are not experts in the topic of the blog post.
The vast majority of upvoting/downvoting occurs before a blogpost has been thoroughly checked for accuracy. If there’s a serious mistake in a blogpost and it’s not caught right away, almost no one will see it.
Upvoting/downvoting is mostly a response to the perceived effort of a post and to whether people personally agree with it.
Yes, peer review is flawed, but the response isn’t to revert to blogposts, it’s to build a better system.
And yet argue it you shall.
I think peer review is so poor that the forum alone probably produces work that is less in need of replication. I guess that’s not really about the system.
And yes, we should build a better system, but still. Peer review vs upvotes on journal sites, I would pick the latter.
Maybe we could discuss it in the comments of https://forum.effectivealtruism.org/topics/peer-review
Epistemic status: Hot take
To me, FTX hosting EA events and fellowships in the Bahamas reeked of neocolonialism (which is a word I don’t like to bandy about willy-nilly). 90.6% of The Bahamas’ population are Black,[1] whereas the majority of EAs are White.[2] Relative to many countries in the Americas, The Bahamas has a superficially strong economy with a GDP (PPP) per capita of $40,274, but a lot of this economic activity is due to tourism and offshore companies using it as a tax haven,[1] and it’s unclear to me how much of this prosperity actually trickles down to the majority of Bahamians. (According to UN data, The Bahamas has a Gini coefficient of 0.57, the highest in the Caribbean.[3]) Also, I’ve never heard anyone talk about recruiting Bahamians into the EA movement or EA orgs.
The Bahamas—Wikipedia
Every EA community survey ever, e.g. EA Survey 2020
Inequality in the Bahamas—Etonomics
I think it can sometimes feel a bit brutal to be downvoted with no explanation. I might say that the Bahamas was glad to have FTX there, and that it’s kind of patronising to deny them that opportunity because of their poverty and thereby make it worse, right?
I get the sense FTX was actually giving quite a lot to the Bahamas, though clearly not now, and it’s also unclear how much of that was corruption.
I disagree because I would only count something as neocolonialism if there was a strong argument that it was doing net harm to the local population in the interest of the ‘colonisers’.
I mean, it plausibly did cause net harm to the Bahamas in this case, even if that wasn’t what people expected.
It seems to me that you’d be better off arguing that an event in the Bahamas causes harms to Bahamians directly, instead of drawing an analogy with colonialism. See The noncentral fallacy—the worst argument in the world?
(I’m not trying to be dismissive—I think there are ways to make this argument, perhaps something like: “observing a retreat full of foreigners will cause Bahamians to experience resentment and a reduced sense of self-determination; those are unpleasant things to experience, and could also cause backlash against the EA movement”. My claim is just that talking about harms directly is a better starting point for discussion.)
I have lots of things I want to say, but I will not say it publicly, because I’m currently working for a project that is dependent on EA Funding. I can’t risk that. Even though I think this conversation is important, I think it is even more important that I can continue the work that I’m doing with that org, and similar orgs is similar situations. And I can’t post anonymously, because I can’t explain what I want to say without referring to specific anecdotes that will identify me.
My sense (from the votes on this post) is that most of these reforms are not broadly popular. Which, while it doesn’t undermine them in my opinion, does create a contradiction for the authors.
The authors believe that the EA Forum is profoundly antidemocratic (e.g. because of the karma weighting and selection effects of who is on the forum), so I don’t think they would consider upvotes to be particularly strong evidence of democratic will.
Yeah seems pretty accurate. I guess I agree with this like 70%.
Though these characteristics are over-represented in EA, I think one should be careful about claiming overall majorities. According to the 2020 EA survey, EA is 71% male and 76% white. I couldn’t quickly find the actual distribution of EA income, but eyeballing some graphs here and using $100,000 household income as a threshold (say $60,000 individual income) and a $600k household upper bound (upper class is roughly the top 1% of earners), I would estimate around one third of EAs would be upper middle class now. But I think your point was that they came from an upper-middle-class background, which I have not seen data on. I would still doubt it would be more than half of EAs, so let’s be generous and use that. Using your list of analytic philosophy, mathematics, computer science, or economics, that is about 53% of EAs (2017 data, so probably lower now). If these characteristics were all independent, their product would indicate that about 14% of EAs have all of them. There is likely positive correlation between these characteristics, but by definition, with the numbers above, the figure can’t exceed the 50% upper-middle-class share, even if everyone in that group happened to be male, white, and from those majors.
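For transparency, here is the back-of-the-envelope independence calculation referred to above, using the figures quoted in the comment (the 50% upper-middle-class share being the commenter’s generous guess):

```python
# Back-of-the-envelope independence calculation using the figures quoted above
# (2020 EA survey shares plus the commenter's generous 50% guess for
# upper-middle-class background).

male, white, upper_middle, listed_majors = 0.71, 0.76, 0.50, 0.53
print(round(male * white * upper_middle * listed_majors, 2))  # ≈ 0.14
```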
Quick thoughts on Effective Altruism and a response to Dr. Sarah Taber’s good thread calling EA a cult (which it functionally is).
I have believed in EA thinking for about 8 years, and continue to do so despite the scandals. HOWEVER, I agree fully with this part of “Doing EA Better” [2], which I’ll call “the critique” from now on: “Many beliefs accepted in EA are surprisingly poorly supported, and we ignore entire disciplines with extremely relevant and valuable insights”.
As a leftist who wants to see the end of capitalism, cops, climate change, wealth inequality, and the dooming of entire nations to death and despair to uphold a white supremacist order, I am not particularly attached to EA or its specific GiveWell top charities. I think EA works best when it’s only an attempt to be “less wrong”, and always iterating through a SCIENTIFIC process.
Without having spent much time in the weeds, I think there’s a strong moral case that a GiveWell cause (say, mosquito nets) is superior to St. Jude’s or the Red Cross. Keep in mind that EA is a small minority of all charitable giving, and all charitable mindshare (you’ve never seen a bell ringer for EA). The charities you see in public and are asked to contribute to at checkout are almost objectively less efficient at turning dollars into lives saved or vastly improved than charities that quietly work in the poorest parts of the world. EA is having a moral crisis in a way that the Catholic Church has never meaningfully had to face.
What’s odd is that EA ends up supporting billionaire philanthropy as a de facto good, rather than as the best option we have under a current system which we should also be trying to dismantle. For about a year now I’ve had a post kicking around in my head that there’s no EA interest in putting numerical bounds on the value of, for example, a strong tenant movement, the end of mass incarceration, a strong labor movement, the end of the drug war, or the end of war in general. EA is tainted with a centrist stink that looks at problems which are “controversial” in mass media and assumes they *must* be actually controversial (ex. Palestine-Israel). Getting more people involved in “doing good” seems like an obvious answer, yet most EA solutions seem to be power-flattering answers that want you to make a lot of money and donate it to EA, while making zero effort to convince your friends or society.
As someone with a statistics degree, EA falls for the same “data wonk” trap a lot of wonks fall into—difficulty of quantification leads you to focus on quantifiable areas, which are usually correlated with power and the status quo. EA is a very young movement, yet it lacks the historical knowledge to realize that people were once very sure the best thing to do with mentally unwell people was to lock them in asylums for life. (The critique calls these last two paragraphs “EA orthodoxy”.)
I’m going to skim through the critique and add stuff where I think it matters.
I’ve avoided participating in EA because I’m not at a point where I’m doing significant philanthropy, and the heavy vibe of all the posts in the community is that you need a bundle of keywords to participate. Words like epistemology, prior, Bayesian (I know this one, but seemingly not in the way people use it), utilitarian, objectively. I have a 4-year degree and can’t participate, so we know that around 90% of the world population cannot participate in EA forums. I feared getting torn to shreds for making the point which should be obvious: how do we know what 3 generations from now will want, let alone 3 million? I am especially unconvinced that intelligence in the universe is an automatic good; nor do I believe that 10^1000 intelligent minds is inherently better than 10^10.
Anyone who hasn’t seen bitcoin specifically and 99.9% of all crypto as a full stop ponzi scheme does not deserve the descriptor “rational”. SBF should’ve been viewed as a useful con artist’s mark; EA should’ve been happy to take his money but assumed it was going to collapse. There are dozens of very rational takedowns of crypto out there.
100% this: “At the very least, critics have learned to watch their tone at all costs, and provide a constant stream of unnecessary caveats and reassurances in order to not be labelled “emotional” or “overconfident”.” Centrists hate emotion, and if you’re emotional you have already lost the argument. I have a strong emotional response to people being locked in cages for stealing baby formula, and because of that I am a loser in the eyes of EA. (Does emotion not guide deworming initiatives? Or are EAs just happy to make a number go up? I can’t tell)
“When Stuart Russell argues that AI could pose an existential threat to humanity, he is held up as someone worth listening to –”He wrote the book on AI, you know!” However, if someone of comparable standing in Climatology or Earth-Systems Science, e.g. Tim Lenton or Johan Rockström, says the same for their field, they are ignored, or even pilloried.[39] Moderate statements from the IPCC are used to argue that climate change is “not an existential risk”, but given significant expert disagreement among experts on e.g. deep learning capabilities, it seems very unlikely that a consensus-based “Intergovernmental Panel on Artificial Intelligence” would take a stance anything like as extreme as that of most prominent EAs. This seems like a straightforward example of confirmation bias to us. ”
EAs care a lot about pandemic risk. They were proven right with COVID. What they are still unable to reckon with is the inability of the science to effect mass change. 5% of the world(?) has had COVID, we have no idea if that will lower those people’s life expectancies or QALYs but doctors say it probably will, it has conservatively caused $10 trillion in damage and killed ~7 million people, and it ruined most people’s entire 2020, if it doesn’t continue to ruin their lives on a day-to-day basis. And yet most of the western world has done very little to contain the disease, regardless of political affiliation or economic status. People called COVID a “baby pandemic” or “baby catastrophe”, a trial run for how humanity will deal with a much harder threat. We completely failed.
Let’s say you had a billion dollars to address “pandemic risk” in the world. Could you actually meaningfully reduce pandemic risk? I’m not sure you can; the reasons for this are heavily aligned with leftist thought, so of course the centrists in EA have no vocabulary to deal with them. (Things to consider are capitalism, white supremacy, the fact that the rich at Davos have rigorous covid mitigation in place and therefore do not care that the poor at Walmart do not, and that the rich control the mass media and their wealth depends on people continuing to shop...)
On twitter I follow the likes of health reporter Helen Branswell, and she retweets mainstream scientists from ex. Stanford, MIT, Harvard; and most health reporters, experts, and scientists cannot fathom the voluntary preventable mass death and disabling event that COVID continues to be. They were told, like many EA practitioners, that “politics” is bad, and we just need “data and science”, and fail to realize that even showing up to work is political, not saying “the government is a death cult” is political, not saying “the hospital I work at is forcing nurses to work 12 hour shifts with crappy COVID mitigation, to save the lives of patients who don’t believe they have COVID yet are infecting the same nurses, and these nurses deserve fair hazard pay for these horrible conditions” is political. If you worked remote for UPMC (largest hospital network in PA, US), you were offered a COVID vaccine before the contractor nurses and janitors who actually walked into COVID hospitals every day. This is a class issue, like it or not, and dumping a billion dollars into it won’t solve class.
One thing that’s been nipping at me that I haven’t seen anyone in EA say—fraud is not inherently bad. I would happily defraud a billionaire who made their fortune grinding up human babies. It seems like EA distanced themselves so thoroughly from SBF because it made the movement look bad and fraudulent, not that the actions were themselves bad. In SBF’s case, I don’t agree with defrauding regular people. But if some moronic 10 millionaire lost 5 of those millions by putting money into something that reeked of a ponzi scheme with no actual value, I’m not losing sleep over that. You could even argue it’s a net good if say, 50% of that ended up donated to EA, and that millionaire wouldn’t have donated it otherwise. I’m not sure if I will get banned for saying this, which is asinine in a movement which will readily praise you letting literal living humans today starve so that we can maybe grow more brains in a jar in 10 million years. If you ban me you’re literally admitting that stealing money is worse than letting someone die.
Most centrists believe whatever elites have defined as “crime” is bad, which makes them fit right into the status quo. Centrists don’t question why nobody is prosecuted for Clean Water Act violations or wealthy tax fraud. Every shoplifting prosecution should give you pause: “objectively”, the damage of elite crimes is far worse. https://www.yalelawjournal.org/forum/the-punishment-bureaucracy
I like the callout of “theories of change” and “Funding bodies should within 6 months publish lists of sources they will not accept money from, regardless of legality”. Poisonous funders are, IMO: any recreational drug company (drugs are fine but drug companies are not), fast food and hard-to-define junk food, advertising, policing/prosecution/jailing/surveillance and support thereof (Palantir and ICE), 99.9% of crypto (when a good crypto comes out, we’ll know), fossil fuels, cars, and probably a supermajority of animal-product companies (and I’m not vegetarian).
[1] https://twitter.com/sarahtaber_bww/status/1617194799261487108
[2] https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1
If you do get to writing the post you probably want to include that mass incarceration was something Open Phil looked into in detail and spent $130M on before deciding in 2021 that money to GiveWell top charities went farther. I’d be very interested to read the post!
Making money to donate hasn’t been a top recommendation within EA for about five years: it still makes sense for some people, but not most.
When you say “donating to EA” that’s ambiguous between “donating it to building the EA movement” and “donating it to charities that EAs think are doing a lot of good”. If you mean the latter I agree with you (e.g. see which donation opportunities GWWC marks as “top rated”).
When people go into this full time we tend to say they work in community building. But that implies more of an “our goal is to get people to become EAs” than is quite right—things like the 80k podcast are often more about spreading ideas than about growing the movement. And a lot of EAs do this individually as well: I’ve written hundreds of posts on EA that are mostly read by my friends, and had a lot of in-person conversations about the ideas.
Effectively addressing risk from future pandemics wouldn’t look like “spend a lot more money on the things we are already doing”. Instead it would be things like the projects listed in Concrete Biosecurity Projects (some of which could be big) or Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics. (Disclosure: I work for a project that’s on both those lists).
Personally my donations to deworming haven’t been guided by my emotional reaction to parasites. My emotions are just not a very good guide to what most needs doing! I’m emotionally similarly affected by harm to a thousand people as a million: my emotions just aren’t able to handle scale. Emotions matter for motivation, but they’re not much help (and can hurt) in prioritization.
You also write both:
And then later on:
These seem like they’re in conflict?
I think the best way to discuss these criticisms is on a case by case basis, so I’ve put a number of them in a list here:
https://forum.effectivealtruism.org/posts/fjaLYSNTggztom29c/possible-changes-to-ea-a-big-upvoted-list
“EA should cut down its overall level of tone/language policing”.
Strongly agree.
EAs should be more attentive to how motivated reasoning might affect tone / language policing.
You’re probably more likely to tone / language police criticism of EA rather than praise, and you’re probably less likely to seriously engage with the ideas in the criticism if you are tone / language policing.
I would like to see no change or a slight increase in our amount of overall tone policing. There might be specific forms of tone policing we should have less of, but in general I think civil discourse is one of the main things that make EA functional as a movement.
I agree. I think that it’s incredibly difficult to have civil conversations on the internet, especially about emotionally laden issues like morality/charity.
I feel bad when I write a snotty comment and that gets downvoted, and that has a real impact on me being more likely to write a kind argument in one direction rather than a quick zinger. I am honestly thankful for this feedback on not being a jerk.
I think the point regarding epistemics, namely that EA excessively focuses on the individual aspects of good epistemics rather than the group aspect, is a really good point which I have surprisingly never heard before.
In thinking about the democratization proposals: what role would the real people impacted by an organization’s decisions play? This is probably most relevant to global health & development, but it seems very strange to democratize to “effective altruists” without mentioning the people actually impacted by an EA organization’s work. I don’t think “democratizing” per se is likely to be the right way to involve them, but finding some way of gathering the insights and perspectives of the people impacted by each organization’s work would help with the expertise, epistemics, and power problems mentioned.
I disagree with this: I may have missed a section where you seriously engaged with the arguments in favor of the current karma-weighted vote system, but I think there are pretty strong benefits to a system that puts value on reputation. For example, it seems fairly reasonable that the views of someone who has >1000 karma are given more weight than those of someone who just created an account yesterday or who is a known troll with −300 karma.
I think there are some valid downsides to this approach, and perhaps it would be good to put a tighter limit on reputation weighting (e.g., no more than 4x weight), but “one person one vote” is a drastic rejection of the principle of reputation, and I’m disappointed with how little consideration was apparently given to the potential negatives of this reform / positives of the current system.
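To make that middle ground concrete, here is a minimal sketch of what a capped reputation weight could look like. The logarithmic scaling and the 4x cap are my own illustrative assumptions, not the Forum’s actual algorithm.

```python
import math

def vote_weight(karma: int, cap: float = 4.0) -> float:
    """Hypothetical capped reputation weight: grows slowly with karma,
    never exceeds `cap`, and never falls below a single vote."""
    if karma <= 0:
        return 1.0  # brand-new or negative-karma accounts count as one vote
    return min(cap, 1.0 + math.log10(1 + karma))

print(vote_weight(0))     # 1.0  (account created yesterday)
print(vote_weight(100))   # ~3.0
print(vote_weight(1000))  # 4.0  (hits the cap)
```

Under a scheme like this, long-standing contributors still count for more, but no single account can dominate a vote.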
Great fun post!
I read the whole post. Thanks for your work. It is extensive. I will revisit it. More than once. You cite a comment of mine, a listing of my cringy ideas. That’s fine, but my last name is spelled “Scales” not “Scale”. :)
About scout mindset and group epistemics in EA
No. Scout mindset is not an EA problem. Scout and soldier mindsets partition the space of mindsets and prioritize truth-seeking differently. To reject scout mindset is to accept soldier mindset.
Scout mindset is intellectual honesty. Soldier mindset is not. Intellectual honesty aids epistemic rationality. Individual epistemic rationality remains valuable. Whether in service of group epistemics or not. Scout mindset is a keeper. EA suffers soldier mindset, as you repeatedly identified but not by name. Soldier mindset hinders group epistemics.
We are lucky. Julia Galef has a “grab them by the lapel and shake them” interest in intellectual honesty. EA needs scout mindset.
Focus on scout mindset supports individual epistemics. Yes.
scout mindset
critical thinking skills
information access
research training
domain expertise
epistemic challenges
All those remain desirable.
Epistemic status
EAs support epistemic status announcements to serve group epistemics. Any thoughts on epistemic status? Did I miss that in your post?
Moral uncertainty
Moral uncertainty is not an everyday problem. Or remove selfish rationalizations. Then it won’t be. Or revisit the revised uncertainty, I suppose.
Integrity
Integrity combines:
intellectual honesty
introspective efficacy
interpersonal honesty
behavioral self-correction
assess->plan->act looping efficacy
Personal abilities bound those behaviors. So do situations. For example, constantly changing preconditions of actions bound integrity. Another bound is your interest in interpersonal honesty. It’s quite a lever to move yourself through life, but it can cost you.
Common-sense morality is deceptively simple
Common-sense morality? Not much eventually qualifies. Situations complicate action options. Beliefs complicate altruistic goals. Ignorance complicates option selection. Internal moral conflicts reveal selfish and altruistic values. Selfishness vs altruism is common-sense moral uncertainty.
Forum karma changes
Yes. Let’s see that work.
Allow alternate karma scoring. One person one vote. As a default setting.
Allow karma-ignoring display. On homepage. Of Posts. And latest comments. As a setting.
Allow hide all karma. As a setting.
Leave current settings as an alternate.
Diversifying funding sources and broader considerations
Tech could face lost profits in the near future. “Subprime Attention Crisis” by Tim Hwang suggests why. An unregulated ad bubble will gut Silicon Valley. KTLO (keeping the lights on) will cost more, percentage-wise. Money will flow to productivity growth without employment growth.
Explore income, savings, credit, bankruptcy and unemployment trends. Understand the implications. Consumer information will be increasingly worthless. The consumer class is shrinking. Covid’s UBI bumped up Tech and US consumer earnings temporarily. US poverty worsened. Economic figures now mute reality. Nevertheless, the US economic future trends negatively for the majority.
“Opportunity zones” will be a predictive indicator despite distorted economic data, if they ever become reality. There are earlier indicators. Discover some.
Financial bubbles will pop, plausibly simultaneously. Many projects will evaporate. Tech’s ad bubble will cost the industry a lot.
Conclusion
Thanks again for the post. I will explore the external links you gave.
I offered one suggestion (among others) in a red team last year: to prefer beliefs to credences. Bayesianism has a context alongside other inference methods. IBT seems unhelpful, however. It is what I advocate against, but I didn’t have a name for it.
Would improved appetite regulation, drug aversion, and kinesthetic homeostasis please our plausible ASI overlords? I wonder. How do you all feel about being averse to alcohol, disliking pot, and being indifferent to chocolate? The book “Sodium Hunger: The Search for a Salty Taste” reminds me that cravings can have a benefit in some contexts. However, drugs like alcohol, pot, and chocolate would plausibly get no ASI sympathy. Would the threat of an intelligent, benevolent ASI that takes away interest in popular drugs (e.g., through bodily control of us) be enough to halt AI development? Such a genuine threat might defeat the billionaire-aligned incentives behind AI development.
By the way, would EAs enjoy installing sewage and drinking water systems in small US towns 20–30 years from now? I am reminded of “The End of Work” by Jeremy Rifkin. Effective altruism will be needed from NGOs working in the US, I suspect.
I noticed this sentence in a footnote:
I don’t think this post would be very useful—see genetic fallacy.
“EAs should assume that power corrupts”—strongly agree.
Or even just that power doesn’t redeem. I think sometimes I’ve assumed a level of perfection from leaders that I wouldn’t expect from friends.
I perceive this as a very good and thoughtful collection of criticism and good ideas for reform. It’s also very long and dense and I’m not sure how to best interact with it.
As a general note, when evaluating the goodness of a pro-democratic reform in a non-governmental context, it’s important to have a good appreciation of why one has positive feelings/priors towards democracy. One really important aspect of democracy’s appeal in governmental contexts is that for most people, government is not really a thing you consent to, so it’s important that the governmental structure be fair and representative.
The EA community, in contrast, is something you have much more agency to choose to be a part of. This is not to say “if you don’t like the way things are, leave,”—I am definitely pro-criticism/feedback—it’s just important to avoid importing wholesale one’s feelings towards democracy in governmental settings vs. settings like EA where people have more agency/freedom to participate, especially since democratic decision-making does have many disadvantages.
If anything, we should be afraid of any tendency to stigmatize consulting outside experts; it would be preferable for all effective altruists to be wary of discouraging consultation with outside experts.
If you’re also reading the “diversify funding sources” recommendation and thinking BUT HOW?, then in a post where I make some similar suggestions, I suggest doing it via encouraging entrepreneurship-to-give:
https://forum.effectivealtruism.org/posts/SBSC8ZiTNwTM8Azue/a-libertarian-socialist-s-view-on-how-ea-can-improve
Of course, that would probably be quite a few years out even if 100 people left tomorrow to do Ent-TG, so the timelines in the proposal would have to be moved back considerably.
Yep I think the timeline in the proposal is unrealistic
Diversity is always a very interesting word, and it’s interesting that the call for more of it comes after two of the three scandals mentioned in the opening post were about EA being diverse along an axis that many EAs disagree with.
Similarly, it’s very strange that a post which talks a lot about the problems of EAs caring too much about other people being value-aligned then talks in the recommendations about how there should be more scrutiny of whether funders are aligned with certain ethical values.
This gives me the impression that the main thesis of the post is that EA values differ from woke values and should be changed to be more aligned with woke values.
The post doesn’t seem to have any self-awareness about pushing in different directions along these axes. If your goal is to convince people to think differently about diversity or about the importance of value alignment, it would make sense to make arguments that are more self-aware.
To me, this looks like it misunderstands why people hold the views that they do, and strawmans them.
Saying Stuart Russell is worth listening to because of his book boils down to “If you actually want to understand why AI is an existential threat to humanity, read his book; it’s likely to convince you”. On the other hand, Tim Lenton and Johan Rockström have not written books making arguments for the importance of climate change that seem convincing to many EAs.
When it comes to the topic of quantification, this post seems to criticize EAs both for quantifying everything and for not quantifying the value of paying community organizers relatively high salaries.
EAs seem to me very willing to do a lot of things, especially in the field of longtermism, without quantification being central. Generally, EA thought leaders don’t tend to hold naive positions on topics like diversity or quantification, but complex ones. If you want to change views, you need to be clearer about cruxes and about how you think about the underlying tradeoffs.
Apparently, the problem of reading too narrowly also applies to many scientific research fields. Derek Thompson writes:
Vis-à-vis peer review:
https://experimentalhistory.substack.com/p/the-rise-and-fall-of-peer-review
I appreciated “Some ideas we should probably pay more attention to”. I’d be pretty happy to see some more discussion about the specific disciplines mentioned in that section, and also suggestions of other disciplines which might have something to add.
Speaking as someone with an actuarial background, I’m very aware of the Solvency 2 regime, which makes insurers think about extreme/tail events which have a probability of 1-in-200 of occurring within the next year. Solvency 2 probably isn’t the most valuable item to add to that list; I’m sure there are many others.
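For anyone unfamiliar with the regime, here is a rough sketch of what that 1-in-200 threshold means numerically. The lognormal loss distribution is purely illustrative; real Solvency 2 internal models are far more elaborate.

```python
# Illustrative only: the "1-in-200 over one year" idea as a 99.5th-percentile loss.
import numpy as np

rng = np.random.default_rng(0)
annual_losses = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)  # assumed loss model

var_99_5 = np.percentile(annual_losses, 99.5)  # exceeded with probability ~1/200 per year
print(f"1-in-200 annual loss (99.5th percentile): {var_99_5:.2f}")
```

The regulatory question is then whether the insurer holds enough capital to absorb a loss of that size.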
Hi Sanjay, I’m actually working on a project on pluralism in XRisk and what fields may have something to add to the discussion. Would you be up for a chat, or could you put me in contact with people who would be up for a chat with me, about lessons that can be learned from actuarial studies / Solvency 2?
Yes, we can arrange via DM
Nathan’s commenting atomistically is frustrating, and I wish he’d put his points all in one comment.
[aside, made me chuckle]
This is an inevitable issue with the post being 70 pages long.
I think online discussions are more productive when it’s clear exactly what is being proposed as good/bad, so I appreciate you separately commenting on small segments (which can be addressed individually) rather than on the post as a whole.
Okay, but like they are all separate points. Putting them all into one comment means it’s much harder to signal that you like some but not others.
And likely yields less karma overall!
Sorry you think this is a play to get more karma?
No. I do think that combining the comments would yield less karma, which could be a bad thing and, in the spirit of this post, something in need of being done better; that wasn’t meant to say anything about your intentions. And I agree with your reply to your own comment, hence the “and”. I think what you say there is actually a very good reason, which also answers why I was reading all these distinct comments by you, which is in turn why I appreciated this one amongst them and responded. I’m sorry if it came across as an ad hominem attack instead! Best!
Thanks for the clarification :)
I was inspired by your post, and I wrote a post about one way I think grant-making could be less centralized and draw more on expertise. One commenter told me grant-making already makes use of more expert peer reviewers than I thought, but it sounds like there is much more room to move in that direction if grant-makers decide it is helpful.
https://forum.effectivealtruism.org/posts/fNuuzCLGr6BdiWH25/doing-ea-better-grant-makers-should-consider-grant-app-peer
I mostly agree with Buck’s comment and I think we should probably dedicate more time to this at EAGs than we have in the past (and probably some other events). I’m not sure what is the best format, but I think having conversations about it would allow us to feel it much more in our bones rather than just discussing it on the EA forum for a few days and mostly forget about it.
I believe that most of these in-person discussions have so far mostly happened at the EA Leaders Forum, so we should probably change that.
That said, I’m concerned that a lot of people will be scared to give their opinion on some things (for a variety of reasons). There are some benefits to doing this in person, but there are probably some benefits to making it anonymous too.
The right way to handle the suggested reforms section is to put them all as comments.
I will not be taking questions at this time. Sarcasm.
I’ll add that EAs seem particularly bad at steelmanning criticisms (e.g. if a criticism doesn’t explicitly frame ideas on a spectrum and discuss trade-offs, the comments tend to view the ideas as black and white and reject the criticisms because they don’t like the other extreme of the spectrum).
In the interests of taking your words to heart, I agree that EAs (and literally everyone) are bad at steelmanning criticisms.
However, I think that saying the ‘and literally everyone’ part out loud is important. Usually when people say ‘X is bad at Y’ they mean that X is worse than typical at Y. If I said, ‘Detroit-style pizza is unhealthy,’ then there is a Gricean implicature that Detroit-style pizza is less healthy than other pizzas. Otherwise, I should just say ‘pizza is unhealthy’.
Likewise, when you say ‘EAs seem particularly bad at steelmanning criticisms,’ the Gricean implication is that EAs are worse at this than average. In another thread above, you seemed to imply that you aren’t familiar with communities that are better at incorporating and steelmanning criticism (please correct me if I’m mistaken here).
There is an important difference between ‘everyone is bad at taking criticism’, ‘EAs and everyone else are bad at taking criticism’, and ‘EAs are bad at taking criticism’. The first two statements imply that this is a widespread problem that we’ll have to work hard to address, as the default is getting it wrong. The last statement implies that we are making a surprising mistake, and that it should be comparatively easy to fix (as others are doing better than us).
I don’t generally like steelmanning, for reasons that this blog post does a decent job of summarizing. When folks read what I write, I’d rather that they assume that I thought about using a weaker or stronger version of a statement, and instead went with the strength I did because I believe it to be true.
If an issue is framed as black or white, and I believe it to be grey, then I assume we have a disagreement. I try to assume that if an author decided to frame an issue in a particular way, it’s because that’s what they believe to be true.
Apologies, I don’t mean to imply that EA is unique in getting things wrong / being bad at steelmanning. Agree that the “and everyone else” part is important for clarity.
I think whether steelmanning makes sense depends on your immediate goal when reading things.
If the immediate goal is to improve the accuracy of your beliefs and work out how you can have more impact, then I think steelmanning makes sense.
If the immediate goal is to offer useful feedback to the author and better understand the author’s view, steelmanning isn’t a good idea.
There is a place for both of these goals, and importantly the second goal can be a means to achieving the first goal, but generally I think it makes sense for EAs to prioritise the first goal over the second.
Thanks, I think this is an excellent response and I agree both are important goals.
I’m curious to learn more about why you think that steelmanning is good for improving one’s beliefs/impact. It seems to me that that would be true if you believe yourself to be much more likely to be correct than the author of a post. Otherwise, it seems that trying to understand their original argument is better than trying to steelman it.
I could see that perhaps you should try to do both (i.e., consider both the author’s literal intent and whether they are directionally correct)?
[EDIT: I’m particularly curious because I think that my current understanding seems to imply that steelmanning like this would be hubristic, and I think that probably that’s not what you’re going for. So almost certainly I’m missing some piece of what you’re saying!]
I find writing pretty hard and I imagine it was quite a task to compile all of these thoughts, thanks for doing that.
I only read the very first section (on epistemic health) but I found it pretty confusing. I did try and find explanations in the rest of the epistemics section.
The footnotes and sources that you linked to don’t give me much evidence to update towards your position, and saying “the science says x” (at least to me) implies that there is some kind of consensus view within the literature, which I think you should be able to point to. This reads more like your hot takes than something you have thought about deeply. Superforecasting (footnote 4) does talk a bit about prediction markets, but much more of the book is focused on how a few people with certain traits can beat most people at forecasting, which I think runs counter to the point you are making, so it seems misleading to me to link to it as if it supports your view.
I think it can be fine to give hot takes but I feel like the general vibe of the post was trying to persuade me rather than explain your view. Things that might have helped are focusing on a smaller set of points and trying to make a more rigorous case for them or communicating that you are not very confident in many of the key points if that is the case. I also felt like you were trying to make a claim that the ‘science’ supports your view—which based on the sources you linked to is really hard to verify.
I don’t think everything you wrote was clearly incorrect but in my view you made strong claims without demonstrating appropriate epistemic rigour.
Hi Caleb,
Our two main references are the Yang & Sandberg paper and the Critchlow book, both of which act as accessible summaries of the collective intelligence literature.
They’re linked just a little after the paragraph you quoted.
I think my issues with this response and linking to that paper are better explained by looking at this post from SSC (beware the man of one study). To be clear I think we can learn things from the sources you linked—my issue is with the (imo) overconfidence and claims about what “the science” says.
I haven’t been around here for long, but is this the record for most comments on a post? Must be close....
It’s a lot of comments, but there’s the collection of comments on this post, for instance, and likely others.
There’s a contradiction between this:
>If you believe in this community, you should believe in its ability to make its own decisions.
and these:
>Diverse communities are typically much better at accurately analysing …
>The EA community is notoriously homogenous, and the “average EA” is extremely easy to imagine: he is a white male in his twenties or thirties from
I’m all for transparency, but it’s not clear to me that normal “democracy”, if it means “equal voting from the current EA constituency”, is likely to make an improvement: as with some primary voting systems, it may lead to an even more narrow leadership culture.
A diverse representative “council” might be better, or something like citizens’ juries and people’s assemblies. Holacracy and Sociocracy would be well worth looking at, along with systemic consensing for ‘minor’, ‘low-consequence’, or short-term decisions.
Participatory Learning and Action, participatory budgeting and participatory video are also very useful.
Indeed, only the Sith deal in absolutes.
Never in all my life have I seen someone go to so much trouble to disguise bad-faith criticism as good-faith criticism.
One of the things that this post has most clearly demonstrated is that EA has a terrible habit of liking, praising, and selectively reading really long posts to compensate for not reading most of them.
And as a result of not actually reading the entire thing, they never actually see how whacked the thing is that they just upvoted.
I think that we should expect diversity if our hiring practices are fair, so the fact that we don’t see it suggests we need to do more work. I sense that diversity leads to better decision-making, and that’s what we should want. I am somewhat against “selecting for diversity”, though open to the discussion.
What do you mean by “fair” in this context?
If we adopt some sort of meritocratic system based on knowledge/ability (which many would see as “fair”) I don’t think this would lead to very diverse hiring. I think it would lead to the hiring we currently see and which OP doesn’t like.
I tentatively think the only way to get diversity is to select for diversity. Diversity does have some instrumental benefit so not saying we shouldn’t do this, at least to some extent.
Having some expertise in complex systems (several certifications from the Santa Fe Institute) and also in deliberative democracy/collective intelligence, I can fully support what the authors of this post say about EA’s shortcomings in these areas. (I agree with most of the other points also.) The EA community would do well to put its most epistemically humble hat on and try to take these well-meant, highly articulate criticisms on board.
If anyone still thinks effective altruism is above conflicts of interest, I have an NFT of the Brooklyn Bridge to sell u, hmu on FTX if u r interested.
Yeah, seems right.
Lol, rekd.
This is funny and hits close to home, at least to me. Well written. Though my name isn’t Sam, it’s .. a different Old Testament prophet.
I have produced an answer to this post here:
https://forum.effectivealtruism.org/posts/KjahfX4vCbWWvgnf7/effective-altruism-governance-still-a-non-issue
Kind regards,
Arturo
The list of proposed solutions here are pretty illustrative of the Pareto Principle: 80% of the value comes from 20% of the proposed solutions.
Just saw the karma drop from 50 to 37 in one vote.
This section seems particularly important.
That doesn’t seem possible from one person; the max strong-upvote strength at the moment is 9. There’s no downvote/upvote combination that leads to a difference of 13 from the change of one vote (though it would be possible for someone to strongly upvote and then change their vote to a strong downvote, for a karma drop of 14, 16, or 18). A more likely explanation is that multiple people downvoted (while one fewer retracted their upvotes) within the time you were refreshing, or some kind of bug on the Forum.
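For what it’s worth, here is a quick sketch of that arithmetic under the assumptions in the comment above (a single voter with a normal-vote strength of 1–3 and a strong-vote strength of up to 9; the Forum’s real weights may differ):

```python
# Enumerate the karma change one voter can cause by casting, retracting,
# or flipping a single vote, for assumed normal strength w and strong strength s.
deltas = set()
for w in range(1, 4):        # normal vote worth 1-3 (assumption)
    for s in range(1, 10):   # strong vote worth 1-9 (assumption)
        deltas |= {w, -w, s, -s,                   # cast or retract one vote
                   2 * w, -2 * w, 2 * s, -2 * s,   # flip a vote to its opposite
                   w + s, -(w + s)}                # switch between normal and strong, changing direction
print(-13 in deltas)  # False: a 13-point drop needs more than one voter (or a bug)
```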
The r/SneerClub response is interesting
https://www.reddit.com/r/SneerClub/comments/10ent65/saw_this_and_thought_of_you_guys/
What do you like about it?
Yes!
The Effective Altruism movement is, in origin and at present, on a mission to benefit the powerless using the tools of the powerful; an injection of genuine compassion into the machinery of Capitalist Modernity.
It thus has precisely the advantages and limitations that you would expect. It is a truly impressive engine for the operationalisation of altruism, distributing malaria nets and deworming drugs with astonishing efficiency and effectiveness. Yet, it cannot conceive of solving problems rather than treating their symptoms, acts with the self-assured entitlement of a colonial administrator, and can never quite escape the stony gaze of the techno-modernist Leviathan.
At least it matches our Basilisk.
I see the EA Forum criticism-downvoters are out in force already
DO NOT EMAIL THIS ADDRESS WITH YOUR MAIN EMAIL ACCOUNT.
This is a recently-created account on the Forum and is reasonably likely to be a sockpuppet account run by Emile Torres, who has a history of dishonesty, harassment, and infiltrating networks in order to make false accusations against people. Even if you are not aware of or worried about the risks, other people in your network might be.
The standard and obvious practice here is to use a throwaway email address to contact the email address listed here, or to contact them via the forum’s DM service.
I hate to say it, but I’m really quite sure Emile wouldn’t write a critique like this; it really doesn’t read at all like them. They also have a knack for being very public when writing critiques, and even for telling people about them in advance. Their primary audience is certainly not EA. But I agree that if people are worried then using a throwaway email is good practice!
This is actually not true at all. Emile Torres’s sockpuppet accounts on EAforum behave very differently from Emile Torres’s twitter account, and their primary audience in this mode is very much EA.
People should always use a throwaway email when communicating with an anonymous account.
Some of us actually had a little bet about how long it would be until someone accused us of being Emile Torres in disguise.
If you are an actual anonymous committee of 10 people like you claim, please delete the anonymous email address prominently displayed at the bottom of this post. Mining EAforum for people’s contact information is behavior that is virtually identical to Emile Torres’s recent stalking activity, and contacting people through DMs instead of mining for personal email addresses is the best way to verify that this post was made in good faith.
Nah, but I reckon I can guess a few.
one zillion words of EA critique, loads of it about conflicts of interest, and still no one mentions polyamory or amphetamines! This above all else really demonstrates the incredible strength of cultural taboos within EA.