I appreciate you taking the effort to write this. However, like other commentators I feel that if these proposals were implemented, EA would just become the same as many other left wing social movements, and, as far as I can tell, would basically become the same as standard forms of left wing environmentalism which are already a live option for people with this type of outlook, and get far more resources than EA ever has. I also think many of the proposals here have been rejected for good reason, and that some of the key arguments are weak.
You begin by citing the Cowen quote that “EAs couldn’t see the existential risk to FTX even though they focus on existential risk”. I think this is one of the more daft points made by a serious person on the FTX crash. Although the words ‘existential risk’ are the same here, they have completely different meanings, one being about the extinction of all humanity or things roughly as bad as that, and the other being about risks to a particular organisation. The problem with FTX is that there wasn’t enough attention to existential risks to FTX and the implications this would have for EA. In contrast, EAs have put umpteen person hours into assessing existential risks to humanity and the epistemic standards used to do that are completely different to those used to assess FTX.
You cite research purporting to show that diversity of some form is good for collective epistemics and general performance. I haven’t read the book that you cite, but I have looked into some of this literature, and as one might expect for a topic that is so politically charged, a lot of the literature is not good, and some of the literature actually points in the opposite direction, even though it is career suicide to criticise diversity, and there are likely personal costs even for me discussing counter-arguments here. For example, this paper suggests that group performance is mainly determined by the individual intelligence of the group members not by things like gender diversity. This paper lists various costs of team diversity that are bad for collective dynamics. You say that diversity “essentially along all dimensions” is good for epistemics. This is the sort of claim that sounds good, but also seems to be clearly false. I seldom see people who make this argument suggest that we need more Trump supporters, religious fundamentalists, homophobes or people without formal education in order to improve our performance as a community. These are all huge chunks of the national/global community but also massively underrepresented in EA. There are lots of communities that are much more diverse than EA but which also seem to have far worse epistemics than EA. Examples include Catholicism, Trumpism, environmentalism, support of Bolsonaro/Modi etc.
Relatedly, I think value alignment is very important. I have worked in organisations with a mix of EA and non-EA people and it definitely made things much harder than if everyone were aligned, holding other things equal. On one level, it is not surprising that a movement trying to achieve something would agree not just at a very abstract level, but also about many concrete things about the world. If I think that stopping AI progress is good and you think it is bad, it is going to be much harder (though not impossible, per moral trade) for us to achieve things in the world. Same for speeding up progress in virus synthesis. The 80,000 Hours articles on goal-directed groups are very good on this.
I don’t agree that EA is hostile to criticism. In fact it seems unusually open to criticism, and to rational discussion of ideas rather than dismissing them on the basis of vibe/mood affiliation/political amenability. Aside from the controversial case of Cremer and Kemp (who didn’t publish pseudonymously), what are the major critiques that have been presented pseudonymously or have caused serious personal consequences for the critics? By your definition, I think my critique of GiveWell counts as deep, but I have been rewarded for this because people thought the arguments were good. To stress, Hauke’s and my claim was that most of the money EA has spent has been highly suboptimal.
You say “For instance, (intellectual) ability is implicitly assumed within much of EA to be a single variable[32], which is simply higher or lower for different people.” This isn’t just an assumption of EA, but a central finding of psychological science: things that are usually classed as intellectual abilities are strongly correlated—the g factor. E.g. maths ability is correlated with social science ability, English literature ability, etc.
I just don’t think it is true that we align well with the interests of tech billionaires. We’ve managed to persuade two billionaires of EA, and one believed in EA before he became a billionaire. The vast majority of billionaires evidently don’t buy it and go off and do their own thing, mainly donating to things that sound good in their country, to climate change, or not donating at all. Longtermist EAs would like lots more money to be spent on AI alignment, on slowing down AI progress, on slowing down progress in virology or increasing spending on counter-measures, and on preventing major wars. I don’t see how any of these things promise to benefit tech founders as a particular constituency in any meaningful way. That being said, I agree that there is a problem with rich people becoming spokespeople for the community or overly determining what gets done, and we need far better systems to protect against that in future. E.g. FTX suddenly deciding to do all this political stuff was a big break from previous wisdom and wasn’t questioned enough.
On a personal note, I get that I am a non-expert in climate, and so am wide open to criticism as an interloper (though I have published a paper on climate change). But then it is also true that getting climate people to think in EA terms is very difficult. Also, the view I recently outlined is basically in line with all climate economics. In that sense, the view I hold, and which I think is widely held in longtermist EA, is in line with one expert consensus. Indeed, it is striking that this is the one group that actually tries to quantify the aggregate costs of climate change. I also don’t think there are any areas where I disagree with the line taken by the IPCC, which is supposed to express the expert consensus on climate. The view that 4ºC is going to kill everyone is one held by some activists and a small number of scientists. Either way, we need to explain why we are ignoring all the climate economists and listening to Rockstrom/Lenton instead. On planetary boundaries, as far as I know, I am the only EA to have criticised the framework, and I don’t dismiss it in passing, but at considerable length. The reviewer I had for that section is a prof and strongly agreed with me.
Differential tech progress has been subject to peer review. The Bostrom articles on it are peer reviewed.
The implications of democratising EA are mindboggling. Suppose that Open Phil’s spending decisions are made democratically by EAs. This would put EAs in charge of ~$10bn. We’d then need to decide who counts as an EA. Because so much money would be on the table, lots of people who we wouldn’t class as EAs would want a say, and it would be undemocratic to exclude them (I assume). So, the ‘EA franchise’ would expand to anyone who wants a say (?) I don’t know where the money would end up after all this, but it’s fair to say that money spent on reducing engineered pandemics, AI and farm animal welfare would fall from the current pitiful sum to close to zero.
You say that worker self-management has been proven to be better for mission-oriented work than top-down rule. This is clearly false. There is a tiny pocket of worker cooperatives (e.g. in the Basque region) that have been fairly successful. But almost all companies are run oligarchically, in a top-down fashion, by boards or leadership groups.
Overall, we need to learn hard lessons from the FTX debacle. But thus far, the collapse has mainly been used to argue for things that are completely unrelated to FTX, and mainly to advance an agenda that has been disfavoured in EA so far, and with good reason. For Cowen, this was neoliberal progress; here it is left wing environmentalism.
I agree with most of your points, but I strongly disagree with number 1, and am surprised to have heard over time that so many people thought this point was daft.
I don’t disagree that “existential risk” is being employed in two very different senses in the two instances, so we agree there, but the broader point, which I think is valid, is this:
There is a certain hubris in claiming you are going to “build a flourishing future” and “support ambitious projects to improve humanity’s long-term prospects” (as the FFF did on its website) only to not exist 6 months later and for reasons of fraud to boot.
Of course, the people who sank untold hours into existential risk research aren’t to blame, and it isn’t an argument against x-risk/longtermist work, but it does show that EA, as a community, missed something dire and critical, and importantly something that couldn’t be closer to home for the community. And in my opinion that does shed light on how successful one should expect the longer-term endeavours of the community to be.
Scott Alexander, from “If The Media Reported On Other Things Like It Does Effective Altruism”:
Leading UN climatologist Dr. John Scholtz is in serious condition after being wounded in the mass shooting at Smithfield Park. Scholtz claims that his models can predict the temperature of the Earth from now until 2200 - but he couldn’t even predict a mass shooting in his own neighborhood. Why should we trust climatologists to protect us from some future catastrophe, when they can’t even protect themselves in the present?
The difference in that example is that Scholtz is one person so the analogy doesn’t hold. EA is a movement comprised of many, many people with different strengths, roles, motives, etc and CERTAINLY there are some people in the movement whose job it was (or at a minimum there are some people who thought long and hard) to mitigate PR/longterm risks to the movement.
I picture the criticism more like EA being a pyramid set in the ground, but upside down. At the top of the upside-down pyramid, where things are wide, there are people working to ensure the longterm future goes well on the object level, and perhaps would include Scholtz in your example.
At the bottom of the pyramid things come to a point, and that represents people on the lookout for x-risks to the endeavour itself, which is so small that it turned out to be the reason why things toppled, at least with respect to FTX. And that was indeed a problem. It says nothing about the value of doing x-risk work.
I think that is a charitable interpretation of Cowen’s statement: “Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.”
I think charitably, he isn’t saying that any given x-risk researcher should have seen an x-risk to the FTX project coming. Do you?
I think I just don’t agree with your charitable reading. The very next paragraph makes it very clear that Cowen means this to suggest that we should think less well of actual existential risk research:
Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.
I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.
I think that’s plain wrong, and Cowen actually is doing the cheap rhetorical trick of “existential risk in one context equals existential risk in another context”. I like Cowen normally, but IMO Scott’s parody is dead on.
“EA didn’t spot the risk of FTX and so they need better PR/management/whatever” would be fine, but I don’t think he was saying that.
Yeah I suppose we just disagree then. I think such a big error and hit to the community should downgrade any rational person’s belief in the output of what EA has to offer and also downgrade the trust they are getting it right.
Another side point: many EAs like Cowen and think he is right most of the time. I find it suspicious that when Cowen says something negative about EA, it gets labeled stuff like “daft”.
Hi Devon, FWIW I agree with John Halstead and Michael PJ re John’s point 1.
If you’re open to considering this question further, you may be interested in knowing my reasoning (note that I arrived at this opinion independently of John and Michael), which I share below.
Last November I commented on Tyler Cowen’s post to explain why I disagreed with his point:
I don’t find Tyler’s point very persuasive: Despite the fact that the common sense interpretation of the phrase “existential risk” makes it applicable to the sudden downfall of FTX, in actuality I think forecasting existential risks (e.g. the probability of AI takeover this century) is a very different kind of forecasting question than forecasting whether FTX would suddenly collapse, so performance at one doesn’t necessarily tell us much about performance on the other.
Additionally, and more importantly, the failure to anticipate the collapse of FTX seems to not so much be an example of making a bad forecast, but an example of failure to even consider the hypothesis. If an EA researcher had made it their job to try to forecast the probability that FTX collapses and assigned a very low probability to it after much effort, that probably would have been a bad forecast. But that’s not what happened; in reality EAs just failed to even consider that forecasting question. EAs *have* very seriously considered forecasting questions on x-risk though.
So the better critique of EAs in the spirit of Tyler’s would not be to criticize EA’s existential risk forecasts, but rather to suggest that there may be an existential risk that destroys humanity’s potential that isn’t even on our radar (similar to how the sudden end of FTX wasn’t on our radar). Others have certainly talked about this possibility before though, so that wouldn’t be a new critique. E.g. Toby Ord in The Precipice put “Unforeseen anthropogenic risks” in the next century at ~1 in 30. (Source: https://forum.effectivealtruism.org/posts/Z5KZ2cui8WDjyF6gJ/some-thoughts-on-toby-ord-s-existential-risk-estimates). Does Tyler think ~1 in 30 this century is too low? Or that people haven’t spent enough effort thinking about these unknown existential risks?
You made a further point, Devon, that I want to respond to as well:
There is a certain hubris in claiming you are going to “build a flourishing future” and “support ambitious projects to improve humanity’s long-term prospects” (as the FFF did on its website) only to not exist 6 months later and for reasons of fraud to boot.
I agree with you here. However, I think the hubris was SBF’s hubris, not EAs’ or longtermists-in-general’s hubris.
I’d even go further to say that it wasn’t the Future Fund team’s hubris.
As John commented below, “EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees.”
But that’s a critique of the Future Fund’s (and others’) ability to think of all the right top priorities for their small team in their first 6 months (or however long it was), not a sign that the Future Fund had hubris.
Note, however, that I don’t even consider the Future Fund team’s failure to think of this to be a very big critique of them. Why? Because anyone (in the EA community or otherwise) could have entered in The Future Fund’s Project Ideas Competition and suggested the project of investigating the integrity of SBF and his businesses, and the risk that they may suddenly collapse, to ensure the stability of the funding source for the benefit of future Future Fund projects, and to protect EA’s and longtermists’ reputation from risks arising from associating with SBF should SBF become involved in a scandal. (Even Tyler Cowen could have done so and won some easy money.) But no one did (as far as I’m aware). So given that, I conclude that it was a hard risk to spot so early on, and consequently I don’t fault the Future Fund team all that much for failing to spot this in their first 6 months.
There is a lesson to be learned from peoples’ failure to spot the risk, but that lesson is not that longtermists lack the ability to forecast existential risks well, or even that they lack the ability to build a flourishing future.
I initially disagreed with the Scott analogy, but thinking it through made me change my mind. Simply make the following modification:
“Leading UN climatologists are in serious condition after all being wounded in the hurricane Smithfield that further killed as many people as were harmed by the FTX scandal. These climatologists claim that their models can predict the temperature of the Earth from now until 2200 - but they couldn’t even predict a hurricane in their own neighborhood. Why should we trust climatologists to protect us from some future catastrophe, when they can’t even protect themselves or those nearby in the present?”
Now we are talking about a group rather than one person, and what they missed is much more directly within their domain expertise. That is, like the FTX risk falling squarely within the Future Fund team’s domain expertise on EA money, it feels like something they shouldn’t have been able to miss.
Would you say any rational person should downgrade their opinion of the climatology community and any output they have to offer and downgrade the trust they are getting their 2200 climate change models right?
I shared the modification with an EA who—like me—at first agreed with Cowen. Their response was something like “OK, so the climatologists not seeing the existential neartermist threat to themselves appears to still be a serious failure (people they know died!) on their part that needs to be addressed—but I agree it would be a mistake on my part to downgrade my confidence in their 2100 climate change model because of it.”
However, we conceded that there is a catch: if the climatology community persistently finds its top UN climatologists wounded in hurricanes to the point that they can’t work on their models, then rationally we ought to update towards their productive output being lower than expected, because they seem to have a neartermist blind spot regarding their own wellbeing and that of those nearby. This concession comes with an asterisk, though: if, for the sake of argument, we assume climatology research benefits greatly from climatologists getting close to hurricanes, then we should expect climatologists, as a group, to suffer more hurricane wounds, and so when they do get wounded we should still update, but not as strongly.
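To spell out that updating logic in odds form (a standard identity; here B = “has the neartermist blind spot” and W = “wounded in a hurricane”):

```latex
\[
  \underbrace{\frac{P(B \mid W)}{P(\neg B \mid W)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{P(B)}{P(\neg B)}}_{\text{prior odds}}
  \times
  \underbrace{\frac{P(W \mid B)}{P(W \mid \neg B)}}_{\text{likelihood ratio}}
\]
```

If getting close to hurricanes is useful even for climatologists without the blind spot, then P(W | ¬B) rises, the likelihood ratio shrinks towards 1, and the same observation forces a weaker update—which is exactly the asterisk above.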
Ultimately I updated from agree with Cowen to disagree with Cowen after thinking this through. I’d be curious if and where you disagree with this.
This feels wrong to me? Gell-Mann amnesia is more about general competency, whereas I thought Cowen was referring specifically to the category of “existential risk” (which I think is a semantics game, but others disagree)?
Imagine a forecaster that you haven’t previously heard of told you that there’s a high probability of a new novel pandemic (“pigeon flu”) next month, and their technical arguments are too complicated for you to follow.[1]
Suppose you want to figure out how much you want to defer to them, and you dug through to find out the following facts:
a) The forecaster previously made consistently and egregiously bad forecasts about monkeypox, covid-19, Ebola, SARS, and 2009 H1N1.
b) The forecaster made several elementary mistakes in a theoretical paper on Bayesian statistics.
c) The forecaster has a really bad record at videogames, like bronze tier at League of Legends.
I claim that the general competency argument technically goes through for a), b), and c). However, for a practical answer on deference, a) is much more damning than b) or especially c), as you might expect domain-specific ability on predicting pandemics to be much stronger evidence for whether the prediction of pigeon flu is reasonable than general competence as revealed by mathematical ability/conscientiousness or videogame ability.
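To make the deference point concrete, here is a toy sketch (every likelihood ratio below is invented purely for illustration, not an estimate) of why a) should move you much more than b), and b) more than c):

```python
# Toy Bayesian deference sketch: each piece of evidence carries a different
# likelihood ratio P(evidence | forecaster reliable) / P(evidence | unreliable).
# All numbers are made up for illustration only.

prior_reliable = 0.5  # prior probability that the pigeon-flu forecast is reliable

likelihood_ratios = {
    "a) bad past pandemic forecasts": 0.05,         # strong domain-specific evidence against
    "b) elementary errors in a stats paper": 0.50,  # weaker general-competence evidence
    "c) bronze tier at League of Legends": 0.90,    # nearly uninformative
}

for evidence, lr in likelihood_ratios.items():
    prior_odds = prior_reliable / (1 - prior_reliable)
    posterior_odds = prior_odds * lr          # Bayes' rule in odds form
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"{evidence}: P(reliable) moves from {prior_reliable:.2f} to {posterior:.2f}")
```

On these made-up numbers, a) drags P(reliable) from 0.50 to roughly 0.05, while c) barely moves it—the structure of the argument, not the particular figures, is the point.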
With a quote like
Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be.
I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.
The natural interpretation to me is that Cowen (and by quoting him, by extension the authors of the post) is trying to say that FF not predicting the FTX fraud and thus “existential risk to FF” is akin to a). That is, a dispositive domain-specific bad forecast that should be indicative of their abilities to predict existential risk more generally. This is akin to how much you should trust someone predicting pigeon flu when they’ve been wrong on past pandemics and pandemic scares.
To me, however, this failure, while significant as evidence of general competency, is more similar to b). It’s embarrassing and evidence of poor competence to make elementary errors in math. Similarly, it’s embarrassing and evidence of poor competence to not successfully consider all the risks to your organization. But using the phrase “existential risk” is just a semantics game tying them together (in the same way that “why would I trust the Bayesian updates in your pigeon flu forecasting when you’ve made elementary math errors in a Bayesian statistics paper” is a bit of a semantics game).
EAs do not to my knowledge claim to be experts on all existential risks, broadly and colloquially defined. Some subset of EAs do claim to be experts on global-scale existential risks like dangerous AI or engineered pandemics, which is a very different proposition.
[1] Or, alternatively, you think their arguments are inside-view correct but you don’t have a good sense of the selection biases involved.
I agree that the focus on competency on existential risk research specifically is misplaced. But I still think the general competency argument goes through. And as I say elsewhere in the thread—tabooing “existential risk” and instead looking at Longtermism, it looks (and is) pretty bad that a flagship org branded as “longtermist” didn’t last a year!
Funnily enough, the “pigeon flu” example may cease to become a hypothetical. Pretty soon, we may need to look at the track record of various agencies and individuals to assess their predictions on H5N1.
Thank you! I remember hearing about Bayesian updates, but rationalizations can wipe those away quickly. From the perspective of Popper, EAs should try “taking the hypothesis that EA...” and then try to prove themselves wrong, instead of using a handful of data-points to reach their preferred, statistically irrelevant conclusion, all the while feeling confident.
I don’t think the parody works in its current form. The climate scientist claims expertise on climate-science x-risk through being a climate-science expert, not through being an expert on x-risk more generally. So him being wrong on other x-risks doesn’t update my assessment of his views on climate x-risk that much. In contrast, if the climate scientist’s organization built its headquarters in a flood plain and didn’t buy insurance, the resulting flood which destroyed the HQ would reduce my confidence in their ability to assess climate x-risk, because they have shown themselves incompetent at least once at assessing climate risks close to them.
In contrast, EA (and the FF in particular) asserts (or asserted) expertise in x-risk more generally. For someone claiming this kind of expertise, the events that would cause me to downgrade are different than for a subject-matter expert. Missing an x-risk under one’s nose would count. While I don’t think “existential risk in one context equals existential risk in another context,” I don’t think the past performance has no bearing on estimates of future performance either.
I think assessing the extent to which the “miss” on FTX should cause a reasonable observer to downgrade EA’s x-risk credentials has been made difficult by the silence-on-advice-of-legal-counsel approach. To the extent that the possibility of FTX drying up wasn’t even on the radar of top leadership people, that would be a very serious downgrade for me. (Actually, it would be a significant downgrade in general confidence for any similarly-sized movement that lacked awareness that promised billions from a three-year-old crypto company had a good chance of not materializing.) A failure to specifically recognize the risk of very shady business practices (even if not Madoff 2.0) would be a significant demerit in light of the well-known history of such things in the crypto space. To the extent that there was clear awareness and the probabilities were just wrong in hindsight, that is only a minor demerit for me.
To perhaps make it clearer: I think EA is trying to be expert in “existential risks to humanity”, and that really does have almost no overlap with “existential risks to individual firms or organizations”.
Or to sharpen the parody: if it was a climate-risk org that had got in trouble because it was funded by FTX, would that downgrade your expectation of their ability to assess climate risks?
But on mainstream EA assumptions about x-risk, the failure of the Future Fund materially increased existential risk to humanity. You’d need to find a similar event that materially changed the risk of catastrophic climate change for the analogy to potentially hold—the death of a single researcher or the loss of a non-critical funding source for climate-mitigation efforts doesn’t work for me.
More generally, I think it’s probably reasonable to downgrade for missing FTX on “general competence” and “ability to predict and manage risk” as well. I think both of those attributes are correlated with “ability to predict and manage existential risk,” the latter more so than the former. Given that existential-risk expertise is a difficult attribute to measure, it’s reasonable to downgrade when downgrading one’s assessment of more measurable attributes. Although that effect would also apply to the climate-mitigation movement if it suffered an FTX-level setback event involving insiders, the justification for listening to climate scientists isn’t nearly as heavily loaded on “ability to predict and manage existential risk.” It’s primarily loaded on domain-specific expertise in climate science, and missing FTX wouldn’t make me think materially less of the relevant people as scientists.
To be clear, I’m not endorsing the narrative that EA is near-useless on x-risk because it missed FTX. My own assumption is that people recognized a risk that FTX funding wouldn’t come through, and that the leaders recognized a risk that SBF was doing shady stuff (cf. the leaked leader chat) although perhaps not a Madoff 2.0. I think those risks were likely underestimated, which leads me to a downgrade but not a massive one.
Scott’s analogy is correct, in that the problem with the criticism is that the thing someone failed to predict was on a different topic. It’s not reasonable to conclude that a climate scientist is bad at predicting the climate because they are bad at predicting mass shootings. If it were a thousand climate scientists predicting the climate a hundred years from now, and they all died in an earthquake yesterday, it’s not reasonable to conclude that their climate models were wrong because they failed to predict something outside the scope of their models.
There is a certain hubris in claiming you are going to “build a flourishing future” and “support ambitious projects to improve humanity’s long-term prospects” (as the FFF did on its website) only to not exist 6 months later and for reasons of fraud to boot.
This. We can taboo the words “existential risk” and focus instead on Longtermism. It’s damning that the largest philanthropy focused on Longtermism—the very long term future of humanity—didn’t even last a year. A necessary part of any organisation focused on the long term is a security mindset. It seems that this was lacking in the Future Fund. In particular, nothing was done to secure funding.
You can’t build a temple that lasts 1000 years without first ensuring that it’s on solid ground and has secure foundations. (Or even a house that lasts 10 years for that matter.)
My understanding of the thinking behind most longtermist causes and interventions is that they are mostly about slightly decreasing the probability of a catastrophic event; or to put it differently, the idea is that there is a high probability that the intervention does nothing and a small probability that it does something incredibly important.
From that perspective I’m not sure that institutional longevity is really a priority and certainly don’t think that we can infer that longtermists aren’t indeed focused on the long term.
Longtermism is wider than catastrophic risk reduction—e.g. it also encompasses “trajectory changes”. It’s about building a flourishing future over the very long term. (Personally I think x-risk from AGI is a short-term issue and should be prioritised, and Longtermism hasn’t done great as a brand so far.)
Thank you for your response, and more generally thank you for having been consistently willing to engage with criticism on the forum.
We’re going to respond to your points in the same format that you made them in for ease of comparison.
Should EA be distinctive for its own sake or should it seek to be as good as possible? If EA became more structurally similar to e.g. some environmentalist movements in some ways, e.g. democratic decision-making, would that actually be a bad thing in itself? What about standard-practice transparency measures? To what extent would you prefer EA to be suboptimal in exchange for retaining aspects that would otherwise make it distinctive?
In any case, we’re honestly a little unsure how you reached the conclusion that our reforms would lead EA to be “basically the same as standard forms of left-wing environmentalism”, and would be interested in you spelling this out a bit. We assume there are aspects of EA you value beyond what we have criticised, such as the obsessive focus on impact, the commitment to cause-prioritisation, and the willingness to quantify (which is often a good thing, as we say in the post), all of which are frequently lacking in left-wing environmentalism.
But why, as you say, was so little attention paid to the risk FTX posed? One of the points we make in the post is that the artificial separation of individual “risks” like this is frequently counterproductive. A simple back-casting or systems-mapping exercise (foresight/systems-theoretical techniques) would easily have revealed EA’s significant exposure and vulnerability (disaster risk concepts) to a potential FTX crash. The overall level of x-risk is presumably tied to how much research it gets, and the FTX crash clearly reduced the amount of research that will get done on x-risk any time soon.
These things are related, and must be treated as such.
Complex patterns of causation like this are just the kind of thing we are advocating for exploring, and something you have confidently dismissed in the recent past, e.g. in the comments on your recent climate post.
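For what it’s worth, here is a toy version (all figures entirely invented) of the sort of exposure check a back-casting or systems-mapping exercise might include—how concentrated is a movement’s funding, and what fraction vanishes if one source fails?

```python
# Toy funding-exposure check: concentration and single-point-of-failure loss.
# Funder names and amounts are hypothetical, for illustration only.

funding_sources = {  # hypothetical funder -> committed $m
    "Funder A": 600,
    "Funder B (new crypto fortune)": 400,
    "Small donors": 100,
}

total = sum(funding_sources.values())

# Herfindahl-Hirschman index: sum of squared funding shares; 1.0 = one source.
hhi = sum((v / total) ** 2 for v in funding_sources.values())
print(f"Funding concentration (HHI): {hhi:.2f}")

for name, v in funding_sources.items():
    print(f"If {name} vanishes, {v / total:.0%} of funding goes with it")
```

Even this crude check flags that losing one large funder removes a large share of everything downstream of the money, including x-risk research itself.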
We agree that the literature does not all point in one direction; we cited the two sources we cited because they act as recent summaries of the state of the literature as a whole, which includes findings in favour of the positive impacts of e.g. gender and age diversity.
We concede that “essentially all dimensions” was an overstatement: sloppy writing on our part, of which we are sure there is more in the manifesto, for which we apologise. Thank you for highlighting this.
On another note, equating “criticising diversity” in any form with “career suicide” seems like something of an overstatement.
We agree that there is a balance to be struck, and state this in the post. The issue is that EA uses seemingly neutral terms to hide orthodoxy, is far too far towards one end of the value-alignment spectrum, and actively excludes many valuable people and projects because they do not conform to said orthodoxy.
This is particularly visible in existential risk, where EA almost exclusively funds TUA-aligned projects despite the TUA’s surprisingly poor academic foundations (inappropriate usage of forecasting techniques, implicit commitment to outdated or poorly-supported theoretical frameworks, phil-of-sci considerations about methodological pluralism, etc.) as well as the generally perplexed and unenthusiastic reception it gets in non-EA Existential Risk Studies.
Unfortunately, you are not in the best position to judge whether EA is hostile to criticism. You are a highly orthodoxy-friendly researcher (this is not a criticism of you or your work, by the way!) at a core EA organisation with significant name-recognition and personal influence, and your critiques are naturally going to be more acceptable.
We concede that we may have neglected the role of the seniority of the author in the definition of “deep” critique: it surely plays a significant role, if only due to the hierarchy/deference factors we describe. On examples of chilled works, the very point we are making is the presence of the chilling effect: critiques are not published *because* of the chilling effect, so of course there are few examples to point to.
If you want one example in addition to Democratising Risk, consider our post? The comments also hold several examples of people who did not speak up on particular issues because they feared losing access to EA funding and spaces.
We are not arguing that general intelligence is completely nonexistent, but that the conception commonplace within EA is highly oversimplified: to say that factors in intelligence are correlated does not mean that everything can be boiled down to a single number. There are robust critiques of the g concept that are growing over time (e.g. here) as well as factors that are typically neglected (see the Emotional Intelligence paper we cited). Hence, calling monodimensional intelligence a “central finding of psychological science”, implying it to be some kind of consensus position, is somewhat courageous.
In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise.
Our post also mentions other issues with intelligence-based deference: how being smart doesn’t mean that someone should be deferred to on all topics, etc.
We are not arguing that every aspect of EA thought is determined by the preferences of EA donors, so the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal.
We concede that we may have neglected cultural factors: in addition to the “hard” money/power factors, there is also the “softer” fact that much of EA culture comes from upper-middle-class Bay Area tech culture, which indirectly causes EA to support things that are popular within that community, which naturally align with the interests of tech companies.*
We are glad that you agree on the spokesperson point: we were very concerned to see e.g. 80kH giving uncritical positive coverage to the crypto industry given the many harms it was already known to be doing prior to the FTX crash, and it is encouraging to hear signals that this sort of thing may be less common going forward.
We agree that getting climate people to think in EA terms can be difficult sometimes, but that is not necessarily a flaw on their part: they may just have different axioms to us. In other cases, we agree that there are serious problems (which we have also struggled with at times) but it is worth reminding ourselves that, as we note in the post, we too can be rather resistant to the inputs of domain-experts. Some of us, in particular, considered leaving EA at one point because it was so (at times, frustratingly) difficult to get other EAs to listen to us when we talked about our own areas of expertise. We’re not perfect either is all we’re saying.
Whilst we agree with you that we shouldn’t only take Rockstrom etc. as “the experts”, and do applaud your analysis that existential catastrophe from climate change is unlikely, we don’t believe your analysis is particularly well-suited to the extremes we would expect for GCR/x-risk scenarios. It is precisely when such models fall down, when civilisational resilience is less than anticipated, when cascades like those in Richards et al. 2021 occur, etc., that the catastrophes we are worried about are most likely to happen. X-risk studies relatively low-probability, unprecedented scenarios that are captured badly by economic models etc. (as with TAI being captured badly by the markets), and we feel your analysis demands levels of likelihood and confidence from climate x-risk that are (rightfully, we think) not demanded of e.g. AI or biorisk.
We should expect IPCC consensus not to capture x-risk concerns, because (hopefully) the probabilities are low enough for it not to be something they majorly consider, and, as Climate Endgame points out, there has thus far not been lots of x-risk research on climate change.
Otherwise, there have been notable criticisms of much of the climate economics field, especially its more optimistic end (e.g. this paper), but we concur that it is not something that needs to be debated here.
We did not say that differential technological development had not been subjected to peer review; we said that it has not been subjected to “significant amounts of rigorous peer review and academic discussion”, which is true; apologies if it implied something else. This may not be true forever: we are very excited about the discussion of the current Sandbrink et al. 2022 pre-print, for instance. All we were noting here is that important concepts in EA are often in their academic infancy (as you might expect from a movement with new-ish concepts) and thus often haven’t been subjected to the level of academic scrutiny that is often implied internally.
You assume incorrectly, and apologies if this is also an issue with our communication. We never advocated for opening up the vote to anyone who asked, so fears in this vein are fortunately unsupported. We agree that defining “who gets a vote” is a major crux here, but we suggest that it is a question that we should try to answer rather than using it as justification for dismissing the entire concept of democratisation. In fact, it seems like something that might be suitable for consensus-building tools, e.g. pol.is.
Committing to and fulfilling the Giving Pledge for a certain length of time, working at an EA org, doing community-building work, donating a certain amount/fraction of your income, active participation at an EAG, as well as many others that EAs could think of if we put some serious thought into the problem as a community, are all factors that could be combined to define some sort of boundary.
Given a somewhat costly signal of alignment it becomes unlikely that someone would go “deep cover” in EA in order to have a very small chance of being randomly selected to become one among multiple people in a sortition assembly deliberating on broad strategic questions about the allocation of a certain proportion of one EA-related fund or another.
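As a minimal illustration of how such factors could be combined (the criteria and the threshold below are entirely invented for illustration, not a proposal):

```python
# Toy sketch: combine several costly signals into an eligibility boundary.
# The criteria list and the "at least 2 of N" threshold are hypothetical.

CRITERIA = [
    "fulfilled a giving pledge for several years",
    "worked at an EA org",
    "did sustained community-building work",
    "donated a set fraction of income",
    "actively participated at an EAG",
]

def eligible(signals, required=2):
    """Eligible for the franchise if at least `required` costly signals are met."""
    return sum(criterion in signals for criterion in CRITERIA) >= required

print(eligible({"worked at an EA org", "actively participated at an EAG"}))  # True
print(eligible({"worked at an EA org"}))                                     # False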
We are puzzled as to how you took “collaborative, mission-oriented work” to refer exclusively to for-profit corporations. Naturally, e.g. Walmart could never function as a cooperative, because Walmart’s business model relies on its ability to exploit and underpay its workers, which would not be possible if those workers ran the organisation. There are indeed corporations (most famously Mondragon) that function on co-operative lines, as well as the Free Open-Source Software movement, Wikipedia, and many other examples.
Of most obvious relevance, however, are social movements like EA. If one wants a movement to reliably and collaboratively push for certain types of socially beneficial changes in certain ways and avoid becoming a self-perpetuating bureaucracy, it should be run collaboratively by those pushing for those changes in those certain ways and avoid cultivating a managerial elite – cf. the Iron Law of Institutions we mentioned, and more substantively the history of social movements; essentially every Leninist political party springs to mind.
As we say in the post, this was overwhelmingly written before the FTX crash, and the problems we describe existed long before it. The FTX case merely provides an excellent example of some of the things we were concerned about, and for many people shattered the perhaps idealistic view of EA that stopped so many of the problems we describe from being highlighted earlier.
Finally, we are not sure why you are so keen to repeatedly apply the term “left wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it.
* We actually touch on it a little: the mention of the Californian Ideology, which we recommend everyone in EA reads.
I agree that we don’t want EA to be distinctive just for the sake of it. My view is that many of the elements of EA that make it distinctive have good reasons behind them. I agree that some changes in governance of EA orgs, moving more in the direction of standard organisational governance, would be good, though probably I think they would be quite different to what you propose and certainly wouldn’t be ‘democratic’ in any meaningful sense.
I don’t have much to add to my first point and to the discussion below my comment by Michael PJ. Boiled down, I think the point that Cowen makes stripped of the rhetoric is just that EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees with. It simply has no bearing on whether EAs are assessing existential risk correctly, and enormous equivocation on the word ‘existential risk’ doesn’t change that fact.
Since you don’t want diversity essentially along all dimensions, what sort of diversity would you like? You don’t want Trump supporters; do you want more Marxists? You apparently don’t want more right wingers even though most EAs already lean left. Am I right in thinking that you want diversity only insofar as it makes EA more left wing? What forms of right wing representation would you like to increase?
The problem you highlight here is not value alignment as such but value alignment on what you think are the wrong focus areas. Your argument implies that value alignment on non-TUA things would be good. Correspondingly, if what you call ‘TUA’ (which I think is a bit of a silly label—how is it techno-utopian to think we’re all going to be killed by technology?) is actually good, then value alignment on it seems good.
You argued in your post that people often have to publish pseudonymously for fear of censure or loss of funding and the examples you have given are (1) your own post, and (2) a forum post on conflicts of interest. It’s somewhat self-fulfilling to publish something pseudonymously and then use that as an argument that people have to publish things pseudonymously. I don’t think it was rational for you to publish the post pseudonymously—I don’t think you will face censure if you present rational arguments, and you will have to tell people what you actually think about the world eventually anyway. (btw I’m not a researcher at a core EA org any more.)
I don’t think the seniority argument works here. A couple of examples spring to mind. Leopold Aschenbrenner wrote a critique of EA views on economic growth, for which he was richly rewarded despite being a teenager (or whatever). The recent post about AI timelines and interest rates got a lot of support, even though it criticises a lot of EA research on timelines. I hadn’t heard of any of the authors of the interest rate piece before.
The main example you give is the reception to the Cremer and Kemp piece, but I haven’t seen any evidence that they did actually get the reception they claimed.
I’m not sure whether intelligence can be boiled down to a single number if this claim is interpreted in the most extreme way. But at least the single number of the g factor conveys a lot of information about how intelligent people are and explains about 40-50% of the variation in individual performance on any given cognitive task, a large correlation for psychological science! This widely cited recent review states “There is new research on the psychometric structure of intelligence. The g factor from different test batteries ranks people in the same way. There is still debate about the number of levels at which the variations in intelligence is best described. There is still little empirical support for an account of intelligence differences that does not include g.”
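(As a quick gloss on the arithmetic, my own and not from the review: variance explained is the squared correlation, so 40–50% of variance corresponds to a correlation of roughly 0.63–0.71.)

```latex
\[
  r = \sqrt{R^2}, \qquad \sqrt{0.40} \approx 0.63, \qquad \sqrt{0.50} \approx 0.71
\]
```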
“In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise.” I don’t think this gambit is open to you—your post is so wide ranging that I think it unlikely that you all have expertise in all the topics covered in the post, ten authors notwithstanding.
Of course, there are more things to life and to performance at work than intelligence.
As I mentioned in my first comment, it’s not true that the things that EAs are interested in are especially popular among tech types, nor are they aligned with the interests of tech types. The vast majority of tech philanthropists are not EA, and EA cause areas just don’t help tech people at least relative to everyone else in the world. In fact, I suspect a majority view is that most EAs would like progress in virology and AI to be slowed down if not stopped. This is actively bad for the interests of people invested in AI companies and biotech. “the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal.” One of the headings in your article is “We align suspiciously well with the interests of tech billionaires (and ourselves)”. I don’t see how anything you have said here is a good defence against my criticism of that claim.
There’s a few things to separate here. One worry is that EAs/me are neglecting the expert consensus on the aggregate costs of climate change: this is emphatically not true. The only models that actually try and quantify the costs of climate change all suggest that income per person will be higher in 2100 despite climate change. From memory, the most pessimistic study, which is a massive outlier (Burke et al), projects a median case of a ~750% increase in income per person by 2100, with a lower 5% probability of a ~400% increase, on a 5ºC scenario.
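As a rough sanity check on what such figures imply (my arithmetic, assuming roughly T ≈ 77 years from now to 2100): if income per person is multiplied by a factor M over T years, the implied average annual growth rate is

```latex
\[
  g = M^{1/T} - 1, \qquad
  \underbrace{8.5^{1/77} - 1 \approx 2.8\%}_{\text{median: a 750\% increase}}, \qquad
  \underbrace{5^{1/77} - 1 \approx 2.1\%}_{\text{lower 5\%: a 400\% increase}}
\]
```

i.e. even the pessimistic tail of that outlier study amounts to ordinary-looking annual growth, which is the sense in which climate economics does not project impoverishment.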
A lot of what you say in your response and in your article seems inconsistent—you make a point of saying that EAs ignore the experts but then dismiss the experts when that happens to be inconsistent with your preferred opinions. Examples:
Defending postcolonialism in global development
Your explanation of why Walmart makes money vs mainstream economics.
Your dismissal of all climate economics and the IPCC
‘Standpoint theory’ vs analytical philosophy
Your dismissal of Bayesianism, which doesn’t seem to be aware of any of the main arguments for Bayesianism.
Your dismissal of the g factor, which doesn’t seem to be aware of the literature in psychology.
The claim that we need to take on board Kuhnian philosophy of science (Kuhn believed that there has been zero improvement in scientific knowledge over the last 500 years)
Your defence of critical realism
Similarly, Cremer (life science and psychology) and Kemp (international relations) take Ord, MacAskill and Bostrom to task for straying out of their epistemic lane and having poor epistemics, but then go on in the same paper to offer casual ~1 page refutations of (amongst other things) total utilitarianism, longtermism and expected utility theory.
Your discussion of why climate change is a serious catastrophic risk kind of illustrates the point. “For instance, recent work on catastrophic climate risk highlights the key role of cascading effects like societal collapses and resource conflicts. With as many as half of climate tipping points in play at 2.7°C − 3.4°C of warming and several at as low as 1.5°C, large areas of the Earth are likely to face prolonged lethal heat conditions, with innumerable knock-on effects. These could include increased interstate conflict, a far greater number of omnicidal actors, food-system strain or failure triggering societal collapses, and long-term degradation of the biosphere carrying unforeseen long-term damage e.g. through keystone species loss.”
Bressler et al (2021) model the effects of ~3ºC on mortality and find that it increases the global mortality rate by 1%, on some very pessimistic assumptions about socioeconomic development and adaptation. It’s kind of true but a bit misleading to say that this ‘could’ lead to interstate conflict or omnicidal actors. Maybe so, but how big a driver is it? I would have thought that more omnicidal actors will be created by the increasing popularity of environmentalism. The only people who I have heard say things like “humanity is a virus” are environmentalists.
Can you point me to the studies involving formal models that suggest that there will be global food system collapse at 3-4ºC of warming? I know that people like Lenton and Rockstrom say this will happen but they don’t actually produce any quantitative evidence and it’s completely implausible on its face if you just think about what a 3ºC world would be like. Economic models include effects on agriculture and they find a ~5% counterfactual reduction in GDP by 2100 for warming of 5ºC. There’s nothing missing in not modelling the tails here.
ok
What is the rationale for democratising? Is it for the sake of the intrinsic value of democracy or for producing better spending decisions? I agree it would be more democratic to have all EAs make the decision than the current system, but it’s still not very democratic—as you have pointed out, it would be a load of socially awkward anglophone white male nerds deciding on a lot of money. Why not go the whole hog and have everyone in the world decide on the money, which you could perhaps roughly approximate by giving it to the UN or something?
We could experiment with setting up one of the EA funds to be run democratically by all EAs (however we choose to assign EA status) and see whether people want to donate to it. Then we would get some sort of signal about how it performs and whether people think this is a good idea. I know I wouldn’t give it money, and I doubt Moskovitz would either. I’m not sure what your proposal is for what we’re supposed to do after this happens.
I actually think corporations are involved in collaborative mission-driven work, and your Mondragon example seems to grant this, though perhaps you are understanding ‘mission’ differently to me. The vast majority of organisations trying to achieve a particular goal are corporations, which are not run democratically. Most charities are also not run democratically. There is a reason for this. You explicitly said “Worker self-management has been shown to be effective, durable, and naturally better suited to collaborative, mission-oriented work than traditional top-down rule”. The problems of worker self-management are well-documented, with one of the key downsides being that it creates a disincentive to expand, which would also be true if EA democratised: doing so would only dilute each person’s influence over funding decisions. Another obvious downside is division of labour and specialisation, i.e. you would empower people without the time, inclination or ability to lead or make key decisions.
“Finally, we are not sure why you are so keen to repeatedly apply the term “left wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it.” Evidently from the comments I’m not the only one who picked up on this vibe. How many of the authors identify as right wing? In the post, you endorse a range of ideas associated with the left including: an emphasis on identity diversity; climate change and biodiversity loss as the primary risk to humanity; postcolonial theory; Marxist philosophy and its offshoots; postmodernist philosophy and related ideas; funding decisions should be democratised; and finally the need for EA to have more left wing people, which I take it was the implication of your response to my comment.
If you had spent the post talking about free markets, economic growth and admonishing the woke, I think people would have taken away a different message, but you didn’t do that because I doubt you believe it. I think it is important to be clear and transparent about what your main aims are. As I have explained, I don’t think you actually endorse some of the meta-level epistemic positions that you defend in the article. Even though the median EA is left wing, you don’t want more right wing people. At bottom, I think what you are arguing for is for EA to take on a substantive left wing environmentalist position. One of the things that I like about EA is that it is focused on doing the most good without political bias. I worry that your proposals would destroy much of what makes EA good.
A simple back-casting or systems-mapping exercise (foresight/systems-theoretical techniques) would easily have revealed EA’s significant exposure and vulnerability (disaster risk concepts) to a potential FTX crash. The overall level of x-risk is presumably tied to how much research it gets, and the FTX crash clearly reduced the amount of research that will get done on x-risk any time soon.
This is not the first time I’ve heard this sentiment and I don’t really understand it. If SBF had planned more carefully, if he’d been less risk-neutral, things could have been better. But it sounds like you think other people in EA should have somehow reduced EA’s exposure to FTX. In hindsight, that would have been good, for normative deontological reasons, but I don’t see how it would have preserved the amount of x-risk research EA can do. If EA didn’t get FTX money, it would simply have had no FTX money ever, instead of having FTX money for a very short time.
‘it is career suicide to criticise diversity’ This seems seriously hyperbolic to me, though I agree that if you’re down on diversity, a non-negligible number of people will disapprove and assume you are right-wing/racist, and that could have career consequences. What’s your best guess as to the proportion of academics who have had their careers seriously damaged for criticizing diversity in the fairly mild way you suggest here (i.e. that as a very generic thing, it does not improve accuracy of group decision-making), relative to those who have made such criticisms?
Strong agree with most of these points; the OP seems to not… engage on the object-level of some of its changes. Like, not proportionally to how big the change is or how good the authors think it is or anything?
EDIT: Oh! It was Rockström, but the actual quote is: “The richest one percent must reduce emissions by a factor [of] 30, while the poorest 50% can actually increase emissions by a factor [of] 3” from Johan Rockström at #COP26: 10 New Insights in Climate Science | UN Climate Change.
There he is talking about fair and just carbon emissions adjustments. The other insights he listed have economic implications as well, if you’re interested. The accompanying report is available here.
The quote is:
“Action on climate change is a matter of intra- and intergenerational justice, because climate change impacts already have affected and continue to affect vulnerable people and countries who have least contributed to the problem (Taconet et al., 2020). Contribution to climate change is vastly skewed in terms of wealth: the richest 10% of the world population was responsible for 52% of cumulative carbon emissions based on all of the goods and services they consumed through the 1990–2015 period, while the poorest 50% accounted only for 7% (Gore, 2020; Oswald et al., 2020).
A just distribution of the global carbon budget (a conceptual tool used to guide policy) (Matthews et al., 2020) would require the richest 1% to reduce their current emissions by at least a factor of 30, while per capita emissions of the poorest 50% could increase by around three times their current levels on average (UNEP, 2020). Rich countries’ current and promised action does not adequately respond to the climate crisis in general, and, in particular, does not take responsibility for the disparity of emissions and impacts (Zimm & Nakicenovic, 2020). For instance, commitments based on Nationally Determined Contributions under the Paris Agreement are insufficient for achieving net-zero reduction targets (United Nations Environment Programme, 2020).”
Whether 1.5 is really in reach anymore is debatable. We’re approaching an El Nino year, it could be a big one, we could see more heat in the atmosphere then, let’s see how close we get to 1.5 GAST then. It won’t be a true GAST value, I suppose, but there’s no way we’re stopping at 1.5 according to Peter Carter:
“This provides more conclusive evidence that limiting to 1.5C is impossible, and only immediate
global emissions decline can possibly prevent a warming of 2C by 2050”
and goes on from there.… He prefers CO2e and radiative forcing rather than the carbon budget approach as mitigation assessment measures. It’s worth a viewing as well.
There’s quite a lot to unpack in just these two sources, if you’re interested.
Then there’s Al Gore at the World Economic Forum, who drops some truth bombs: “Are we going to be able to discuss… or putting the oil industry in charge of the COP … we’re not going to disguise it anymore”
OLD:I believe it was Rockstrom, though I’m looking for the reference, who said that citizens of developed countries needed to cut their per capita carbon production by 30X, while in developing countries people could increase it by 3X. That’s not a quote, but I think the numbers are right.
That is a counterpoint to the analysis made by some climate economists.
When I find the reference I’ll share it, because I think he was quoting an analysis from somewhere else, and that could be useful to your analysis given the sources you favor, even if you discount Rockstrom.
I appreciate you taking the effort to write this. However, like other commentators I feel that if these proposals were implemented, EA would just become the same as many other left wing social movements, and, as far as I can tell, would basically become the same as standard forms of left wing environmentalism which are already a live option for people with this type of outlook, and get far more resources than EA ever has. I also think many of the proposals here have been rejected for good reason, and that some of the key arguments are weak.
You begin by citing the Cowen quote that “EAs couldn’t see the existential risk to FTX even though they focus on existential risk”. I think this is one of the more daft points made by a serious person on the FTX crash. Although the words ‘existential risk’ are the same here, they have completely different meanings, one being about the extinction of all humanity or things roughly as bad as that, and the other being about risks to a particular organisation. The problem with FTX is that there wasn’t enough attention to existential risks to FTX and the implications this would have for EA. In contrast, EAs have put umpteen person hours into assessing existential risks to humanity and the epistemic standards used to do that are completely different to those used to assess FTX.
You cite research purporting to show that diversity of some form is good for collective epistemics and general performance. I haven’t read the book that you cite, but I have looked into some of this literature, and as one might expect for a topic that is so politically charged, a lot of the literature is not good, and some of the literature actually points in the opposite direction, even though it is career suicide to criticise diversity, and there are likely personal costs even for me discussing counter-arguments here. For example, this paper suggests that group performance is mainly determined by the individual intelligence of the group members not by things like gender diversity. This paper lists various costs of team diversity that are bad for collective dynamics. You say that diversity “essentially along all dimensions” is good for epistemics. This is the sort of claim that sounds good, but also seems to be clearly false. I seldom see people who make this argument suggest that we need more Trump supporters, religious fundamentalists, homophobes or people without formal education in order to improve our performance as a community. These are all huge chunks of the national/global community but also massively underrepresented in EA. There are lots of communities that are much more diverse than EA but which also seem to have far worse epistemics than EA. Examples include Catholicism, Trumpism, environmentalism, support of Bolsonaro/Modi etc.
Relatedly, I think value alignment is very important. I have worked in organisations with a mix of EA and non EA people and it definitely made things much harder than if everyone were aligned, holding other things equal. On one level, it is not surprising that a movement trying to achieve something would agree not just at a very abstract level, but also about many concrete things about the world. If I think that stopping AI progress is good and you think it is bad, it is going to be much harder (though not impossible, per moral trade) for us to achieve things in the world. Same for speeding up progress in virus synthesis. The 80,000 Hours articles on goal directed groups are very good on this.
I don’t agree that EA is hostile to criticism. In fact it seems unusually open to criticism, and rational discussion of ideas rather than dismissing them on the basis of vibe/mood affiliation/political amenability. Aside from the controversial Cremer and Kemp case (who didn’t publish pseudonymously) what are the major critiques that have been presented pseudonymously or have caused serious personal consequences for the critics? By your definition, I think my critique of GiveWell counts as deep, but I have been rewarded for this because people thought the arguments were good. To stress, mine and Hauke’s claim was that most of the money EA has spent has been highly suboptimal.
You say “For instance, (intellectual) ability is implicitly assumed within much of EA to be a single variable[32], which is simply higher or lower for different people.” This isn’t just an assumption of EA, but a central finding of psychological science: things that are usually classed as intellectual abilities are strongly correlated—the g factor. E.g. maths ability is correlated with social science ability, with English literature ability, etc.
I just don’t think it is true that we align well with the interests of tech billionaires. We’ve managed to persuade two billionaires of EA, and one of them believed in EA before he became a billionaire. The vast majority of billionaires evidently don’t buy it and go off and do their own thing, mainly donating to things that sound good in their country, to climate change, or not donating at all. Longtermist EAs would like lots more money to be spent on AI alignment, on slowing down AI progress, on slowing down progress in virology or increasing spending on counter-measures, and on preventing major wars. I don’t see how any of these things promise to benefit tech founders as a particular constituency in any meaningful way. That being said, I agree that there is a problem with rich people becoming spokespeople for the community or overly determining what gets done, and we need far better systems to protect against that in future. E.g. FTX suddenly deciding to do all this political stuff was a big break from previous wisdom and wasn’t questioned enough.
On a personal note, I get that I am a non-expert in climate, and so am wide open to criticism as an interloper (though I have published a paper on climate change). But then it is also true that getting climate people to think in EA terms is very difficult. Also, the view I recently outlined is basically in line with all of climate economics. In that sense, the view I hold, and which I think is widely held in longtermist EA, is in line with one expert consensus. Indeed, it is striking that this is the one group that actually tries to quantify the aggregate costs of climate change. I also don’t think there are any areas where I disagree with the line taken by the IPCC, which is supposed to express the expert consensus on climate. The view that 4ºC is going to kill everyone is one held by some activists and a small number of scientists. Either way, we need to explain why we are ignoring all the climate economists and listening to Rockstrom/Lenton instead. On planetary boundaries, as far as I know, I am the only EA to have criticised planetary boundaries, and I do not dismiss the concept in passing but criticise it at considerable length. The reviewer I had for that section is a prof and strongly agreed with me.
Differential tech progress has been subject to peer review. The Bostrom articles on it are peer reviewed.
The implications of democratising EA are mind-boggling. Suppose that Open Phil’s spending decisions were made democratically by EAs. This would put EAs in charge of ~$10bn. We’d then need to decide who counts as an EA. Because so much money would be on the table, lots of people who we wouldn’t class as EAs would want a say, and it would be undemocratic to exclude them (I assume). So, the ‘EA franchise’ would expand to anyone who wants a say(?). I don’t know where the money would end up after all this, but it’s fair to say that money spent on reducing engineered pandemics, AI and farm animal welfare would fall from the current pitiful sum to close to zero.
You say that worker self-management has been proven to be better for mission-oriented work than top-down rule. This is clearly false. There is a tiny pocket of worker cooperatives (e.g. in the Basque region) which have been fairly successful. But almost all companies are run oligarchically in a top-down fashion by boards or leadership groups.
Overall, we need to learn hard lessons from the FTX debacle. But thus far, the collapse has mainly been used to argue for things that are completely unrelated to FTX, and mainly to advance an agenda that has been disfavoured in EA so far, and with good reason. For Cowen, this was neoliberal progress; here it is left wing environmentalism.
I agree with most of your points, but strongly disagree with number 1, and am surprised to have heard over time that so many people thought this point was daft.
I don’t disagree that “existential risk” is being employed in two very different senses in the two instances, so we agree there, but the broader point, which I think is valid, is this:
There is a certain hubris in claiming you are going to “build a flourishing future” and “support ambitious projects to improve humanity’s long-term prospects” (as the FFF did on its website), only to not exist six months later, and for reasons of fraud to boot.
Of course, the people who sank untold hours into existential risk research aren’t to blame, and it isn’t an argument against x-risk/longtermist work, but it does show that EA, as a community, missed something dire and critical, and importantly something that couldn’t be closer to home for the community. And in my opinion that does shed light on how successful one should expect the longer-term endeavours of the community to be.
Scott Alexander, from “If The Media Reported On Other Things Like It Does Effective Altruism”:
The difference in that example is that Scholtz is one person, so the analogy doesn’t hold. EA is a movement made up of many, many people with different strengths, roles, motives, etc., and CERTAINLY there are some people in the movement whose job it was (or at a minimum there are some people who thought long and hard) to mitigate PR/long-term risks to the movement.
I picture the criticism more like EA being a pyramid set in the ground, but upside down. At the top of the upside-down pyramid, where things are wide, there are people working to ensure the long-term future goes well on the object level, and this would perhaps include Scholtz in your example.
At the bottom of the pyramid things come to a point, and that represents the people on the lookout for x-risks to the endeavour itself: a group so small that it turned out to be the reason why things toppled, at least with respect to FTX. And that was indeed a problem. It says nothing about the value of doing x-risk work.
I think that is a charitable interpretation of Cowen’s statement: “Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.”
I think charitably, he isn’t saying that any given x-risk researcher should have seen an x-risk to the FTX project coming. Do you?
I think I just don’t agree with your charitable reading. The very next paragraph makes it very clear that Cowen means this to suggest that we should think less well of actual existential risk research:
I think that’s plain wrong, and Cowen actually is doing the cheap rhetorical trick of “existential risk in one context equals existential risk in another context”. I like Cowen normally, but IMO Scott’s parody is dead on.
“EA didn’t spot the risk of FTX and so they need better PR/management/whatever” would be fine, but I don’t think he was saying that.
Yeah, I suppose we just disagree then. I think such a big error and hit to the community should downgrade any rational person’s confidence in what EA has to offer, and their trust that EA is getting it right.
Another side point: many EAs like Cowen and think he is right most of the time. I think it is suspicious that when Cowen says something negative about EA, it gets labelled stuff like “daft”.
Hi Devon, FWIW I agree with John Halstead and Michael PJ re John’s point 1.
If you’re open to considering this question further, you may be interested in knowing my reasoning (note that I arrived at this opinion independently of John and Michael), which I share below.
Last November I commented on Tyler Cowen’s post to explain why I disagreed with his point:
You made a further point, Devon, that I want to respond to as well:
I agree with you here. However, I think the hubris was SBF’s hubris, not EAs’ or longtermists-in-general’s hubris.
I’d even go further to say that it wasn’t the Future Fund team’s hubris.
As John commented below, “EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees.”
But that’s a critique of the Future Fund’s (and others’) ability to think of all the right top priorities for their small team in their first 6 months (or however long it was), not a sign that the Future Fund had hubris.
Note, however, that I don’t even consider the Future Fund team’s failure to think of this to be a very big critique of them. Why? Because anyone (in the EA community or otherwise) could have entered in The Future Fund’s Project Ideas Competition and suggested the project of investigating the integrity of SBF and his businesses, and the risk that they may suddenly collapse, to ensure the stability of the funding source for the benefit of future Future Fund projects, and to protect EA’s and longtermists’ reputation from risks arising from associating with SBF should SBF become involved in a scandal. (Even Tyler Cowen could have done so and won some easy money.) But no one did (as far as I’m aware). So given that, I conclude that it was a hard risk to spot so early on, and consequently I don’t fault the Future Fund team all that much for failing to spot this in their first 6 months.
There is a lesson to be learned from peoples’ failure to spot the risk, but that lesson is not that longtermists lack the ability to forecast existential risks well, or even that they lack the ability to build a flourishing future.
I disagreed with the Scott analogy at first, but thinking it through made me change my mind. Simply make the following modification:
“Leading UN climatologists are in serious condition after being wounded in hurricane Smithfield, which also killed as many people as were harmed by the FTX scandal. These climatologists claim that their models can predict the temperature of the Earth from now until 2200 - but they couldn’t even predict a hurricane in their own neighborhood. Why should we trust climatologists to protect us from some future catastrophe, when they can’t even protect themselves or those nearby in the present?”
Now we are talking about a group rather than one person, and what they missed is much more directly within their domain expertise. That is, like the FTX Future Fund team’s domain expertise on EA money, it feels like something they shouldn’t have been able to miss.
Would you say any rational person should downgrade their opinion of the climatology community and any output it has to offer, and downgrade their trust that it is getting its 2200 climate change models right?
I shared the modification with an EA who, like me, at first agreed with Cowen. Their response was something like “OK, so the climatologists not seeing the existential neartermist threat to themselves appears to still be a serious failure (people they know died!) on their part that needs to be addressed—but I agree it would be a mistake on my part to downgrade my confidence in their 2100 climate change model because if it…”
However, we conceded that there is a catch: if the climatology community persistently finds its top UN climatologists wounded in hurricanes to the point that they can’t work on their models, then rationally we ought to update towards their productive output being lower than expected, because they seem to have this neartermist blindspot regarding their own wellbeing and those nearby. This concession comes with asterisks, though. If we assume, for the sake of argument, that climatology research benefits greatly from climatologists getting close to hurricanes, then we should expect climatologists, as a group, to suffer more hurricane wounds, and in that case we should update less strongly when they do.
Ultimately I updated from agree with Cowen to disagree with Cowen after thinking this through. I’d be curious if and where you disagree with this.
Tbh I took the Gell-Mann amnesia interpretation and just concluded that he’s probably being daft more often in areas I don’t know so much about.
This is what Cowen was doing with his original remark.
This feels wrong to me? Gell-Mann amnesia is more about general competency, whereas I thought Cowen was referring specifically to the category of “existential risk” (which I think is a semantics game, but others disagree)?
Cowen is saying that he thinks EA is less generally competent because of not seeing the x-risk to the Future Fund.
Again, if this were true, he would not specifically phrase it as existential risk (unless maybe he was actively trying to mislead).
Fair enough. The implication is there though.
Imagine a forecaster that you haven’t previously heard of told you that there’s a high probability of a new novel pandemic (“pigeon flu”) next month, and their technical arguments are too complicated for you to follow.[1]
Suppose you want to figure out how much to defer to them, and you dig through and find out the following facts:
a) The forecaster previously made consistently and egregiously bad forecasts about monkeypox, covid-19, Ebola, SARS, and 2009 H1N1.
b) The forecaster made several elementary mistakes in a theoretical paper on Bayesian statistics.
c) The forecaster has a really bad record at videogames, like bronze tier at League of Legends.
I claim that the general competency argument technically goes through for a), b), and c). However, for a practical answer on deference, a) is much more damning than b) or especially c), as you might expect domain-specific ability on predicting pandemics to be much stronger evidence for whether the prediction of pigeon flu is reasonable than general competence as revealed by mathematical ability/conscientiousness or videogame ability.
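To make that concrete, here is a minimal sketch of the underlying Bayesian logic. All likelihood ratios below are invented purely for illustration (nothing here comes from real forecasting data): each piece of evidence multiplies your prior odds that the forecaster is reliable, and a domain-specific failure like a) plausibly carries a far larger likelihood ratio than b) or c).

```python
# Minimal Bayesian-deference sketch. All likelihood ratios are assumed,
# purely illustrative numbers -- not estimates from any real dataset.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: float) -> float:
    return odds / (1 + odds)

prior = 1.0  # assumed 1:1 prior odds that the pigeon-flu forecast is reliable

# Assumed likelihood ratios P(evidence | reliable) / P(evidence | unreliable):
evidence = [
    ("a) bad pandemic forecasting record", 0.05),   # direct domain failure
    ("b) elementary math errors",          0.50),   # weaker general-competence signal
    ("c) bronze tier at League of Legends", 0.90),  # barely informative
]

for label, lr in evidence:
    p = odds_to_prob(posterior_odds(prior, lr))
    print(f"{label}: posterior P(reliable) = {p:.2f}")
# Each item updates downward, but a) dominates -- which is the point:
# all three "technically go through", yet only a) should move you much.
```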
With a quote like
The natural interpretation to me is that Cowen (and by quoting him, by extension the authors of the post) is trying to say that FF not predicting the FTX fraud and thus “existential risk to FF” is akin to a). That is, a dispositive domain-specific bad forecast that should be indicative of their abilities to predict existential risk more generally. This is akin to how much you should trust someone predicting pigeon flu when they’ve been wrong on past pandemics and pandemic scares.
To me, however, this failure, while significant as evidence of general competency, is more similar to b). It’s embarrassing and evidence of poor competence to make elementary errors in math. Similarly, it’s embarrassing and evidence of poor competence to not successfully consider all the risks to your organization. But using the phrase “existential risk” is just a semantics game tying them together (in the same way that “why would I trust the Bayesian updates in your pigeon flu forecasting when you’ve made elementary math errors in a Bayesian statistics paper” is a bit of a semantics game).
EAs do not, to my knowledge, claim to be experts on all existential risks, broadly and colloquially defined. Some subset of EAs do claim to be experts on global-scale existential risks like dangerous AI or engineered pandemics, which is a very different proposition.
[1] Or, alternatively, you think their arguments are inside-view correct but you don’t have a good sense of the selection biases involved.
I agree that the focus on competency on existential risk research specifically is misplaced. But I still think the general competency argument goes through. And as I say elsewhere in the thread—tabooing “existential risk” and instead looking at Longtermism, it looks (and is) pretty bad that a flagship org branded as “longtermist” didn’t last a year!
Funnily enough, the “pigeon flu” example may cease to become a hypothetical. Pretty soon, we may need to look at the track record of various agencies and individuals to assess their predictions on H5N1.
I agree that is the other way out of the puzzle. I wonder whom to even trust if everyone is susceptible to this problem...
Thank you! I remember hearing about Bayesian updates, but rationalizations can wipe those away quickly. From the perspective of Popper, EAs should try “taking the hypothesis that EA...” and then try proving themselves wrong, instead of using a handful of data-points to reach their preferred, statistically irrelevant conclusion, all the while feeling confident.
I don’t think the parody works in its current form. The climate scientist claims expertise on climate x-risk through being a climate-science expert, not through being an expert on x-risk more generally. So him being wrong on other x-risks doesn’t update my assessment of his views on climate x-risk that much. In contrast, if the climate scientist’s organization built its headquarters in a flood plain and didn’t buy insurance, the resulting flood which destroyed the HQ would reduce my confidence in their ability to assess climate x-risk, because they have shown themselves incompetent at least once at assessing climate risks close to them.
In contrast, EA (and the FF in particular) asserted expertise in x-risk more generally. For someone claiming that kind of expertise, the events that would cause me to downgrade are different than for a subject-matter expert. Missing an x-risk under one’s nose would count. While I don’t think “existential risk in one context equals existential risk in another context,” I don’t think past performance has no bearing on estimates of future performance either.
I think assessing the extent to which the “miss” on FTX should cause a reasonable observer to downgrade EA’s x-risk credentials has been made difficult by the silence-on-advice-of-legal-counsel approach. To the extent that the possibility of FTX drying up wasn’t even on the radar of top leadership people, that would be a very serious downgrade for me. (Actually, it would be a significant downgrade in general confidence for any similarly-sized movement that lacked awareness that promised billions from a three-year-old crypto company had a good chance of not materializing.) A failure to specifically recognize the risk of very shady business practices (even if not Madoff 2.0) would be a significant demerit in light of the well-known history of such things in the crypto space. To the extent that there was clear awareness and the probabilities were just wrong in hindsight, that is only a minor demerit for me.
To perhaps make it clearer: I think EA is trying to be expert in “existential risks to humanity”, and that really does have almost no overlap with “existential risks to individual firms or organizations”.
Or to sharpen the parody: if it was a climate-risk org that had got in trouble because it was funded by FTX, would that downgrade your expectation of their ability to assess climate risks?
But on mainstream EA assumptions about x-risk, the failure of the Future Fund materially increased existential risk to humanity. You’d need to find a similar event that materially changed the risk of catastrophic climate change for the analogy to potentially hold—the death of a single researcher or the loss of a non-critical funding source for climate-mitigation efforts doesn’t work for me.
More generally, I think it’s probably reasonable to downgrade for missing FTX on “general competence” and “ability to predict and manage risk” as well. I think both of those attributes are correlated with “ability to predict and manage existential risk,” the latter more so than the former. Given that existential-risk expertise is a difficult attribute to measure, it’s reasonable to downgrade when downgrading one’s assessment of more measurable attributes. Although that effect would also apply to the climate-mitigation movement if it suffered an FTX-level setback event involving insiders, the justification for listening to climate scientists isn’t nearly as heavily loaded on “ability to predict and manage existential risk.” It’s primarily loaded on domain-specific expertise in climate science, and missing FTX wouldn’t make me think materially less of the relevant people as scientists.
To be clear, I’m not endorsing the narrative that EA is near-useless on x-risk because it missed FTX. My own assumption is that people recognized a risk that FTX funding wouldn’t come through, and that the leaders recognized a risk that SBF was doing shady stuff (cf. the leaked leader chat) although perhaps not a Madoff 2.0. I think those risks were likely underestimated, which leads me to a downgrade but not a massive one.
Alternatively, one could have said something like
This, too, would not have been a good argument.
Scott’s analogy is correct, in that the problem with the criticism is that the thing someone failed to predict was on a different topic. It’s not reasonable to conclude that a climate scientist is bad at predicting the climate because they are bad at predicting mass shootings. If it were a thousand climate scientists predicting the climate a hundred years from now, and they all died in an earthquake yesterday, it’s not reasonable to conclude that their climate models were wrong because they failed to predict something outside the scope of their models.
This. We can taboo the words “existential risk” and focus instead on Longtermism. It’s damning that the largest philanthropy focused on Longtermism—the very long term future of humanity—didn’t even last a year. A necessary part of any organisation focused on the long term is a security mindset. It seems that this was lacking in the Future Fund. In particular, nothing was done to secure funding.
Perhaps, you know, they were focused more on the long term and not the short term?
You can’t build a temple that lasts 1000 years without first ensuring that it’s on solid ground and has secure foundations. (Or even a house that lasts 10 years for that matter.)
Are we trying to build a temple?
My understanding of the thinking behind most longtermist causes and interventions is that they are mostly about slightly decreasing the probability of a catastrophic event; or, to put it differently, the idea is that there is a high probability that the intervention does nothing and a small probability that it does something incredibly important.
From that perspective I’m not sure that institutional longevity is really a priority and certainly don’t think that we can infer that longtermists aren’t indeed focused on the long term.
Longtermism is wider than catastrophic risk reduction—e.g. it also encompasses “trajectory changes”. It’s about building a flourishing future over the very long term. (Personally I think x-risk from AGI is a short-term issue and should be prioritised, and Longtermism hasn’t done great as a brand so far.)
Hi John,
Thank you for your response, and more generally thank you for having been consistently willing to engage with criticism on the forum.
We’re going to respond to your points in the same format that you made them in for ease of comparison.
Should EA be distinctive for its own sake or should it seek to be as good as possible? If EA became more structurally similar to e.g. some environmentalist movements in some ways, e.g. democratic decision-making, would that actually be a bad thing in itself? What about standard-practice transparency measures? To what extent would you prefer EA to be suboptimal in exchange for retaining aspects that would otherwise make it distinctive?
In any case, we’re honestly a little unsure how you reached the conclusion that our reforms would lead EA to be “basically the same as standard forms of left-wing environmentalism”, and would be interested in you spelling this out a bit. We assume there are aspects of EA you value beyond what we have criticised, such as our obsessive focus on impact, our commitment to cause-prioritisation, and our willingness to quantify (which is often a good thing, as we say in the post), etc., all of which are frequently lacking in left-wing environmentalism.
But why, as you say, was so little attention paid to the risk FTX posed? One of the points we make in the post is that the artificial separation of individual “risks” like this is frequently counterproductive. A simple back-casting or systems-mapping exercise (foresight/systems-theoretical techniques) would easily have revealed EA’s significant exposure and vulnerability (disaster risk concepts) to a potential FTX crash. The overall level of x-risk is presumably tied to how much research it gets, and the FTX crash clearly reduced the amount of research that will get done on x-risk any time soon.
These things are related, and must be treated as such.
Complex patterns of causation like this are just the kind of thing we are advocating for exploring, and something you have confidently dismissed in the recent past, e.g. in the comments on your recent climate post.
We agree that the literature does not all point in one direction; we cited the two sources we cited because they act as recent summaries of the state of the literature as a whole, which includes findings in favour of the positive impacts of e.g. gender and age diversity.
We concede that “essentially all dimensions” was an overstatement: sloppy writing on our part, of which we are sure there is more in the manifesto, for which we apologise. Thank you for highlighting this.
On another note, equating “criticising diversity” in any form with “career suicide” seems like something of an overstatement.
We agree that there is a balance to be struck, and state this in the post. The issue is that EA uses seemingly neutral terms to hide orthodoxy, is far too far towards one end of the value-alignment spectrum, and actively excludes many valuable people and projects because they do not conform to said orthodoxy.
This is particularly visible in existential risk, where EA almost exclusively funds TUA-aligned projects despite the TUA’s surprisingly poor academic foundations (inappropriate usage of forecasting techniques, implicit commitment to outdated or poorly-supported theoretical frameworks, phil-of-sci considerations about methodological pluralism, etc.) as well as the generally perplexed and unenthusiastic reception it gets in non-EA Existential Risk Studies.
Unfortunately, you are not in the best position to judge whether EA is hostile to criticism. You are a highly orthodoxy-friendly researcher (this is not a criticism of you or your work, by the way!) at a core EA organisation with significant name-recognition and personal influence, and your critiques are naturally going to be more acceptable.
We concede that we may have neglected the role of the seniority of the author in the definition of “deep” critique: it surely plays a significant role, if only due to the hierarchy/deference factors we describe. On examples of chilled works, the very point we are making is the presence of the chilling effect: critiques are not published *because* of the chilling effect, so of course there are few examples to point to.
If you want one example in addition to Democratising Risk, consider our post? The comments also hold several examples of people who did not speak up on particular issues because they feared losing access to EA funding and spaces.
We are not arguing that general intelligence is completely nonexistent, but that the conception commonplace within EA is highly oversimplified: to say that factors in intelligence are correlated does not mean that everything can be boiled down to a single number. There are robust critiques of the g concept that are growing over time (e.g. here), as well as factors that are typically neglected (see the Emotional Intelligence paper we cited). Hence, calling monodimensional intelligence a “central finding of psychological science”, implying it to be some kind of consensus position, is somewhat courageous.
In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise.
Our post also mentions other issues with intelligence-based deference: how being smart doesn’t mean that someone should be deferred to on all topics, etc.
We are not arguing that every aspect of EA thought is determined by the preferences of EA donors, so the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal.
We concede that we may have neglected cultural factors: in addition to the “hard” money/power factors, there is also the “softer” fact that much of EA culture comes from upper-middle-class Bay Area tech culture, which indirectly causes EA to support things that are popular within that community, which naturally align with the interests of tech companies.*
We are glad that you agree on the spokesperson point: we were very concerned to see e.g. 80kH giving uncritical positive coverage to the crypto industry given the many harms it was already known to be doing prior to the FTX crash, and it is encouraging to hear signals that this sort of thing may be less common going forward.
We agree that getting climate people to think in EA terms can be difficult sometimes, but that is not necessarily a flaw on their part: they may just have different axioms to us. In other cases, we agree that there are serious problems (which we have also struggled with at times) but it is worth reminding ourselves that, as we note in the post, we too can be rather resistant to the inputs of domain-experts. Some of us, in particular, considered leaving EA at one point because it was so (at times, frustratingly) difficult to get other EAs to listen to us when we talked about our own areas of expertise. We’re not perfect either is all we’re saying.
Whilst we agree with you that we shouldn’t only take Rockstrom etc. as “the experts”, and do applaud your analysis that existential catastrophe from climate change is unlikely, we don’t believe your analysis is particularly well-suited to the extremes we would expect for GCR/x-risk scenarios. It is precisely when such models fall down, when civilisational resilience is less than anticipated, when cascades like those in Richards et al. 2021 occur, etc., that the catastrophes we are worried about are most likely to happen. X-risk studies relatively low-probability unprecedented scenarios that are captured badly by economic models etc. (as with TAI being captured badly by the markets), and we feel your analysis demands levels of likelihood and confidence from climate x-risk that are (rightfully, we think) not demanded of e.g. AI or biorisk.
We should expect IPCC consensus not to capture x-risk concerns, because (hopefully) the probabilities are low enough for it not to be something they majorly consider, and, as Climate Endgame points out, there has thus far not been lots of x-risk research on climate change.
Otherwise, there have been notable criticisms of much of the climate economics field, especially its more optimistic end (e.g. this paper), but we concur that it is not something that needs to be debated here.
We did not say that differential technological development had not been subjected to peer review; we said that it has not been subjected to “significant amounts of rigorous peer review and academic discussion”, which is true; apologies if it implied something else. This may not be true forever: we are very excited about the discussion of the current Sandbrink et al. 2022 pre-print, for instance. All we were noting here is that important concepts in EA are often in their academic infancy (as you might expect from a movement with new-ish concepts) and thus often haven’t been put to the level of academic scrutiny that is often assumed internally.
You assume incorrectly, and apologies if this is also an issue with our communication. We never advocated for opening up the vote to anyone who asked, so fears in this vein are fortunately unsupported. We agree that defining “who gets a vote” is a major crux here, but we suggest that it is a question that we should try to answer rather than using it as justification for dismissing the entire concept of democratisation. In fact, it seems like something that might be suitable for consensus-building tools, e.g. pol.is.
Committing to and fulfilling the Giving Pledge for a certain length of time, working at an EA org, doing community-building work, donating a certain amount/fraction of your income, active participation at an EAG, as well as many others that EAs could think of if we put some serious thought into the problem as a community, are all factors that could be combined to define some sort of boundary.
Given a somewhat costly signal of alignment it becomes unlikely that someone would go “deep cover” in EA in order to have a very small chance of being randomly selected to become one among multiple people in a sortition assembly deliberating on broad strategic questions about the allocation of a certain proportion of one EA-related fund or another.
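To put a rough number on how small that chance is, here is a toy calculation; every figure below is hypothetical, chosen only to show the order of magnitude:

```python
# Toy illustration of the "deep cover" worry: the chance that one
# infiltrator who has cleared the costly-signal bar is then seated in a
# sortition assembly. Both numbers below are hypothetical.

eligible_members = 5_000   # assumed size of the qualifying population
assembly_seats = 30        # assumed size of one deliberative assembly

p_seated = assembly_seats / eligible_members
print(f"P(a given infiltrator is seated) = {p_seated:.2%}")  # 0.60%
# Even ignoring the cost of faking years of engagement, the expected
# payoff of infiltrating for influence looks very small.
```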
We are puzzled as to how you took “collaborative, mission-oriented work” to refer exclusively to for-profit corporations. Naturally, e.g. Walmart could never function as a cooperative, because Walmart’s business model relies on its ability to exploit and underpay its workers, which would not be possible if those workers ran the organisation. There are indeed corporations (most famously Mondragon) that function on co-operative lines, as well as the Free Open-Source Software movement, Wikipedia, and many other examples.
Of most obvious relevance, however, are social movements like EA. If one wants a movement to reliably and collaboratively push for certain types of socially beneficial changes in certain ways and avoid becoming a self-perpetuating bureaucracy, it should be run collaboratively by those pushing for those changes in those certain ways and avoid cultivating a managerial elite – cf. the Iron Law of Institutions we mentioned, and more substantively the history of social movements; essentially every Leninist political party springs to mind.
As we say in the post, this was overwhelmingly written before the FTX crash, and the problems we describe existed long before it. The FTX case merely provides an excellent example of some of the things we were concerned about, and for many people shattered the perhaps idealistic view of EA that stopped so many of the problems we describe from being highlighted earlier.
Finally, we are not sure why you are so keen to repeatedly apply the term “left wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it.
* We actually touch on it a little: the mention of the Californian Ideology, which we recommend everyone in EA reads.
Thanks for the detailed response.
I agree that we don’t want EA to be distinctive just for the sake of it. My view is that many of the elements of EA that make it distinctive have good reasons behind them. I agree that some changes in governance of EA orgs, moving more in the direction of standard organisational governance, would be good, though probably I think they would be quite different to what you propose and certainly wouldn’t be ‘democratic’ in any meaningful sense.
I don’t have much to add to my first point and to the discussion below my comment by Michael PJ. Boiled down, I think the point that Cowen makes stripped of the rhetoric is just that EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees with. It simply has no bearing on whether EAs are assessing existential risk correctly, and enormous equivocation on the word ‘existential risk’ doesn’t change that fact.
Since you don’t want diversity along essentially all dimensions, what sort of diversity would you like? You don’t want Trump supporters; do you want more Marxists? You apparently don’t want more right-wingers, even though most EAs already lean left. Am I right in thinking that you want diversity only insofar as it makes EA more left wing? What forms of right-wing representation would you like to increase?
The problem you highlight here is not value alignment as such but value alignment on what you think are the wrong focus areas. Your argument implies that value alignment on non-TUA things would be good. Correspondingly, if what you call ‘TUA’ (which I think is a bit of a silly label—how is it techno-utopian to think we’re all going to be killed by technology?) is actually good, then value alignment on it seems good.
You argued in your post that people often have to publish pseudonymously for fear of censure or loss of funding and the examples you have given are (1) your own post, and (2) a forum post on conflicts of interest. It’s somewhat self-fulfilling to publish something pseudonymously and then use that as an argument that people have to publish things pseudonymously. I don’t think it was rational for you to publish the post pseudonymously—I don’t think you will face censure if you present rational arguments, and you will have to tell people what you actually think about the world eventually anyway. (btw I’m not a researcher at a core EA org any more.)
I don’t think the seniority argument works here. A couple of examples spring to mind. Leopold Aschenbrenner wrote a critique of EA views on economic growth, for which he was richly rewarded despite being a teenager (or whatever). The recent post about AI timelines and interest rates got a lot of support, even though it criticises a lot of EA research on timelines. I hadn’t heard of any of the authors of the interest rate piece before.
The main example you give is the reception to the Cremer and Kemp piece, but I haven’t seen any evidence that they did actually get the reception they claimed.
I’m not sure whether intelligence can be boiled down to a single number if this claim is interpreted in the most extreme way. But at least the single number of the g factor conveys a lot of information about how intelligent people are and explains about 40-50% of the variation in individual performance on any given cognitive task, a large correlation for psychological science! This widely cited recent review states “There is new research on the psychometric structure of intelligence. The g factor from different test batteries ranks people in the same way. There is still debate about the number of levels at which the variations in intelligence is best described. There is still little empirical support for an account of intelligence differences that does not include g.”
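To unpack that statistic: “explains 40–50% of the variation” is a statement about r², so the underlying correlation is roughly 0.63–0.71. A one-line check:

```python
# Variance explained (r^2) vs. correlation (r) for the quoted 40-50% figure.
for r2 in (0.40, 0.50):
    print(f"r^2 = {r2:.0%} -> r = {r2 ** 0.5:.2f}")  # prints 0.63 and 0.71
```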
“In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise.” I don’t think this gambit is open to you—your post is so wide ranging that I think it unlikely that you all have expertise in all the topics covered in the post, ten authors notwithstanding.
Of course, there are more things to life and to performance at work than intelligence.
As I mentioned in my first comment, it’s not true that the things that EAs are interested in are especially popular among tech types, nor are they aligned with the interests of tech types. The vast majority of tech philanthropists are not EA, and EA cause areas just don’t help tech people at least relative to everyone else in the world. In fact, I suspect a majority view is that most EAs would like progress in virology and AI to be slowed down if not stopped. This is actively bad for the interests of people invested in AI companies and biotech. “the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal.” One of the headings in your article is “We align suspiciously well with the interests of tech billionaires (and ourselves)”. I don’t see how anything you have said here is a good defence against my criticism of that claim.
There are a few things to separate here. One worry is that EAs/me are neglecting the expert consensus on the aggregate costs of climate change: this is emphatically not true. The only models that actually try to quantify the costs of climate change all suggest that income per person will be higher in 2100 despite climate change. From memory, the most pessimistic study, which is a massive outlier (Burke et al), projects a median case of a ~750% increase in income per person by 2100, with a lower 5% probability of a ~400% increase, on a 5ºC scenario.
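As a quick sense-check on what those numbers mean, here is a rough back-of-the-envelope calculation; the ~77-year horizon to 2100 is my assumption for the sketch, not a figure from the study:

```python
# What annual per-capita growth rates do the quoted projections imply?
# The ~77-year horizon (roughly 2023-2100) is an assumption for this sketch.

def implied_annual_growth(total_increase_pct: float, years: int = 77) -> float:
    multiple = 1 + total_increase_pct / 100  # e.g. +750% means an 8.5x multiple
    return multiple ** (1 / years) - 1

print(f"+750% by 2100 -> ~{implied_annual_growth(750):.1%} per year")  # ~2.8%
print(f"+400% by 2100 -> ~{implied_annual_growth(400):.1%} per year")  # ~2.1%
# Even the pessimistic 5% tail of the cited outlier study still implies
# sustained positive per-capita growth under a 5ºC scenario.
```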
A lot of what you say in your response and in your article seems inconsistent—you make a point of saying that EAs ignore the experts but then dismiss the experts when that happens to be inconsistent with your preferred opinions. Examples:
Defending postcolonialism in global development
Your explanation of why Walmart makes money vs mainstream economics.
Your dismissal of all climate economics and the IPCC
‘Standpoint theory’ vs analytical philosophy
Your dismissal of Bayesianism, which doesn’t seem to be aware of any of the main arguments for Bayesianism.
Your dismissal of the g factor, which doesn’t seem to be aware of the literature in psychology.
The claim that we need to take on board Kuhnian philosophy of science (Kuhn believed that there has been zero improvement in scientific knowledge over the last 500 years)
Your defence of critical realism
Similarly, Cremer (life science and psychology) and Kemp (international relations) take Ord, MacAskill and Bostrom to task for straying out of their epistemic lane and having poor epistemics, but then go on in the same paper to offer casual ~1-page refutations of (amongst other things) total utilitarianism, longtermism and expected utility theory.
Your discussion of why climate change is a serious catastrophic risk kind of illustrates the point. “For instance, recent work on catastrophic climate risk highlights the key role of cascading effects like societal collapses and resource conflicts. With as many as half of climate tipping points in play at 2.7°C − 3.4°C of warming and several at as low as 1.5°C, large areas of the Earth are likely to face prolonged lethal heat conditions, with innumerable knock-on effects. These could include increased interstate conflict, a far greater number of omnicidal actors, food-system strain or failure triggering societal collapses, and long-term degradation of the biosphere carrying unforeseen long-term damage e.g. through keystone species loss.”
Bressler et al (2021) model the effects of ~3ºC on mortality and find that it increases the global mortality rate by 1%, on some very pessimistic assumptions about socioeconomic development and adaptation. It’s kind of true but a bit misleading to say that this ‘could’ lead to interstate conflict or omnicidal actors. Maybe so, but how big a driver is it? I would have thought that more omnicidal actors will be created by the increasing popularity of environmentalism. The only people who I have heard say things like “humanity is a virus” are environmentalists.
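For scale, a rough calculation on the quoted 1% figure; the baseline of ~60 million deaths per year worldwide is my assumption here, not a number from the paper:

```python
# Scale check on the quoted Bressler et al. (2021) result: a 1% increase
# in the global mortality rate. Baseline deaths/year is an assumed round
# figure (~60 million globally in recent years), not taken from the paper.

baseline_deaths_per_year = 60_000_000
mortality_rate_increase = 0.01  # +1%, as quoted above

extra_deaths = baseline_deaths_per_year * mortality_rate_increase
print(f"~{extra_deaths:,.0f} additional deaths per year")  # ~600,000
# A very serious harm, but a different order of claim from global
# food-system collapse or omnicide.
```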
Can you point me to the studies involving formal models that suggest that there will be global food system collapse at 3-4ºC of warming? I know that people like Lenton and Rockstrom say this will happen but they don’t actually produce any quantitative evidence and it’s completely implausible on its face if you just think about what a 3ºC world would be like. Economic models include effects on agriculture and they find a ~5% counterfactual reduction in GDP by 2100 for warming of 5ºC. There’s nothing missing in not modelling the tails here.
ok
What is the rationale for democratising? Is it for the sake of the intrinsic value of democracy or for producing better spending decisions? I agree it would be more democratic to have all EAs make the decision than the current system, but it’s still not very democratic—as you have pointed out, it would be a load of socially awkward anglophone white male nerds deciding on a lot of money. Why not go the whole hog and have everyone in the world decide on the money, which you could perhaps roughly approximate by giving it to the UN or something?
We could experiment with setting up one of the EA funds to be run democratically by all EAs (however we choose to assign EA status) and see whether people want to donate to it. Then we would get some sort of signal about how it performs and whether people think this is a good idea. I know I wouldn’t give it money, and I doubt Moskovitz would either. I’m not sure what your proposal is for what we’re supposed to do after this happens.
I actually think corporations are involved in collaborative mission-driven work, and your Mondragon example seems to grant this, though perhaps you are understanding ‘mission’ differently to me. The vast majority of organisations trying to achieve a particular goal are corporations, which are not run democratically. Most charities are also not run democratically. There is a reason for this. You explicitly said “Worker self-management has been shown to be effective, durable, and naturally better suited to collaborative, mission-oriented work than traditional top-down rule”. The problems of worker self-management are well-documented, with one of the key downsides being that it creates a disincentive to expand, which would also be true if EA democratised: doing so would only dilute each person’s influence over funding decisions. Another obvious downside is division of labour and specialisation, i.e. you would empower people without the time, inclination or ability to lead or make key decisions.
“Finally, we are not sure why you are so keen to repeatedly apply the term “left wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it.” Evidently from the comments I’m not the only one who picked up on this vibe. How many of the authors identify as right wing? In the post, you endorse a range of ideas associated with the left including: an emphasis on identity diversity; climate change and biodiversity loss as the primary risk to humanity; postcolonial theory; Marxist philosophy and its offshoots; postmodernist philosophy and related ideas; funding decisions should be democratised; and finally the need for EA to have more left wing people, which I take it was the implication of your response to my comment.
If you had spent the post talking about free markets, economic growth and admonishing the woke, I think people would have taken away a different message, but you didn’t do that because I doubt you believe it. I think it is important to be clear and transparent about what your main aims are. As I have explained, I don’t think you actually endorse some of the meta-level epistemic positions that you defend in the article. Even though the median EA is left wing, you don’t want more right-wing people. At bottom, I think what you are arguing for is for EA to take on a substantive left-wing environmentalist position. One of the things that I like about EA is that it is focused on doing the most good without political bias. I worry that your proposals would destroy much of what makes EA good.
I don’t disagree with what is written here but the tone feels a bit aggressive/adversarial/non-collegial IMHO.
This is not the first time I’ve heard this sentiment and I don’t really understand it. If SBF had planned more carefully, if he’d been less risk-neutral, things could have been better. But it sounds like you think other people in EA should have somehow reduced EA’s exposure to FTX. In hindsight, that would have been good, for normative deontological reasons, but I don’t see how it would have preserved the amount of x-risk research EA can do. If EA didn’t get FTX money, it would simply have had no FTX money ever, instead of having FTX money for a very short time.
‘it is career suicide to criticise diversity’ This seems seriously hyperbolic to me, though I agree that if you’re down on diversity, a non-negligible number of people will disapprove and assume you are right-wing/racist, and that could have career consequences. What’s your best guess as to the proportion of academics who have had their careers seriously damaged for criticizing diversity in the fairly mild way you suggest here (i.e. that, as a very generic thing, it does not improve the accuracy of group decision-making), relative to those who have made such criticisms?
What percentage of Chinese people have ever been arrested for subversion?
Strong agree with most of these points; the OP seems to not… engage on the object-level of some of its changes. Like, not proportionally to how big the change is or how good the authors think it is or anything?
EDIT: Oh! It was Rockström, but the actual quote is: “The richest one percent must reduce emissions by a factor [of] 30, while the poorest 50% can actually increase emissions by a factor [of] 3” from Johan Rockström at #COP26: 10 New Insights in Climate Science | UN Climate Change. There he is talking about fair and just carbon emissions adjustments. The other insights he listed have economic implications as well, if you’re interested. The accompanying report is available here.
The quote is:
“Action on climate change is a matter of intra- and intergenerational justice, because climate change impacts already have affected and continue to affect vulnerable people and countries who have least contributed to the problem (Taconet et al., Reference Taconet, Méjean and Guivarch2020). Contribution to climate change is vastly skewed in terms of wealth: the richest 10% of the world population was responsible for 52% of cumulative carbon emissions based on all of the goods and services they consumed through the 1990–2015 period, while the poorest 50% accounted only for 7% (Gore, Reference Gore2020; Oswald et al., Reference Oswald, Owen, Steinberger, Yannick, Owen and Steinberger2020).
A just distribution of the global carbon budget (a conceptual tool used to guide policy) (Matthews et al., Reference Matthews, Tokarska, Nicholls, Rogelj, Canadell, Friedlingstein, Thomas, Frölicher, Forster, Gillett, Ilyina, Jackson, Jones, Koven, Knutti, MacDougall, Meinshausen, Mengis, Séférian and Zickfeld2020) would require the richest 1% to reduce their current emissions by at least a factor of 30, while per capita emissions of the poorest 50% could increase by around three times their current levels on average (UNEP, 2020). Rich countries’ current and promised action does not adequately respond to the climate crisis in general, and, in particular, does not take responsibility for the disparity of emissions and impacts (Zimm & Nakicenovic, Reference Zimm and Nakicenovic2020). For instance, commitments based on Nationally Determined Contributions under the Paris Agreement are insufficient for achieving net-zero reduction targets (United Nations Environment Programme, 2020).”
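A quick arithmetic check on why those headline factors are at least plausible, using only the shares quoted above (population shares treated as fixed; note the 30x/3x figures concern the richest 1% and current emissions, a different slice than the cumulative 10%/50% shares):

```python
# Back-of-the-envelope per-capita arithmetic from the quoted shares of
# cumulative 1990-2015 emissions. Relative units only.

rich_pop, rich_emissions = 0.10, 0.52  # richest 10% -> 52% of emissions
poor_pop, poor_emissions = 0.50, 0.07  # poorest 50% -> 7% of emissions

rich_per_capita = rich_emissions / rich_pop  # 5.2
poor_per_capita = poor_emissions / poor_pop  # 0.14

print(f"Per-capita gap, richest 10% vs poorest 50%: "
      f"{rich_per_capita / poor_per_capita:.0f}x")  # ~37x
# A ~37x per-capita gap makes the "cut by 30x / grow by 3x" framing
# arithmetically unsurprising, though not identical in scope.
```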
Whether 1.5 is really in reach anymore is debatable. We’re approaching an El Niño year, and it could be a big one; we could see more heat in the atmosphere then, so let’s see how close we get to 1.5 GAST (global average surface temperature) then. It won’t be a true GAST value, I suppose, but there’s no way we’re stopping at 1.5 according to Peter Carter:
“This provides more conclusive evidence that limiting to 1.5C is impossible, and only immediate global emissions decline can possibly prevent a warming of 2C by 2050”
and it goes on from there… He prefers CO2e and radiative forcing to the carbon budget approach as mitigation assessment measures. It’s worth a viewing as well.
There’s quite a lot to unpack in just these two sources, if you’re interested.
Then there’s Al Gore at the World Economic Forum, who drops some truth bombs: “Are we going to be able to discuss… or putting the oil industry in charge of the COP … we’re not going to disguise it anymore”
OLD: I believe it was Rockström, though I’m looking for the reference, who said that citizens of developed countries needed to cut their per capita carbon production by 30X, while in developing countries people could increase it by 3X. That’s not a quote, but I think the numbers are right.
That is a counterpoint to the analysis made by some climate economists.
When I find the reference I’ll share it, because I think he was quoting an analysis from somewhere else, and that could be useful to your analysis given the sources you favor, even if you discount Rockstrom.