Hi John,
Thank you for your response, and more generally thank you for having been consistently willing to engage with criticism on the forum.
We’re going to respond to your points in the same format that you made them in for ease of comparison.
Should EA be distinctive for its own sake or should it seek to be as good as possible? If EA became more structurally similar to e.g. some environmentalist movements in some ways, e.g. democratic decision-making, would that actually be a bad thing in itself? What about standard-practice transparency measures? To what extent would you prefer EA to be suboptimal in exchange for retaining aspects that would otherwise make it distinctive?
In any case, we’re honestly a little unsure how you reached the conclusion that our reforms would lead EA to be “basically the same as standard forms of left-wing environmentalism”, and would be interested in you spelling this out a bit. We assume there are aspects of EA you value beyond what we have criticised, such as an obsessive focus on impact, our commitment to cause-prioritisation, and our willingness to quantify (which is often a good thing, as we say in the post), etc., all of which are frequently lacking in left-wing environmentalism.
But why, as you say, was so little attention paid to the risk FTX posed? One of the points we make in the post is that the artificial separation of individual “risks” like this is frequently counterproductive. A simple back-casting or systems-mapping exercise (foresight/systems-theoretical techniques) would easily have revealed EA’s significant exposure and vulnerability (disaster risk concepts) to a potential FTX crash. The overall level of x-risk is presumably tied to how much research it gets, and the FTX crash clearly reduced the amount of research that will get done on x-risk any time soon.
These things are related, and must be treated as such.
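To make the kind of exercise we have in mind slightly more concrete, here is a deliberately minimal sketch in Python with entirely made-up numbers (the sources and amounts are placeholders, not estimates of anyone's actual funding): list where a cause area's expected funding comes from, and flag any single source whose loss would remove a large share of it.

```python
# Minimal, illustrative funding-exposure map. All figures are made up;
# the point is the exercise of mapping dependencies, not the numbers.
expected_funding = {
    "Donor A (crypto-linked)": 40,  # hypothetical $M/year pledged to x-risk work
    "Donor B (foundation)": 50,
    "Small donors": 10,
}

total = sum(expected_funding.values())
for source, amount in expected_funding.items():
    share = amount / total
    flag = "  <-- single point of failure" if share >= 0.3 else ""
    print(f"{source}: {share:.0%} of expected funding{flag}")
```

Even a toy map like this makes the dependency structure visible; a real back-casting or systems-mapping exercise would of course go much further.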
Complex patterns of causation like this are just the kind of thing we are advocating for exploring, and something you have confidently dismissed in the recent past, e.g. in the comments on your recent climate post.
We agree that the literature does not all point in one direction; we cited the two sources we cited because they act as recent summaries of the state of the literature as a whole, which includes findings in favour of the positive impacts of e.g. gender and age diversity.
We concede that “essentially all dimensions” was an overstatement: sloppy writing on our part, of which we are sure there is more in the manifesto, and for which we apologise. Thank you for highlighting this.
On another note, equating “criticising diversity” in any form with “career suicide” seems like something of an overstatement.
We agree that there is a balance to be struck, and state this in the post. The issue is that EA uses seemingly neutral terms to mask orthodoxy, sits much too far towards one end of the value-alignment spectrum, and actively excludes many valuable people and projects because they do not conform to that orthodoxy.
This is particularly visible in existential risk, where EA almost exclusively funds TUA-aligned projects despite the TUA’s surprisingly poor academic foundations (inappropriate use of forecasting techniques, implicit commitment to outdated or poorly-supported theoretical frameworks, philosophy-of-science concerns about methodological pluralism, etc.), as well as the generally perplexed and unenthusiastic reception it gets in non-EA Existential Risk Studies.
Unfortunately, you are not in the best position to judge whether EA is hostile to criticism. You are a highly orthodoxy-friendly researcher (this is not a criticism of you or your work, by the way!) at a core EA organisation with significant name-recognition and personal influence, and your critiques are naturally going to be more acceptable.
We concede that we may have neglected the role of the seniority of the author in the definition of “deep” critique: it surely plays a significant role, if only due to the hierarchy/deference factors we describe. On examples of chilled works, the very point we are making is the presence of the chilling effect: critiques are not published *because* of the chilling effect, so of course there are few examples to point to.
If you want one example in addition to Democratising Risk, consider our post itself. The comments also hold several examples of people who did not speak up on particular issues because they feared losing access to EA funding and spaces.
We are not arguing that general intelligence is completely nonexistent, but that the conception commonplace within EA is highly oversimplified: the fact that factors in intelligence are correlated does not mean that everything can be boiled down to a single number. There is a growing body of robust critiques of the g concept (e.g. here), as well as factors that are typically neglected (see the Emotional Intelligence paper we cited). Hence, calling monodimensional intelligence a “central finding of psychological science”, implying it to be some kind of consensus position, is somewhat courageous.
In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise.
Our post also mentions other issues with intelligence-based deference: for instance, that being smart does not mean someone should be deferred to on all topics.
We are not arguing that every aspect of EA thought is determined by the preferences of EA donors, so the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal.
We concede that we may have neglected cultural factors: in addition to the “hard” money/power factors, there is also the “softer” fact that much of EA culture comes from upper-middle-class Bay Area tech culture, which indirectly causes EA to support things that are popular within that community, which naturally align with the interests of tech companies.*
We are glad that you agree on the spokesperson point: we were very concerned to see e.g. 80kH giving uncritical positive coverage to the crypto industry given the many harms it was already known to be doing prior to the FTX crash, and it is encouraging to hear signals that this sort of thing may be less common going forward.
We agree that getting climate people to think in EA terms can be difficult sometimes, but that is not necessarily a flaw on their part: they may simply have different axioms from ours. In other cases, we agree that there are serious problems (which we have also struggled with at times), but it is worth reminding ourselves that, as we note in the post, we too can be rather resistant to the input of domain experts. Some of us, in particular, considered leaving EA at one point because it was so (at times, frustratingly) difficult to get other EAs to listen to us when we talked about our own areas of expertise. All we are saying is that we are not perfect either.
Whilst we agree with you that we shouldn’t take only Rockstrom etc. as “the experts”, and do applaud your analysis arguing that existential catastrophe from climate change is unlikely, we don’t believe your analysis is particularly well-suited to the extremes we would expect in GCR/x-risk scenarios. It is precisely when such models fall down, when civilisational resilience is less than anticipated, when cascades like those in Richards et al. 2021 occur, etc., that the catastrophes we are worried about are most likely to happen. X-risk research concerns relatively low-probability, unprecedented scenarios that are captured badly by economic models and the like (much as transformative AI is captured badly by the markets), and we feel your analysis demands levels of likelihood and confidence from climate x-risk that are (rightfully, we think) not demanded of e.g. AI or biorisk.
We should expect IPCC consensus not to capture x-risk concerns, because (hopefully) the probabilities are low enough for it not to be something they majorly consider, and, as Climate Endgame points out, there has thus far not been lots of x-risk research on climate change.
More broadly, there have been notable criticisms of much of the climate economics field, especially its more optimistic end (e.g. this paper), but we concur that it is not something that needs to be debated here.
We did not say that differential technological development had not been subjected to peer review; we said that it has not been subjected to “significant amounts of rigorous peer review and academic discussion”, which is true; apologies if our wording implied otherwise. This may not be true forever: we are very excited about the discussion of the current Sandbrink et al. 2022 pre-print, for instance. All we were noting here is that important concepts in EA are often in their academic infancy (as you might expect from a movement built on relatively new concepts) and thus often have not received the level of academic scrutiny that internal discussion tends to assume.
You assume incorrectly, and apologies if this is also an issue with our communication. We never advocated for opening up the vote to anyone who asked, so fears in this vein are fortunately unsupported. We agree that defining “who gets a vote” is a major crux here, but we suggest that it is a question that we should try to answer rather than using it as justification for dismissing the entire concept of democratisation. In fact, it seems like something that might be suitable for consensus-building tools, e.g. pol.is.
Committing to and fulfilling the Giving Pledge for a certain length of time, working at an EA org, doing community-building work, donating a certain amount or fraction of your income, and actively participating at an EAG, along with many other criteria EAs could devise if we put some serious thought into the problem as a community, are all factors that could be combined to define some sort of boundary.
Given a somewhat costly signal of alignment, it becomes unlikely that someone would go “deep cover” in EA in order to have a very small chance of being randomly selected as one among multiple people in a sortition assembly deliberating on broad strategic questions about the allocation of a certain proportion of one EA-related fund or another.
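For a sense of the scale involved, a back-of-the-envelope calculation (all numbers are hypothetical and chosen purely for illustration):

```python
# Toy illustration with hypothetical numbers: how likely is any single eligible
# person to be drawn into a sortition assembly, assuming uniform random selection?
eligible_pool = 5000   # hypothetical number of people meeting the boundary criteria
assembly_size = 30     # hypothetical size of one sortition assembly

p_selected = assembly_size / eligible_pool
print(f"Chance of any one eligible person being drawn: {p_selected:.2%}")  # 0.60%
```

On assumptions like these, even a successful infiltrator would face long odds of ever sitting on the assembly, let alone steering its deliberations.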
We are puzzled as to how you took “collaborative, mission-oriented work” to refer exclusively to for-profit corporations. Naturally, e.g. Walmart could never function as a cooperative, because Walmart’s business model relies on its ability to exploit and underpay its workers, which would not be possible if those workers ran the organisation. There are indeed corporations (most famously Mondragon) that function on co-operative lines, as well as the Free Open-Source Software movement, Wikipedia, and many other examples.
Of most obvious relevance, however, are social movements like EA. If one wants a movement to reliably and collaboratively push for certain types of socially beneficial changes in certain ways, and to avoid becoming a self-perpetuating bureaucracy, it should be run collaboratively by those pushing for those changes in those ways, and should avoid cultivating a managerial elite – cf. the Iron Law of Institutions we mentioned, and more substantively the history of social movements; essentially every Leninist political party springs to mind.
As we say in the post, this was overwhelmingly written before the FTX crash, and the problems we describe existed long before it. The FTX case merely provides an excellent example of some of the things we were concerned about, and for many people shattered the perhaps idealistic view of EA that stopped so many of the problems we describe from being highlighted earlier.
Finally, we are not sure why you are so keen to repeatedly apply the term “left wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it.
* We actually touch on it a little: the mention of the Californian Ideology, which we recommend everyone in EA read.
Thanks for the detailed response.
I agree that we don’t want EA to be distinctive just for the sake of it. My view is that many of the elements of EA that make it distinctive have good reasons behind them. I agree that some changes in governance of EA orgs, moving more in the direction of standard organisational governance, would be good, though I think they would probably be quite different to what you propose and certainly wouldn’t be ‘democratic’ in any meaningful sense.
I don’t have much to add to my first point and to the discussion below my comment by Michael PJ. Boiled down, I think the point that Cowen makes stripped of the rhetoric is just that EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees with. It simply has no bearing on whether EAs are assessing existential risk correctly, and enormous equivocation on the word ‘existential risk’ doesn’t change that fact.
Since you don’t want diversity essentially along all dimensions, what sort of diversity would you like? You don’t want Trump supporters; do you want more Marxists? You apparently don’t want more right wingers even though most EAs already lean left. Am I right in thinking that you want diversity only insofar as it makes EA more left wing? What forms of right wing representation would you like to increase?
The problem you highlight here is not value alignment as such but value alignment on what you think are the wrong focus areas. Your argument implies that value alignment on non-TUA things would be good. Correspondingly, if what you call ‘TUA’ (which I think is a bit of a silly label—how is it techno-utopian to think we’re all going to be killed by technology?) is actually good, then value alignment on it seems good.
You argued in your post that people often have to publish pseudonymously for fear of censure or loss of funding and the examples you have given are (1) your own post, and (2) a forum post on conflicts of interest. It’s somewhat self-fulfilling to publish something pseudonymously and then use that as an argument that people have to publish things pseudonymously. I don’t think it was rational for you to publish the post pseudonymously—I don’t think you will face censure if you present rational arguments, and you will have to tell people what you actually think about the world eventually anyway. (btw I’m not a researcher at a core EA org any more.)
I don’t think the seniority argument works here. A couple of examples spring to mind. Leopold Aschenbrenner wrote a critique of EA views on economic growth, for which he was richly rewarded despite being a teenager (or whatever). The recent post about AI timelines and interest rates got a lot of support, even though it criticises a lot of EA research on timelines. I hadn’t heard of any of the authors of the interest rate piece before.
The main example you give is the reception to the Cremer and Kemp piece, but I haven’t seen any evidence that they actually did get the reception they claimed.
I’m not sure whether intelligence can be boiled down to a single number if this claim is interpreted in the most extreme way. But at least the single number of the g factor conveys a lot of information about how intelligent people are and explains about 40-50% of the variation in individual performance on any given cognitive task, a large correlation for psychological science! This widely cited recent review states “There is new research on the psychometric structure of intelligence. The g factor from different test batteries ranks people in the same way. There is still debate about the number of levels at which the variations in intelligence is best described. There is still little empirical support for an account of intelligence differences that does not include g.”
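For readers more used to correlations than to variance-explained figures, the 40-50% range above translates straightforwardly (the standard single-predictor identity, nothing more):

```latex
% Variance explained of 0.40-0.50 corresponds to a correlation of roughly 0.63-0.71
r = \sqrt{R^2}, \qquad \sqrt{0.40} \approx 0.63, \qquad \sqrt{0.50} \approx 0.71
```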
“In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise.” I don’t think this gambit is open to you—your post is so wide ranging that I think it unlikely that you all have expertise in all the topics covered in the post, ten authors notwithstanding.
Of course, there is more to life and to performance at work than intelligence.
As I mentioned in my first comment, it’s not true that the things that EAs are interested in are especially popular among tech types, nor are they aligned with the interests of tech types. The vast majority of tech philanthropists are not EA, and EA cause areas just don’t help tech people, at least relative to everyone else in the world. In fact, I suspect the majority view among EAs is that progress in virology and AI should be slowed down if not stopped. This is actively bad for the interests of people invested in AI companies and biotech. “the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal.” One of the headings in your article is “We align suspiciously well with the interests of tech billionaires (and ourselves)”. I don’t see how anything you have said here is a good defence against my criticism of that claim.
There are a few things to separate here. One worry is that EAs/me are neglecting the expert consensus on the aggregate costs of climate change: this is emphatically not true. The only models that actually try to quantify the costs of climate change all suggest that income per person will be higher in 2100 despite climate change. From memory, the most pessimistic study, which is a massive outlier (Burke et al), projects a median case of a ~750% increase in income per person by 2100, with a lower 5% probability of a ~400% increase, on a 5ºC scenario.
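To translate those cumulative figures into annual terms, here is some rough arithmetic (an ~80-year horizon is assumed for illustration; these are not numbers taken from the study itself):

```python
# Back-of-the-envelope conversion of cumulative income growth by 2100 into an
# implied average annual growth rate, assuming roughly 80 years from today.
years = 80

median_factor = 1 + 7.50  # "~750% increase" in income per person -> x8.5
lower_factor = 1 + 4.00   # "~400% increase" at the lower 5% bound -> x5.0

median_annual = median_factor ** (1 / years) - 1
lower_annual = lower_factor ** (1 / years) - 1

print(f"Median case: ~{median_annual:.1%} average annual growth")    # ~2.7%
print(f"5% lower bound: ~{lower_annual:.1%} average annual growth")  # ~2.0%
```

Even the lower bound of this outlier study thus corresponds to roughly 2% average annual growth in income per person.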
A lot of what you say in your response and in your article seems inconsistent—you make a point of saying that EAs ignore the experts but then dismiss the experts when that happens to be inconsistent with your preferred opinions. Examples:
Defending postcolonialism in global development
Your explanation of why Walmart makes money vs mainstream economics.
Your dismissal of all climate economics and the IPCC
‘Standpoint theory’ vs analytical philosophy
Your dismissal of Bayesianism, which doesn’t seem to be aware of any of the main arguments for Bayesianism.
Your dismissal of the g factor, which doesn’t seem to be aware of the literature in psychology.
The claim that we need to take on board Kuhnian philosophy of science (Kuhn believed that there has been zero improvement in scientific knowledge over the last 500 years)
Your defence of critical realism
Similarly, Cremer (life science and psychology) and Kemp (international relations) take Ord, MacAskill and Bostrom to task for straying out of their epistemic lane and having poor epistemics, but then go on in the same paper to offer casual ~1 page refutations of (amongst other things) total utilitarianism, longtermism and expected utility theory.
Your discussion of why climate change is a serious catastrophic risk kind of illustrates the point. “For instance, recent work on catastrophic climate risk highlights the key role of cascading effects like societal collapses and resource conflicts. With as many as half of climate tipping points in play at 2.7°C − 3.4°C of warming and several at as low as 1.5°C, large areas of the Earth are likely to face prolonged lethal heat conditions, with innumerable knock-on effects. These could include increased interstate conflict, a far greater number of omnicidal actors, food-system strain or failure triggering societal collapses, and long-term degradation of the biosphere carrying unforeseen long-term damage e.g. through keystone species loss.”
Bressler et al (2021) model the effects of ~3ºC on mortality and find that it increases the global mortality rate by 1%, on some very pessimistic assumptions about socioeconomic development and adaptation. It’s kind of true but a bit misleading to say that this ‘could’ lead to interstate conflict or omnicidal actors. Maybe so, but how big a driver is it? I would have thought that more omnicidal actors will be created by the increasing popularity of environmentalism. The only people who I have heard say things like “humanity is a virus” are environmentalists.
Can you point me to the studies involving formal models that suggest that there will be global food system collapse at 3-4ºC of warming? I know that people like Lenton and Rockstrom say this will happen but they don’t actually produce any quantitative evidence and it’s completely implausible on its face if you just think about what a 3ºC world would be like. Economic models include effects on agriculture and they find a ~5% counterfactual reduction in GDP by 2100 for warming of 5ºC. There’s nothing missing in not modelling the tails here.
ok
What is the rationale for democratising? Is it for the sake of the intrinsic value of democracy or for producing better spending decisions? I agree it would be more democratic to have all EAs make the decision than the current system, but it’s still not very democratic—as you have pointed out, it would be a load of socially awkward anglophone white male nerds deciding on a lot of money. Why not go the whole hog and have everyone in the world decide on the money, which you could perhaps roughly approximate by giving it to the UN or something?
We could experiment with setting up one of the EA funds to be run democratically by all EAs (however we choose to assign EA status) and see whether people want to donate to it. Then we would get some sort of signal about how it performs and whether people think this is a good idea. I know I wouldn’t give it money, and I doubt Moskovitz would either. I’m not sure what your proposal is for what we’re supposed to do after this happens.
I actually think corporations are involved in collaborative mission-driven work, and your Mondragon example seems to grant this, though perhaps you are understanding ‘mission’ differently to me. The vast majority of organisations trying to achieve a particular goal are corporations, which are not run democratically. Most charities are also not run democratically. There is a reason for this. You explicitly said “Worker self-management has been shown to be effective, durable, and naturally better suited to collaborative, mission-oriented work than traditional top-down rule”. The problems of worker self-management are well-documented, one of the key downsides being that it creates a disincentive to expand, which would also be true if EA democratised: expanding would only dilute each person’s influence over funding decisions. Another obvious downside is the loss of division of labour and specialisation: you would empower people without the time, inclination or ability to lead or make key decisions.
“Finally, we are not sure why you are so keen to repeatedly apply the term “left wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it.” Evidently from the comments I’m not the only one who picked up on this vibe. How many of the authors identify as right wing? In the post, you endorse a range of ideas associated with the left, including: an emphasis on identity diversity; climate change and biodiversity loss as the primary risk to humanity; postcolonial theory; Marxist philosophy and its offshoots; postmodernist philosophy and related ideas; the democratisation of funding decisions; and finally the need for EA to have more left wing people, which I take it was the implication of your response to my comment.
If you had spent the post talking about free markets, economic growth and admonishing the woke, I think people would have taken away a different message, but you didn’t do that because I doubt you believe it. I think it is important to be clear and transparent about what your main aims are. As I have explained, I don’t think you actually endorse some of the meta-level epistemic positions that you defend in the article. Even though the median EA is left wing, you don’t want more right wing people. At bottom, I think what you are arguing for is for EA to take on a substantive left wing environmentalist position. One of the things that I like about EA is that it is focused on doing the most good without political bias. I worry that your proposals would destroy much of what makes EA good.
I don’t disagree with what is written here but the tone feels a bit aggressive/adversarial/non-collegial IMHO.
“A simple back-casting or systems-mapping exercise (foresight/systems-theoretical techniques) would easily have revealed EA’s significant exposure and vulnerability (disaster risk concepts) to a potential FTX crash. The overall level of x-risk is presumably tied to how much research it gets, and the FTX crash clearly reduced the amount of research that will get done on x-risk any time soon.” This is not the first time I’ve heard this sentiment and I don’t really understand it. If SBF had planned more carefully, if he’d been less risk-neutral, things could have been better. But it sounds like you think other people in EA should have somehow reduced EA’s exposure to FTX. In hindsight, that would have been good, for normative deontological reasons, but I don’t see how it would have preserved the amount of x-risk research EA can do. If EA didn’t get FTX money, it would simply have had no FTX money ever, instead of having FTX money for a very short time.