I think my problem with a lot of this genre of post (neartermism > longtermism) is that the weird EAs donate much more? I would hazard a guess that "Berkeley polycules" have moved much more money for malaria nets, while working on AI safety and/or earning to give in high-income jobs, than most of the "normie EAs" at their office jobs. Also, you can just donate and be a neartermist. Lots of people do this; you don't need to brag about it or care about optics.
I think you are overstretching the claims I'm making here. Rather, I am saying that:
The supposed clear split between longtermist and neartermist action doesn't reflect how people behave: most EAs act in both regards, and the Bay Area memeing is unnecessarily divisive and misunderstands people's motivations.
At no point do I excuse, nor do I think most people excuse, the sexual abuse that is all too common.
I think people need to extend more good faith when wondering why people shifted in their jobs from things like GHW to AI safety/CB/governance, rather than presume it came from some source of greed (though I have no doubt some did it out of self-interest, and I cannot speak for them).
I think you're confused about what "currently lead EA Funds" means. It doesn't mean they're a funder; it means they manage and oversee the grant-giving process. They probably have the most vested interest, from a personal point of view, in knowing what their grantees are doing.
I think the commenter viewed you as providing the bulk of donations to EAIF (à la Moskovitz) and therefore not wanting to see what's going on? At least that's how I read the comment.
What sort of institutional safeguards would enable you to share the full extent of what occurred, assuming you wanted to share and it would help your healing (beyond what you've already written)?
I'm confused about what you mean by a split, because a lot of the things you call splits (e.g. financial splits) are already done in EA via worldview diversification. Is your argument that OpenPhil should break into OpenPhil Longtermism and OpenPhil GHW (which they already kind of do, with two CEOs)?
I'm also left confused about what you mean by an equal distribution of money, because a lot of your problems are optics-based but your solutions are financial splits.
I think the problem here is that it makes a category mistake about how the move to longtermism happened. It wasn't any success or failure metric that moved things, but the underlying arguments becoming convincing to people. For example, Holden Karnofsky moved from founding GiveWell to heading the longtermist side of OpenPhil and focusing on AI.
The people who made neartermist causes successful chose of their own accord to move to the longtermist side. They aren't being coerced away. GHW donations are growing in absolute terms. The feeling that there isn't enough institutional support isn't a funding problem; it's a weird vibes problem.
Additionally, I don't even know if people outside the doomiest circles would say longtermism has had a negative impact, given it also accelerated alignment organisations (obviously contingent on your optimism about solving alignment). Most people think there's decent headway insofar as Greg Brockman is talking about alignment seriously and this salience doesn't spiral into a race dynamic.
Is the idea of an EA split to force Holden back to GiveWell? Is it to make Ord and MacAskill go back to GWWC? I just find these posts kind of weird in that they imagine people being pushed into longtermism, forgetting that a lot of longtermists were neartermists at one point and made the choice to switch.
Sorry, Constellation is not a secret, exclusive office (I've been invited, and I'm incredibly miscellaneous and from New Zealand).
It's a WeWork, and from my understanding it doesn't accept more applications because it's full. It's unlikely Claire gave a grant to Buck since (a) as you said, this is a well-known relationship, and (b) the grantmakers for AI safety are two people who are not Claire (Asya Bergal and Luke Muehlhauser).
From personal experience, it's actually really easy to talk about diversity in EA? I literally chatted with someone, now a friend, where I said I believe in critical race theory and they responded that wokism hurt a lot of their friends, and now we're friends and talk about EA and rationalism stuff all the time. I find most rationalists are so isolated from the blue tribe nowadays that they treat diversity chat with a lot of morbid curiosity, especially if you can justify your beliefs well.
Blacklists, as I understand them, have really high bars and are usually used for assault or when a person is a danger to the community. I also think not inviting someone to a rationality retreat because you don't want to hang out with them is fine. I would rather die than do circling with someone I disliked, tbh (I still don't know what circling is, but I just assume every rationality retreat is "read the CFAR handbook" or circling at this point).
This breaks the weird default of treating everything as good faith, but this post reads to me as written by someone who isn't actually familiar with EA mechanisms and is grasping at straws due to their anxieties about the EA space, specifically the Bay Area. Many of the details hit weird trip wires for me that make me think something's up (e.g. most of the arguments are poorly made when stronger empirics are available).
Edit: I confused Constellation and Lightcone. I still maintain it's just an office and people need to chill out about the status anxiety of it.
Yeah, I should have written this more clearly. It's making a few claims:
1. Rationalist retreat-type things often require a level of intimacy and trust, which means it's probably ok for them to be more sensitive about who they invite.
2. Often a lot of young EAs have status anxiety about being invited to things that actually matter little for their impact (e.g. circling). I'm signalling that these anxieties are overblown and that these social activities are often overestimated in both their enjoyment and their status.
Disclaimer: I read it a while ago and this is a quick reproduction from memory. My memory of some of the weirder chapters (the Christianity one, for instance) is also bad. These also do not express my personal opinions, but rather steelmans and reframings of the book.
I'm from the continental tradition and have read a lot of the memeplex (e.g. Donna Haraway, Marcuse, and Freire). I'll try to make this short summary more EA-legible:
1. The object-level part of its criticism draws upon qualitative data from animal activists who take a higher risk of failure but more abolitionist approaches. The criticism is that the marginal change pushed by EA makes abolition harder because of: (a) a lack of coordination with, and respect for, the animal rights activists on the left, and specifically the history there, (b) how funding distorts the field, eats up talent, and competes against the left, and (c) how activists have to bend themselves to be epistemically scrutable to EA. An EA steelman of a similar line of thinking is the EAs who are strongly against working for OpenAI or DeepMind at all because it safety-washes and pushes capabilities anyway. The criticism here is that the way EA views problems means EA will only go towards solutions that are piecemeal rather than transformative. A lot of Marxists felt similarly about welfare reform, in that it quelled the political will for "transformative" change to capitalism.
For instance, they would say a lot of companies are pursuing RLHF as AI safety not because it's the correct way to go but because it's the easiest low-hanging fruit (even if it produces deceptive alignment).
2. Secondly, there is a values-based criticism in the animal rights section that EA is too utilitarian, which leads to: (a) preferring charities that lessen animal suffering in narrow senses, and (b) when EA does take risks with animal welfare, being more technocratic and therefore prone to market hype around things like alternative proteins.
A toy example that might help: something like cage-free eggs would fall under (a) because it makes the egg company better able to dissolve criticism, and under (b) because it reflects a lack of imagination about ending egg farming overall and sets up a false counterfactual.
3. Thirdly, on global poverty it makes a few claims:
a. The motivation towards quantification is a selfish one, citing Herbert Marcuse's arguments about how neoliberalism has captured institutions. Specifically, the argument criticises Ajeya Cotra's 2017 talk about effective giving, framing it as serving a selfish internal psychological need for quantification and for finding comfort in that quantification.
b. The counterfactual space of possible actions against poverty is much larger than EA considers, because EA doesn't account for the amount of collective action possible. The author sets out consciousness-raising examples of activism that at first glance look "small" and "intractable" but spark big upheavals (funnily, naming Greta Thunberg among Black social justice activists, which offended my sensibilities).
c. EA runs interference for rich people, providing them cover from potential political action against them (probably the weakest claim of the bunch).
I think a lot of the anti-quantification-type arguments that EAs thumb their noses at should be reframed, because they are not as weak as they seem, nor as uncommon within EA. For instance, the argument that SPARC and other sorts of community-building efforts are successful because they introduce people to transformative ideas: it's not any specific activity but the combination of community and vibes, broadly construed, that leads to really talented people doing good.
4. Longtermism doesn't get much of a mention because of publishing time. There's just a meta-criticism that the switch from neartermism to longtermism reproduces the same pattern of thinking and the same subtle intellectual moves. E.g. EAs used to say activism and systemic change were too moonshot, but now they're doing longtermism.
I feel like a lot of the cruxes in how you receive these criticisms depend on what memeplex you buy into. I think if people are pattern-matching to Torres-type hit pieces, they're going to be pleasantly surprised. These are real dyed-in-the-wool leftists. It's not so much weird gotchas targeted at getting retweets from Twitter beefs and libs; it's written for leftist students, and it seems more targeted towards the animal activism side and, in parts, towards specific instances of left animal activists clashing with EA.
The Black Vegans one is about different consumer price elasticities between racial groups along various axes.
Queer Eye on the EA Guys is about different measures of animal suffering and coordination between EA and animal activists broadly.
I also expected Chapter 11 to be about degrowthers, but it's about regulatory capture and the Jevons paradox.
Moreover, I think naming conventions for left-wing texts just have this effect; it depends on how the audience pattern-matches, I guess. Also, Queer Eye on the EA Guys is just a funny pun. It was an interesting read for me personally, at least; I don't think it changed any of my opinions or actions.
I think it's wrong to think of it as criticism in the way EA thinks about criticism, i.e. "tell me X thing I'm doing is wrong so I can fix it"; it's better read as highlighting a set of existing fundamental disagreements. I think the book is targeted at an imagined left-wing young person who the authors think would be "tricked" into EA because they misread certain claims that EA puts forward. It's a form of memeplex competition. Moreover, I do think some of the empirical details about the effect ACE has on the wider community can inform a lot of EAs about coordinating with wider ecosystems in cause areas and about common communication failure modes.
I dislike this reasoning because it feels deceptive? Like I don’t think we should push global health and well-being jobs to make people more aware of EA and 80k. We should communicate the correct information about them and let people choose while letting them know the full range of trade-offs.
As above, in response to Chris, you kind of town-and-castle (I'm explicitly trying to move away from "motte and bailey" because I can never remember which is which) to "being less explicit on cause prioritisation means more people working on x-risk causes", etc. I don't think this is something EA should do, on principle.
Yep!
Oh sorry, I thought both were definitely WeWorks? I'll edit that in.
I think this sort of meta-post is directionally correct but misunderstands how EA solicits criticism and how the sequence of steps works in soliciting more of it. The model produced here is roughly: here is a cluster of cultural norms within EA communication that suggests there is a lot of existing difficulty in criticising EA. But I think this abstracts away key parts of how EA interacts with criticism.
On EA soliciting criticism
EA solicits criticism either internally through red-teaming (e.g. open discussion norms, disagreeability, high decoupling) or through specific contests (e.g. the FTX AI criticism contest, the OpenPhil AI criticism contest, the EA criticism contest, the GiveWell criticism contest). These contests and the use of payment change the type of criticism given, in ways that lead to discontent among both the critic and the receiver of criticism.
Firstly, EAs see contests as "skin in the game", so to speak, with regard to criticism, because you are paying your own money for it. However, this is a very naive understanding of how critics interpret these prizes:
These prizes are oftentimes so large that they are interpreted as distorting the field overall, especially academia. Academia is notorious for grad students being underpaid and fellowships being incredibly competitive. For instance, if you're an AI researcher who cares a lot about algorithmic bias, you would see potential collaborators moving to AI safety and alignment as a distraction, and feel bitter about it. Thus, it is not a criticism contest; it is field-building at that point[1].
Because the prizes have judging panels made up of EAs themselves, they are seen as a way to manipulate the Overton window of criticism and create a controlled opposition. This is where I think you mistake a terminology problem for a teleological problem in the line between criticism and further research. Take for example the top prize, which under your definition is straightforward research (I agree). However, the problem is not eliciting research (this probably functions similarly to how innovation prizes work), but that the aim of these prizes gets Goodharted: the author of the winning criticism got contacted by GiveWell. The competition pool then just becomes young EAs chasing applause lights[2], and the contests functionally become highly paid work tasks. But notably, only the person who wins gets the prize, so a lot of people got nothing for criticising EA (which makes even a time/cost calculus really bad). And there is a subset of critics who will never be hired by EA organisations nor want EA money.
These problems with the solicitation of criticism mean that preconditions 1-6 are path-dependent on the way the engagement happens. It's not a question of whether people can criticise EA with reasoning transparency, but of how EA elicits that criticism out of them. Theoretically an EA could hit 1-6, but the overarching structure of outreach is "look at this competition we're running with huge money attached". Thus, I think EAs focus too much on the interpersonal social model of a lack of social cachet or epistemic legibility, when those flow downstream from the elephant in the room: money.
On the purpose of criticism
To write defensively and front-load this preemption: I do think EAs need to engage less with bad-faith criticisms and recognise bad faith when they see it. However, I think it's useful to understand the set point of distrust and its relationship to the lack of criticism:
EA Judo means the arguments strengthen EA, but that's not always in the best interest of the author. For instance, a lot of older EAs will recount how their criticism landed them a job or higher esteem in the community. However, some critics do not want their criticisms to strengthen EA. One of the biggest criticisms that EAs have taken on board is about risk tolerance and systemic change (more hits-based giving and more political spending). But this criticism came from the Oxford left in 2012, and the resultant political spending in EA was FTX spending in Democratic primaries that theoretically made politics even harder for the left.
Inasmuch as there are cruxy critiques, they're often used internally for jockeying over memeplex ideas and are incredibly unclear (see: ConcernedEAs). There's often a slight contradiction here in that the deepest critiques are often ones that don't make sense coming from the people they come from (e.g. a lot of the more leftist critiques just make me wonder why the author doesn't join the DSA, yet the author wants to post anonymously so they can theoretically take an EA job). A lot of these critiques have the undercurrent of "EA should do my pet project" and imagine EA resources without EA thinking. This also sets up a failure mode of people not understanding that "EAs love criticism" is not equivalent to "EAs will change their mind because of the criticism".
[1] See this post, which is the very example leftists are often scared about.
[2] I think I'll get a lot of disagreement here, but I want to clarify that EA has its own set of applause lights contextual to the community. For instance, a lot of college students in EA say they have short timelines, and then when I ask what their median is, it turns out to be the Bio Anchors median (also, people should just say their median; this is another gripe). "Short timelines" has just become shorthand for "I'm hardcore and dedicated" about AI safety.
Strong downvoted because it's obviously a troll question on its face, but it's also rhetorically dumb because you asked it of an Israeli math olympiad medalist.
To be clear, I strong-downvoted your comment for the reasons I posted. My main disagreement with Hanania's post is that it aims to be persuasive rather than explanatory, in a way that I think is the hallmark of politics being a mind-killer. I also downvote tribalist "wokism" posts for the same reason.
I also think anti-wokism is detrimental to object-level goals with regard to alignment, as we've already seen the discourse be eaten alive in a miasma of "alignment is just about censorship" and "EAs are right wing and appropriated the work of bias researchers and called it AI safety".
I do empathise that your comment came from a place of frustration, but I nonetheless don't think it's a productive one.
I just want to register a meta-level disagreement with this post, which is that your recommendations seem like really bad epistemics. I don't think we should heuristic and information-cascade ourselves to death as a community, but should instead build good gears-level understandings for forecasting AI progress.
You claim that AI accelerationist arguments act as soldiers, but you are literally deploying arguments as soldiers in this post!
You recommend terrible, weird, gossipy, anti-agency mechanisms instead of pro-agency actions like working on safety, upskilling, and field-building.
You make a lot of arguments by negation that feel like weird sleights of hand. For instance, you say "We don't know of any major AI lab that has participated in slowing down AGI development, or publicly expressed interest in it", but OpenAI's charter literally has the assist clause (regardless of whether or not you believe it's a promise they will keep, it exists).
To be clear, I think there are good arguments for short timelines (median 5-10 years), but you don't actually make them here[1]. What you do instead is:
Say you could express technical disagreement but won't name any empirical examples/obstacles because that would be infohazardous.
Make a lot of heuristics-based arguments that can't be verified or prodded because they rest on "private conversations", which is I guess fine, but then what do you want people to do with that?
I think people should think for themselves and engage with the arguments and models others provide for timelines and threat models, but this post doesn't do that. It just directionally vibes a high p(doom) with short timelines and tells people to panic and gossip.
I'm just going to register a disagreement that I think comes from a weird intersection of opinions. I despise posting online, but here goes. I think this post is full of applause lights and, quite frankly, white psychodrama.
I'm a queer person of colour and quite left-wing. I really disliked Bostrom's letter, but I still lean hard on epistemics being important. I dislike Bostrom's letter because I think it expresses an untrue belief and equivocates out of grey-tribe laziness. But reading a lot of how white EAs write about being against the letter, it sounds like you're more bothered by issues of social capital and optics for yourself, not by any real impact reason.
I believe this for two reasons:
1. This post bundles together the Bostrom letter and Wytham. I personally think Wytham quite possibly could be negative EV (mostly because I think Oxford real estate is inflated and the castle is aesthetically ugly and not conducive to good work being done). But the wrongness in the Bostrom letter isn't that it looks bad. I am bothered by Bostrom holding a wrong belief, not by him holding a belief that is optically bad.
2. You bundled AI safety into later discussions about this. But there are lots of neartermist causes that are really weird, e.g. shrimp welfare. Your job as a community builder isn't to feel good and be popular; it's to truth-seek and present morally salient facts. The fact that AI safety is the hard one for you speaks to a cohort difference, not to anything particular about these issues. For instance, in many Silicon Valley circles AI safety makes EA more popular!
Lastly, I don't think the social capital people actually follow the argument through to the full implications of what it means for EA to become optics-aware. Do we now go full Shorism and make sure we have white men in leadership positions so we're socially popular? The discussion devolves to the meta-level of epistemics because the object-level discussion is often low quality, and so object-level utilitarian calculus barely gets to exist when we're doing group decision-making. It all just seems like a way to descend into respectability politics and ineffectiveness. I want to be part of a movement that does what's right and true, not what's popular.
On a personal, emotional note, I can't help but wonder how the social capital people would have acted in previous eras towards great queer minds. It was just a generation ago that queer people were socially undesirable and hidden away. If your ethics are so sensitive to the feelings of the public, I frankly do not trust them. I can't help but feel that a lot of the expressions of fear by mostly white EAs in these social capital posts are status anxieties about their inability to sit around the dinner table and brag about their GiveWell donations.