My read is that it wasn’t the statistics they got hammered on; it was misrepresenting other people’s views of them as endorsements, e.g. James Snowden’s views. I will also say the AI side does get this criticism, but not on cost-effectiveness: rather on things like culture war (AI Ethics vs. AI Safety) and doom discourse about techniques (e.g. working in a big company vs. a more EA-aligned research group, and the RLHF discourse).
I think this is an actual position. It’s the stochastic parrots argument, no? Just recently a post by a cognitive scientist espoused this belief.
What’s your rate of success after pushback? Do organisations usually take the more junior person as a speaker?
This strongly resonated with me, especially after taking part in XPT. I think I set my expectations really high and got frustrated with the process; I now take a relaxed approach and treat forecasting as a fun thing I do on the side rather than something I actively want to take part in as a community member.
I always thought the average model for “don’t let AI Safety enter the mainstream” was something like (1) you’ll lose credibility and be called a loon and (2) it’ll drive race dynamics and salience. Instead, I think the argument AI Ethics makes is “these people aren’t so much loons as they are just doing hype marketing for AI products in the status quo and draining counterfactual political capital from real near-term harms”.
Great question that prompted a lot of thinking. I think my internal model looks like this:
On the meta level, it feels as if EAs have a systematic error in their model that underestimates public distrust of EA actions, which constrains the action space and our collective sense-making of the world.
I think legacy media organisations buy into the framing solidly, especially organisations whose role is policing others, such as the Columbia Journalism Review (CJR).
Just in my own life, I’ve noticed that a lot of the “elite” sphere friends I have at Ivies, in competitive debating, etc. are much more apprehensive towards EA and AI Safety discourse in general, and they attribute it to this frame. Specifically, I think of the idea of inherency from policy debating: people look for frames that explain the underlying barrier to, and motivation for, change.
I think this is directly bad for cooperation on the governance side (e.g. a lot of the good research on timelines and regulation is currently being done by people with AI Ethics sympathies).
I think EAs underestimate how many technically gifted people who could be doing technical research are put off by EAs who throw around philosophical ideas ungrounded in technical acumen. This frame neatly compounds that aversion.
The fact that EAs have been so caught off guard by the “AI x-risk is a distraction” argument, and its stickiness in the public consciousness, should be worrying for how well calibrated we are about AI governance interventions working the way we collectively think they will. This feels like another Carrick Flynn situation. I might write up an ITT for the AI Ethics side; I think there’s a good analogy to an SSC post that EAs generally like.
I think Reddit moderation is probably the wrong benchmark. Firstly, because the tail risk of unpaid moderation is really bad (e.g. the base rate of moderator-driven meltdowns in big subreddits is really high). Secondly, I just don’t think we should underpay people in EA because (a) it creates financial barriers to entry to EA that have long-term effects (e.g. unpaid internships in publishing have made the wider labour market for journalism terrible) and (b) it’ll create far more informal barriers, which means we lean on informal relationships in EA even more.
Well, OpenAI just announced that they’re going to spend 20% of their compute on alignment over the next four years, so I think it’s paid off prima facie.
I mean, I think it’ll come in waves. As I said in my comment below, when the FTX Future Fund was up and regrants abounded, I had many people around me fake the EA label, complete with hilarious epistemic tripwires. Then, when FTX collapsed, those people went quiet. I think as AI Safety gets more prominent this will happen again in waves. I know a few humanities people pivoting to talking about AI Safety, and AI bias people thinking about how to get grant money.
I think the premise that EA will be over because of AI Safety community building is confused, given this is on the margins and EA movement building literally still exists? There’s literally a companion piece to this by Jessica McCurdy, on the EA community building side, about AI Safety-specific community building. I also don’t think this piece makes the case for every resource to go to AI Safety community building.
To give more colour to this: during the hype of the FTX Future Fund, a lot of people called themselves EAs to show value alignment and try to get funding, and it was painfully awkward and obvious. I think the feeling you’re naming is something like a fair-weather EA effect that dilutes trust within the community and the self-commitment of the label.
I think the upside is that if it is “generational”, people will grow up and become more agentic as long as we foster the culture. I was remarking to a friend that it’s interesting how people don’t want to get up and learn to code to help with AI Safety (given the rates of AI doomerism), yet in early EA people were willing to go into quant trading at seemingly higher rates to earn to give.
I’d love to get his take on coalition differences with the AI Ethics people, since he has had Alondra Nelson and other more mainstream critics of AI Safety on the Ezra Klein Show.
Moreover, I’d like his opinion on the distraction arguments put forward by AI Ethics people.
I just do not think this is true? UGAP very clearly states the amount they pay. I think people are upvoting because it confirms their priors, but I think you just made up “hearing” this. It’s giving Trump’s “many people are saying” and “many such cases”, and I wish we wouldn’t do this on the forum.
The implicit assumption I want to push back on here is that we want back the old trust systems that bordered on hero worship. I do not long for the days when people commented on Will MacAskill’s biceps and were incredibly deferential instead of questioning. I don’t think that’s good for community health or the impact of EA.
I think it’s also that Conjecture is explicitly short-timelines and high-p(doom), which means a lot of the criticisms implicitly feel like criticisms of rationalists.
Although I upvoted because I think these critiques are really healthy, the visceral feeling of reading this post was quite different to the first one. This one feels more judgemental on a personal level and gave me information that felt too privacy-violating, though I can’t quite articulate why. A lot of it feels like dunks on Conjecture for being young, ambitious, and for failing at times (I will note I know this is not the core of the critique; it just FEELS that way).
I just do not feel like the average forum user is in a place to adjudicate the interpersonal issues named in the Conjecture post. I also feel confused about how to judge a VC-funded entity, given that, as both the critique and the response note, these are often informal texts and Slack channel messages.
I’d guess the funding mechanism has to be somewhat different given the incentives at play with AI x-risk. Specifically, the Omega critiques do not seem bottlenecked by funding but by time and anonymity in ways that can’t be solved with money.