On the way out of EA.
🔸 10% Pledger.
Likes pluralist conceptions of the good.
Dislikes Bay Culture being in control of the future.
> Says he's stuck in bed and only going to take a stab
> Posts a thorough, thoughtful, point-by-point response to the OP in good faith
> Just titotal things
- - - - - - - - - - - - - - -
On a serious note, as Richard says it seems like you agree with most of his points, at least on the "EA values/EA-as-ideas" set of things. It sounds like atm you think that you can't recommend EA without recommending the speculative AI part of it, which I don't think has to be true.
I continue to appreciate your thoughts and contributions to the Forum and have learned a lot from them, and given the reception you get[1] I think I'm clearly not alone there :)
You're probably by far the highest-upvoted person here who considers themselves EA-critical? (though maybe Habryka would also count)
Apologies for not being clear! I'll try to be a bit clearer here, but there's probably a lot of inferential distance and we're covering some quite deep topics:
> Supposing we do in fact invent AGI someday, do you think this AGI won't be able to do science? Or that it will be able to do science, but that wouldn't count as "automating science"?
> Or maybe when you said "whether 'PASTA' is possible at all", you meant "whether 'PASTA' is possible at all via future LLMs"?
So on the first section, I'm going for the latter and taking issue with the term "automation", which I think speaks to a mindless, automatic process of achieving some output. But if digital functionalism were true, and we successfully made a digital emulation of a human who contributed to scientific research, I wouldn't call that "automating science"; instead, we would have created a being that can do science. That being would be creative and agentic, with the ability to formulate its own novel ideas and hypotheses about the world. It'd be limited by its ability to sample from the world, design experiments, practice good epistemology, wait for physical results, etc. It might be the case that some scientific research happens quickly, and then subsequent breakthroughs happen more slowly, etc.
My opinions on this are also highly influenced by the works of Deutsch and Popper, who essentially argue that the growth of knowledge cannot be predicted; since science is (in some sense) the stock of human knowledge, and since what cannot be predicted cannot be automated, scientific "automation" is in some sense impossible.
> Maybe you're assuming that everyone here has a shared assumption that we're just talking about LLMs...but I bet that if you were to ask a typical senior person in AI x-risk (e.g. Karnofsky) whether it's possible that there will be some big AI paradigm shift (away from LLMs) between now and TAI, they would say "Well yeah duh of course that's possible," and then they would say that they would still absolutely want to talk about and prepare for TAI, in whatever algorithmic form it might take.
Agreed, AI systems are larger than LLMs, and maybe I was being a bit loose with language. On the whole, though, I think much of the case made by proponents for the importance of working on AI Safety assumes that the current paradigm + scale is all you need, or rests on works that assume it. For instance, Davidson's Compute-Centric Framework model for OpenPhil states right on its opening page:
> In this framework, AGI is developed by improving and scaling up approaches within the current ML paradigm, not by discovering new algorithmic paradigms.
And I get off the bus with this approach immediately because I don't think that's plausible.
As I said in my original comment, I'm working on a full post on the discussion between Chollet and Dwarkesh, which will hopefully make the AGI-sceptical position I'm coming from a bit more clear. If you end up reading it, I'd be really interested in your thoughts! :)
Yeah, I definitely don't mean "brains are magic"; humans are generally intelligent by any meaningful definition of the words, so we have an existence proof that general intelligence can be instantiated in some form.
I'm more sceptical that science can be "automated", though; I think progressing scientific understanding of the world is in many ways quite a creative and open-ended endeavour. It requires forming beliefs about the world, updating them in response to evidence, and sometimes making radical new shifts. It's essentially the epistemological frame problem, and I think we're way off a solution there.
I think I have a similar big crux with Aschenbrenner when he says things like "automating AI research is all it takes": I think I disagree with that anyway, but automating AI research is really, really hard! It might be "all it takes" because that problem is already AGI-complete!
Thanks JP! No worries about the documentation, I think the main features and what they correspond to on the Forum are fairly easy to interpret.
As for the redirection, I actually queried the bot site directly after seeing you had one for that purpose, so I haven't actually tested the redirection!
Interesting point on the holiday seasonality; it'd be worth seeing and I might look into it. My expectation, given the top-level data, is that extraneous community events are what bump engagement more, but I could be wrong.
Thanks for sharing toby, I had just finished listening to the podcast and was about to share it here, but it turns out you beat me to it! I think I'll do a post going into the interview (Zvi-style)[1], bringing up the most interesting points and cruxes, and why the ARC Challenge matters. To quickly give my thoughts on some of the things you bring up:
The ARC Challenge is the best benchmark out there imo, and it's telling that labs don't release their scores on it. Chollet says in the interview that they test on it, but because they score badly they don't release the results.
On timelines, Chollet says that OpenAI's success led the field to 1) stop sharing frontier research and 2) focus on LLMs alone, thereby setting back timelines to AGI. I'd also suggest that the "AGI in 2-3 years" claims don't make much sense to me unless you take an LLMs+scaling maximalist perspective.
And to respond to some other comments here:
To huw, I think the AI Safety field is mixed. The original perspective was that ASI would be like an AIXI model, but the success of transformers has changed that. Existing models and their descendants could be economically damaging, but taking away the existential risk undermines the astronomical value of AI Safety from an EA perspective.
To OCB, I think we just disagree about how far LLMs are from this. I think it's less that ARC is "neat" and more that it shows a critical failure mode in the LLM paradigm. In the interview Chollet argues that the "scaffolding" is actually the hard part of reasoning, and I agree with him.
To Mo, I guess Chollet's perspective would be that you need "open-endedness" to be able to automate much/most work? A big crux here, I think, is whether "PASTA" is possible at all, or at least whether it can be used as a way to bootstrap everything else. I'm more of the perspective that science is probably the last thing that can possibly be automated, but that might depend on your definition of science.
I'm quite sceptical of Davidson's work, and probably Karnofsky's, but I'll need to revisit them in detail to treat them fairly.
The Metaculus AGI markets are, to me, crazy low. In both cases the resolution criteria are somewhat LLM-unfriendly; it seems that people are going more off "vibes" and not reading the fine print. Right now, for instance, any OpenAI model will be easily discovered in a proper imitation game by asking it to do something that violates the terms of service.
I'll go into more depth in my follow-up post, and I'll edit this bit of my comment with a link once I'm done.
In style only, I make no claims as to quality
Precommitting to not posting more in this whole thread, but I thought Habryka's thoughts deserved a response:
> IMO, it seems like a bad pattern that when someone starts thinking that we are causing harm that the first thing we do is to downvote their comment
I think this is a fair cop.[1] I appreciate the added context you've given in your comment and have removed the downvote. Reforming EA is certainly high on my list of things to write about/work on, so I would appreciate your thoughts and takes here even if I suspect I'll end up disagreeing with your diagnosis/solutions.[2]
> My guess is it would be bad for evaporative cooling reasons for people like me to just leave the positions from which they could potentially fix and improve things
I guess that depends on the theory of change for improving things. If it's using your influence and standing to suggest reforms and hold people accountable, sure. If it's asking for the community to "disband and disappear", I don't know; in how many other movements would that be tolerated from someone with significant influence and funding power?[3] If one of the Lightcone Infrastructure team said "I think Lightcone Infrastructure in its entirety should shut down and disband, and return all funds" and then made decisions about funding and work that aligned with that goal and not yours, how long should they expect to remain part of the core team?
Maybe we're disagreeing about what we mean by the "EA community" implicitly here, and I feel that sometimes the "EA Community" is used as a bit of a scapegoat, but when I see takes like this I think "Why should GWWC shut down and disband because of the actions of SBF/OpenAI?" Like, I think GWWC and its members definitely count as part of the EA Community, and your opinion seems to be pretty maximal without much room for exceptions.
(Also, I think it's important to note that your own Forum use seems to have contributed to instances of evaporative cooling, so that felt a little off to me.)
> I am importantly on the Long Term Future Fund, not the EA Infrastructure Fund
This is true, but the LTFF is part of EA Funds, and to me is clearly EA-run/affiliated/associated. It feels odd that you're a grantmaker who decides where money goes to the community, from one of its most well-known and accessible funds, and yet you think that said community should disperse/disband/not grow/is net-negative for the world. That just seems rife for weird incentives/decisions unless, again, you're explicitly red-teaming grant proposals and funding decisions. If you're using the role to "run interference" from the inside, to move funding away from the EA community and its causes, that feels a lot more sketchy to me.
> I wish the EA community would disband and disappear and expect it to cause enormous harm in the future
Feels like you should resign from EA Funds grantmaking then
Going to merge replies into this one comment, rather than sending lots and flooding the forum. If I've @'d you specifically and you don't want to respond in the chain, feel free to DM:
On neglectedness: Yep, fair point that the relevant metric here is neglectedness in the world, not in EA. I think there is a point to make here, but it was probably the wrong phrasing to use; I should have made it more about "AI Safety being too large a part of EA" than "Lack of neglectedness in EA implies lower ITN returns overall".
On selection bias/other takes: These were only ever meant to be my takes and reflections, so I definitely think they're only a very small part of the story. @Stefan_Schubert, I would be interested to hear about your impression of a "lack of leadership" and any potential reasons for it.
On the Bay/Insiders: It does seem like the Bay is convinced AI is the only game in town? (Aschenbrenner's recent blog seems to validate this.) @Phib, I would be interested to hear you say more on your last paragraph; I don't think I entirely grok it, but it sounds very interesting.
On the Object Level: I think this one is for an upcoming sequence. Suffice to say that one can infer from my top-level post that I have very different beliefs on this issue than many "insider EAs", and I do work on AI/ML for my day job![1] But while David sketches out a case for the overall points, I think those points have been highly underargued and underscrutinised given their application in shaping the EA movement and its funding. So look out for a more specific sequence on the object level[2] maybe-soon-depending-on-writing-speed.
Ah sorry, it's a bit of a linguistic shortcut; I'll try my best to explain more clearly:
As David says, it's an idea from Chinese history. Rulers used the concept as a way of legitimising their hold on power, where Tian (Heaven) would bestow a "right to rule" on the virtuous ruler. Conversely, rebellions/usurpations often used the same concept to justify themselves, often by claiming the current rulers had lost heaven's mandate.
Roughly, I'm using this to analogise to the state of EA, where AI Safety and AI x-risk have become an increasingly large/well-funded/high-status[1] part of the movement, especially (at least apparently) amongst EA leadership and the organisations that control most of the funding/community decisions.
My impression is that there was a consensus and ideological movement amongst EA leadership (as opposed to an already-held belief where they pulled a bait-and-switch), but many "rank-and-file" EAs simply deferred to these people rather than considering the arguments deeply.
I think the various scandals/bad outcomes/bad decisions/bad vibes around EA in recent years, and at the moment, can be linked to this turn towards the overwhelming importance of AI Safety. As EffectiveAdvocate says below, I would like that part of EA to reduce its relative influence and power over the rest of it, and for rank-and-file EAs to stop deferring on this issue especially, but also in general.
I don't like this term but again, I think people know what I mean when I say this
Reflections on EA & EAG following EAG London (2024):
I really liked the location this year. The venue itself was easy to get to on public transport, seemed sleek and clean, and having lots of natural light on the various floors made for a nice environment. We even got some decent sun (for London) on Saturday and Sunday. Thanks to all the organisers and volunteers involved; I know it's a lot of work setting up an event like this and making it run smoothly.
It was good to meet people in person whom I had previously only met or recognised from online interaction. I won't single out individual 1-on-1s I had, but it was great to be able to put faces to names, and hearing people's stories and visions in person was hugely inspiring. I talked to people involved in all sorts of cause areas and projects, and that combination of diversity, compassion, and moral seriousness is one of the best things about EA.
Listening to the two speakers from the Hibakusha Project at the closing talk was very moving, and a clear case of how knowing something intellectually is not the same thing as hearing personal (and in-person) testimony. I think it would've been one of my conference highlights in the feedback form if we hadn't already been asked to fill it out a few minutes beforehand!
I was going to make a point about a "lack of EA leadership" turning up, apart from Zach Robinson, but when I double-checked the event attendee list I think I was just wrong on this. Sure, a couple of big names didn't turn up, and it may depend on what list of "EA leaders" you're using as a reference, but I want to admit I was directionally wrong here.
I thought Zach gave a good opening speech, but many people noted the apparent dissonance between CEA saying it wants to take a "principles-first" approach to EA while also expecting AI to be its area of most focus/highest priority, with no expectation of that changing in the near future.
Finally, while I'm sure the set of people I spoke to (and those who wanted to speak to me) is strongly affected by selection effects, and my own opinions on this are fairly strong, it did feel that there was consensus on there being a lack of trust in/deference to/shared beliefs with "Bay-Area EA":[1]
Many people think that working on AI Safety and Governance is important and valuable, but not "overwhelmingly important" or "the most important thing humanity has done/will ever do". This included some fairly well-known names among those who attended, and basically nobody I interacted with (as far as I could tell) held extremely "doomer" beliefs about AI.
There was a lot of uncomfortable feeling about the community-building funding being directed to "longtermism" and AI Safety in particular. This is definitely a topic I want to investigate more post-EAG, as I'm not sure what the truth of the matter is, but I'd certainly find it problematic if some of the anecdotes I heard were a fair representation of reality.
In any case, I think it's clear that AI Safety is no longer "neglected" within EA, and possibly outside of it.[2] (Retracted this as, while it's not true, commenters have pointed out that it's not really the relevant metric to be tracking here)
On a personal level, it felt a bit odd to me that the LessOnline conference was held at exactly the same time as EAG. Feels like it could be a coincidence, but on the other hand this is not a coincidence because nothing is ever a coincidence. It feeds into my impression that the Bay is not very interested in what the rest of EA has to say.
One point I didn't get any clear answers to was "what are the feedback mechanisms in the community to push back on this?", and whether such feedback mechanisms even exist.
In summary: it feels, from my perspective, like the Bay Area/Exclusively Longtermist/AI Safety Maximalist version of EA has "lost the mandate of heaven", but nonetheless currently controls a lot of the community's money and power. This, again, is a theme I want to explicitly explore in future posts.
I am old (over 30yo) and can't party like the young EAs anymore
I think this is a great initiative, and the new website looks great! I do, however, want to raise something (even if I'm afraid to be seen as "that guy" on the Forum):
> We are evolving into Consultants for Impact because we believe this new brand will better enable us to achieve our mission. Our new name gives us greater brand independence and control and provides a more professional presentation. It also enhances our capacity to accurately reflect the diverse philosophical frameworks (including, but not exclusively, Effective Altruism) that can benefit our work. We are excited about this transition and believe it will enable us to better support and inspire consultants dedicated to making a significant social impact.
Maybe this is me over-reacting, but it seems to imply "we used to have EA in our name, but now EA is a toxic brand, so we removed it to avoid the negative association". If instead it's just because the new name is more professional / simply a better name, then disregard my comment, but that's not what you wrote in the post.
There are still EA fingerprints (in terms of people, associated orgs, values, and even language) all over the website, but almost no mention of EA or the phrase "Effective Altruism" at all.[1] I also think Effective Altruism does/can/should accommodate a set of "diverse philosophical frameworks" and can still call itself EA.
My fear is that people who are still reasonably thought of as EA[2] start to disassociate from it, leaving only the most hardcore/weird/core people to hold the brand in an evaporative cooling dynamic (there was a discussion on a now-sadly-deleted post about this, where someone shared their reasons for leaving EA which to me seemed to fit this dynamic; my response is here). This damages the movement, the organisation, and its aims, and is mostly unnecessary if the change is driven by roughly the same set of moral values and empirical beliefs.
I wouldn't necessarily call this misleading, but I think the people CFI is going for would probably be smart enough to figure out the connection with some googling
Very much "EA-in-ideas", not "EA-got-funded-by-OpenPhil" or "EA-went-to-the-right-parties" or "EA-has-lots-of-Forum-karma"
Thanks for responding David, and again I think that the survey work you've done is great :) We have many points of agreement:
Agreed that you basically note my points in the previous works (both in footnotes and in the main text)
Agreed that it's always a hard tradeoff when compressing detailed research findings into digestible summaries; I know from professional experience how hard that is!
Agreed that there is some structure which your previous factor analysis and general community discussions picked up on, which is worth highlighting and examining
I still think that the terminology is somewhat misguided. Perhaps the key part I disagree with is that "Referring to these clusters of causes and ideas in terms of 'longtermism' and 'neartermism' is established terminology": even if it has been established, I want to push back and un-establish it, because I think it's unhelpful, and even harmful, for community discussion and progress. I'm not sure what terms are better, though some alternatives I've seen are:[1]
Richard Chappell's "Pure suffering reduction vs Reliable global capacity growth vs High-impact long-shots"
Laura Duffy's "Empirical EA vs Reason-driven EA"
Ryan Briggs's "Bed-nets vs Light-cone"
I guess, to state my point as clearly as possible: I don't think the current cluster names "carve nature at its joints", and the potential confusion/ambiguity in their use could lead to inaccurate negative perceptions becoming entrenched
Though I don't think any of them are perfect distillations
First off, thank you for this research and for sharing it with the community. My overall feeling on this work is extremely positive, and the below is one (maybe my only?) critical nitpick, but I think it is important to voice.
> Causes classified as longtermist were Biosecurity, Nuclear risk, AI risk, X-risk other and Other longtermist.
> Causes classified as neartermist were Mental health, Global poverty and Neartermist other.
> Causes classified as Other were Animal Welfare, Cause Prioritization, EA movement building and Climate change.
I have to object to this. I don't think longtermism is best understood as a cause, or set of causes, but more as a justification for working on certain causes over others. E.g.:
Working on Nuclear Risk could be seen as neartermist: you can have a person-affecting view of morality and think that, given the track record of nuclear near-miss incidents, it's a high priority for the wellbeing of people alive today
We just lived through a global pandemic, and there is active concern about H5N1 outbreaks right now, so it doesn't seem obvious to me that many people (EA or not) would count biosecurity in the "longtermist" bucket
Similarly, many working on AI risk have short timelines that have only gotten shorter over the past few years.[1]
Climate Change could easily be seen through a "longtermist" lens, and is often framed in the media as being an x-risk or affecting the lives of future generations
Approaching Global Poverty from a "growth > randomista" perspective could easily be justified from a longtermist lens, given the effects of compounding returns to economic growth for future generations
EA movement building has often been criticised for focusing on "longtermist" causes above others, and that does seem to be where the money is focused
Those concerned about Animal Welfare also have concerns about how humanity might treat animals in the future, and whether we might lock in our poor moral treatment of other beings
(I'm sure everyone can think of their own counter-examples)
I know the groupings came out of some previous factor analysis you did, and you mention the cause/justification difference in the footnotes, and I know that there are differences in community cause prioritisation, but I fear that leading with this categorisation helps to reify and entrench those divisions instead of actually reflecting an underlying reality of the EA movement. I think it's important enough not to hide the details in footnotes, because otherwise people will look at the "longtermist" and "neartermist" labels (like here) and make claims/inferences that might not correspond to what the numbers are really saying.
I think part of this is downstream of "longtermism" being poorly defined/understood (as I said, it is a theory about justifications for causes rather than about specific causes themselves), and of the "longtermist turn" having had some negative effects on the community, so it isn't a result of your survey. But yeah, I think we need to be really careful about labelling and reifying concepts beyond the empirical warrant we have, because that will in turn have causal effects on the community.
In fact, I wonder what the other three "longtermist" causes would look like if AI were separated out from them. I think a lot of objections to "longtermism" are actually objections to prioritising "AI x-risk" work.
Hi Remmelt, thanks for your response. I'm currently travelling so have limited bandwidth to go into a full response, and suspect that it'd make more sense for us to pick this up in DMs again (or at EAG London, if you'll be around?)
Some important points I think I should share my perspective on though:
One can think that both Émile and "Fuentes" behaved badly. I'm not trying to defend the latter here, and they clearly aren't impartial. I'm less interested in defending Fuentes than in pointing out that Émile shouldn't be considered a good-faith critic of EA. I think your concerns about Andreas, for example, apply at least tenfold to Émile.
I don't consider myself an "EA insider", and I don't consider myself to have that kind of weight in the community. I haven't worked at an EA org, I haven't received any money from OpenPhil, I've never gone to the Co-ordination Forum, etc. Of A-E, I think the only one I'm claiming support for is D: if Émile is untrustworthy and often flagrantly wrong/biased/inaccurate, then it is a bad sign not to recognise this. The crux, then, is whether Émile is that wrong/biased/inaccurate, which is a matter on which we clearly disagree.[1] One can definitely support other critiques of EA, and it certainly doesn't mean EA is immune to criticism or that it shouldn't be open to hearing them.
I'll leave it at that for now. Perhaps we can pick this up again in DMs or on a Calendly call :) And I just want to clarify that I do admire you and your work even if I don't agree with your conclusions. I think you're a much better EA critic (to the extent you identify as one) than Émile is.
> I really don't want to have to be the person to step up and push against them, but it seems like nobody else is willing to do it
I do not trust your perspective on this saga, Remmelt.
For observers: if you want to go down the Twitter rabbit hole from when this all kicked off, and get the evidence with your own eyes, start here: https://nitter.poast.org/RemmeltE/status/1627153200930508800#m and, if you want, read the various Substack pieces linked in the thread[1]
To me, it's clear that Émile is acting the worst of everyone on that thread. And I think you treat Andreas far too harshly as well. You said of him "I think you are being intentionally deceptional here, and not actively truth-seeking", which, to me, describes Émile's behaviour exactly. The fact that, over a year on, you don't seem to recognise this and (if anything) support Émile more against EA is a bad sign.
We even had a Forum DM discussion about this a while ago, in which I provided even more public cases of bad behaviour by Émile,[2] and you don't seem to have updated much on it.
I applaud your other efforts to seek out alternative viewpoints on issues that EA cares about (e.g. your collaborations with Forrest Landry and talking to Glen Weyl), but you are so far off the mark with Émile. I hope you can change your mind on this.
I recommend not doing it, since you all have much more useful things to do with your lives. I'd note that Émile doesn't really push back on many of the claims in the Fuentes article, and the stuff around Hilary Greaves and "Alex Williams" seems far enough to mark someone as a bad-faith actor.
Clarification: "bad behaviour" as in, Émile should not be regarded as a trusted source on anything EA-related and is acting in bad faith, not that they're doing anything illegal afaik
Going to quickly share that I'm going to take a step back from commenting on the Forum for the foreseeable future. There are a lot of ideas in my head that I want to work into top-level posts to hopefully spur insightful and useful conversation amongst the community. While I'll still be reading and engaging, I have a limited amount of time I want to spend on the Forum, and I think it'd be better for me to move that focus to posts rather than comments for a bit.[1]
If you do want to get in touch about anything, please reach out and I'll try my very best to respond. Also, if you're going to be in London for EA Global, then I'll be around and very happy to catch up :)
Though if it's a highly engaged/important discussion and there's an important viewpoint that I think is missing, I may weigh in
Like others, I just want to say I'm so sorry that you had this experience. It isn't one I recognise from my own journey with EA, but this doesn't invalidate what you went through, and I'm glad you're moving in a direction that works for you as a person and your values. You are valuable, your life and perspective are valuable, and I wish you all the best in your future journey.
Indirectly, I'm going to second @Mjreard below: I think EA should be seen as going beyond a core set of people and institutions. If you are still deeply driven by the ideals EA was inspired by, and are putting that into action outside of "the movement", then to me you are still "EA" rather than "EA Adjacent".[1] EA is a set of ideas, not a set of people or organisations, and I will stand by this point.
Regardless, I wish you all the best, and I hope that if you want to re-engage, you do so on your own terms.
though ofc you can identify however you like
The Metaculus timeline is already highly unreasonable given the resolution criteria,[1] and even these people think Aschenbrenner is unmoored from reality.
Remind me to write this up soon