Pro-pluralist, pro-bednet, anti-Bay EA. 🔸 10% Pledger.
JWS 🔸
I'm not sure to what extent the Situational Awareness Memo or Leopold himself are representatives of "EA".
On the pro side:
Leopold thinks AGI is coming soon, will be a big deal, and that solving the alignment problem is one of the world's most important priorities
He used to work at GPI & FTX, and formerly identified with EA
He almost certainly personally knows lots of EA people in the Bay
On the con side:
EA isn't just AI Safety (yet), so having short timelines/high importance on AI shouldn't be sufficient to make someone an EA?[1]
EA also shouldn't just refer to a specific subset of Bay Area culture (please), or at least we need some more labels to distinguish different parts of it in that case
Many EAs have disagreed with various parts of the memo, e.g. Gideon's well received post here
Since his time at EA institutions, he has moved to OpenAI (mixed)[2] and now runs an AGI investment firm.
As for self-identification, I'm not sure I've seen Leopold identify as an EA at all recently.
This again comes down to the nebulousness of what "being an EA" means.[3] I have no doubt at all that, given what Leopold thinks is the way to have the most impact, he'll be very effective at achieving it.
Further, on your point, I think there's a reason to suspect that something like Situational Awareness went viral in a way that, say, Rethink Priorities' Moral Weights project didn't: the promise many people see in powerful AI is power itself, and that's always going to be interesting for people to follow. So I'm not sure that Situational Awareness becoming influential makes it more likely that other "EA" ideas will.
- ^
Plenty of e/accs have these two beliefs as well; they just expect alignment by default, for instance
- ^
I view OpenAI as tending implicitly/explicitly anti-EA. I don't think there was an explicit "purge"; rather, the culture/vision of the company changed such that card-carrying EAs didn't want to work there any more
- ^
The three big definitions I have (self-identification, beliefs, actions) could all easily point in different directions for Leopold
I sort of bounced off this one, Richard. I'm not a professor of moral philosophy, so some of what I say below may seem obviously wrong/stupid/incorrect, but I think that were I a philosophy professor I would be able to shape it into a stronger objection than it might appear at first glance.
Now, when people complain that EA quantifies things (like cross-species suffering) that allegedly "can't be precisely quantified," what they're effectively doing is refusing to consider that thing at all.
I don't think this would pass an ideological Turing Test. What people who make this claim are often saying is that previous attempts to quantify the good precisely have ended up having morally bad consequences. Given this history, perhaps our takeaway shouldn't be "they weren't precise enough in their quantification" but rather "perhaps precise quantification isn't the right way to go about ethics".
Because the realistic alternative to EA-style quantitative analysis is vibes-based analysis: just blindly going with what's emotionally appealing at a gut level.
Again, I don't think this is true. Would you say that, before the publication of Famine, Affluence, and Morality, all moral philosophy was just "vibes-based analysis"? I think, instead, that all moral reasoning is in some sense "vibes-based", and that EA's quantification is often a way of presenting arguments for the EA position.
To state it more clearly: what we care about is moral decision-making, not the quantification of moral decisions. Most decisions that have ever been made were made without quantification. What matters is the moral decisions we make, and the reasons we have for those decisions/values, not what quantitative value we place on them.
the question that properly guides our philanthropic deliberations is not "How can I be sure to do some good?" but rather, "How can I (permissibly) do the most (expected) good?"
I guess I'm starting to bounce off this because I now view it as a big moral commitment which goes beyond simple beneficentrism. Another view, for example, would be contractualism, where what "doing good" means is substantially different from what you describe here, but perhaps that's a more fundamental metaethical debate.
It's very conventional to think, "Prioritizing global health is epistemically safe; you really have to go out on a limb, and adopt some extreme views, in order to prioritize the other EA stuff." This conventional thought is false. The truth is the opposite. You need to have some really extreme (near-zero) credence levels in order to prevent ultra-high-impact prospects from swamping more ordinary forms of do-gooding.
I think this is confusing two senses of "extreme". In one sense the default "animals have little-to-no moral worth" view is extreme, because it sets the moral value of animals so low as to be near zero (and confidently so at that). But I think the "extreme" in your first sentence refers to "extreme from the point of view of society".
Furthermore, even if we argue that quantifying expected value in quantitative models is the right way to do moral reasoning (as opposed to it sometimes being a useful tool), one still doesn't have to accept "even a 1% chance is enough": I could just decline to find a tool that produces such dogmatism at 1% acceptable. You could counter with "your default/status-quo morality is dogmatic too", which, sure. But it doesn't convince me to accept strong longtermism any more, and I've already read a fair bit about it (though I accept probably not as much as you).
While you're at it, take care to avoid the conventional dogmatism that regards ultra-high-impact as impossible.
One man's "conventional dogmatism" could be reframed as "the accurate observation that people with totalising philosophies promising ultra-high impact have a very bad track record, have often caused harm, and that those with similar philosophies ought to be viewed with suspicion".
Sorry if the above was a bit jumbled. It just seemed that this post was very unlike your recent Good Judgement with Numbers post, which I clicked with a lot more. This one seems to be you, instead of rejecting the "All or Nothing" Assumption, actually going "all in" on quantitative reasoning. Perhaps it was the tone in which it was written, but it really didn't seem to engage with why people have an aversion to the over-quantification of moral reasoning.
Thanks for sharing your thoughts. I'll respond in turn to what I think are the two main parts of it, since as you said this post seems to be a combination of suffering-focused ethics and complex cluelessness.
On Suffering-focused Ethics: To be honest, I've never felt the intuitive pull of suffering-focused theories, especially since my read of your paragraphs here is that they tend towards a lexical view where the amount of suffering is the only thing that matters for moral consideration.[1]
Such a moral view doesn't really make sense to me, to be honest, so I'm not particularly concerned by it, though of course everyone has different moral intuitions so YMMV.[2] Even if you're convinced of SFE, though, the question of how best to reduce suffering runs into the cluelessness considerations you point out.
On complex cluelessness: Here, I think you're right about a lot of things, but that's a good thing, not a bad one!
I think you're right about the "time of perils" assumption, but you really should increase your scepticism of any intervention which claims to have "lasting, positive effects over millennia", since we can't get feedback on the millennia-long impact of our interventions.
You're right that radical uncertainty is humbling, and it can be frustrating, but it is also the state everyone is in, and there's no use beating yourself up over the default condition we all share.
You can only decide how to steer humanity toward a better future with the knowledge and tools that you have now. It could be something very small, and it doesn't have to involve you spending hundreds of hours trying to solve the problems of cluelessness.
I'd argue that reckoning with the radical uncertainty should point towards moral humility and pluralism, but I would say that, since that's the perspective in my wheelhouse! I also hinted at such considerations in my last post about a Gradient-Descent approach to doing good, which might be a more cluelessness-friendly attitude to take.
- ^
You seem to be asking, e.g., "will lowering existential risk increase the expected amount of future suffering?" rather than, for example, "will lowering existential risk increase the total amount of preferences satisfied/not frustrated?"
- ^
To clarify, this sentence specifically referred to lexical suffering views, not to all forms of SFE that are less strong in their formulation
Edit: I'm confused about the downvoting here. Is it a "the Forum doesn't need more of this community drama" feeling? I don't really include much personal opinion to disagree with, and I also encourage people to check out Lincoln's whole response 🤷
For visibility: on the LW version of this post, Lincoln Quirk (member of the EV UK board) made some interesting comments (tagging @lincolnq to avoid sub-posting). I thought it'd be useful to have visibility of them on the Forum. A sentence which jumped out at me was this:
Personally, I'm still struggling with my own relationship to EA. I've been on the EV board for a year+ - an influential role at the most influential meta org - and I don't understand how to use this role to impact EA.
If one of the EV board members is feeling this way and doesn't know what to do, what hope is there for rank-and-file EAs? Is anyone driving the bus? It feels like a negative sign for the broader "EA project"[1] if this feeling goes right to the top of the institutional EA structure.
That sentence comes near the end of a longer, reflective comment, so I recommend reading the full exchange to take in Lincoln's whole perspective. (I'll probably post my thoughts on the actual post sometime next week)
- ^
Which many people reading this might feel positive about
A thought about AI x-risk discourse and the debate over how "Pascal's Mugging"-like AIXR concerns are, and how this causes confusion between the concerned and the sceptical.
I recognise a pattern where a sceptic will say "AI x-risk concerns are like Pascal's wager/are Pascalian and not valid" and then an x-risk advocate will say "But the probabilities aren't Pascalian. They're actually fairly large"[1], which usually devolves into a "These percentages come from nowhere!" / "But Hinton/Bengio/Russell..." / "Just useful idiots for regulatory capture..." discourse doom spiral.
I think a fundamental miscommunication here is that, while the sceptic is using/implying the term "Pascalian", they aren't concerned[2] with the percentage of risk being incredibly small but high-impact; they're instead concerned about trying to take actions in the world - especially ones involving politics and power - on the basis of subjective beliefs alone. In the original wager, we don't need to know anything about the evidence record for a certain God existing or not: if we simply accept Pascal's framing and premises, then we end up with the belief that we ought to believe in God. Similarly, when this term comes up, AIXR sceptics are concerned about changing beliefs/behaviour/enacting laws based on arguments from reason alone that aren't clearly connected to an empirical track record. Focusing on which subjective credences are proportionate to act upon is not likely to be persuasive compared to providing the empirical goods, as it were.
Something which has come up a few times, and recently a lot in the context of Debate Week (and the reaction to Leif's post), is things getting downvoted quickly and being removed from the Front Page, which drastically drops the likelihood of engagement.[1]
So a potential suggestion for the Frontpage might be:
Hide the vote score of all new posts if the absolute score of the post is below some threshold (I'll use 20 as an example)
If a post hits -20, it drops off the front page
After a post hits 20+, its karma score is permanently revealed
The galaxy-brain version is that the Community/Non-Community grouping should only take effect once a post hits these thresholds[2]
This will still probably leave us with too many new posts to fit on the front page, so we'd need some rules to sort which stay and which get knocked off:
Some consideration should probably go to total karma (how much to weight it is debatable)
Some consideration should go to how recent the post is too (e.g. I'd probably rather see a new post that got 20+ karma quickly than one that got 100+ karma over weeks)
Some consideration should also go to engagement: some metric based on either the number of votes or the comment count would probably indicate which posts are generating community engagement, though this could lead to bikeshedding/Matthew effects if not implemented correctly. I still think it's directionally correct though
Of course the user's own personal weighting of topic importance can probably contribute to this as well
There will always be trade-offs when designing a ranking over many posts with limited space. But the idea above is that no post should quickly drop off the front page just because a few people quickly downvote it into negative karma.
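To make the suggestion concrete, here's a minimal sketch of how these rules could fit together. It's Python pseudocode rather than anything resembling the Forum's actual codebase, and the thresholds, field names, and weights are all placeholder assumptions rather than proposals for exact values:

```python
# A minimal sketch of the idea above, in Python pseudocode rather than actual
# ForumMagnum code. All field names, thresholds and weights are placeholders.

REVEAL_THRESHOLD = 20    # karma at which a post's score becomes permanently visible
REMOVAL_THRESHOLD = -20  # karma at which a post drops off the front page


def on_front_page(karma: int) -> bool:
    """A post only leaves the front page once it falls to the removal threshold."""
    return karma > REMOVAL_THRESHOLD


def displayed_score(karma: int, ever_hit_reveal: bool) -> str:
    """Hide the score until the post has, at some point, reached the reveal threshold."""
    return str(karma) if ever_hit_reveal else "hidden"


def ranking_score(karma: int, age_hours: float, votes: int, comments: int) -> float:
    """Combine karma, recency and engagement; the weights are purely illustrative."""
    recency = 1.0 / (1.0 + age_hours / 24.0)   # newer posts rank higher
    engagement = votes + 2 * comments           # crude engagement proxy
    return 0.5 * karma + 30.0 * recency + 0.2 * engagement
```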
Maybe some code like this already exists, but this thought popped into my head and I thought it was worth sharing on this post.
- ^
My poor little piece on gradient descent got wiped out by debate week, rip
- ^
In a couple of places I've seen people complain about the use of the Community tag to "hide" particular discussions/topics. Not saying I fully endorse this view.
I think "meat-eating problem" > "meat-eater problem" came up in my comment and the associated discussion here, but possibly somewhere else.[1]
- ^
(I still stand by the comment, and I don't think it's contradictory with my current vote placement on the debate week question)
On the platonic/philosophical side I'm not sure; I think many EAs weren't really bought into it to begin with, and the shift to longtermism was in various ways the effect of deference and/or cohort effects. In my case I feel that the epistemic/cluelessness challenge to longtermism/far-future effects is pretty dispositive, but I'm just one person.
On the vibes side, I think the evidence is pretty damning:
The launch of WWOTF came at almost the worst time possible, and the idea now seems indelibly linked with SBF's risky/naïve ethics and immoral actions.
Do a Google News or Twitter search for "longtermism" in its EA context and the coverage is ~broadly to universally negative. The Google Trends data also points toward the term fading away.
No big EA org or "EA leader", however defined, is going to bat for longtermism in the public sphere any more. The only people talking about it are the critics. When you get that kind of dynamic, it's difficult to see how an idea can survive.
Even on the Forum, very little discussion seems to be based on "longtermism" these days. People either seem to have left the Forum/EA, or longtermist concerns have been subsumed into AI/bio risk. Longtermism just seems superfluous to these discussions.
That's just my personal read on things, though. But yeah, it seems very much like the SBF/community drama/OpenAI board triple whammy from Nov 2022 to Nov 2023 was the death knell for longtermism, at least as the public-facing justification of EA.
For the avoidance of doubt, not gaining knowledge from the Carl Shulman episodes is at least as much my fault as it is Rob and Carl's![1] I think that, similar to his appearance on the Dwarkesh Podcast, it was interesting and full of information, but I'm not sure my mind has found a good way to integrate it into my existing perspective yet. It feels unresolved to me, and something I personally want to explore more, so a version of the post written later in time might include those episodes high up. But writing this post from where I am now, I at least wanted to own my perspective/bias leaning against the AI episodes rather than leave it implicit in the episode selection. But yeah, it was very much my list, and it therefore inherits all of my assumptions and flaws.
I do think working in AI/ML means that the relative gain of knowledge may still be lower in this case than from learning about the abolition of slavery (Brown #145) or the details of fighting malaria (Tibenderana #129), so I think that's a bit more arguable, but probably an unimportant distinction.
- ^
(I'm pretty sure I didn't listen to part 2, and I can't remember how much of part 1 I listened to versus reading some of the transcript on the 80k website, so these episodes may be a victim of the "not listened to fully yet" criterion)
I just want to publicly state that the whole "meat-eater problem" framing makes me incredibly uncomfortable
First: why not call it the "meat-eating problem" rather than the "meat-eater problem"? Human beliefs and behaviours are changeable and malleable. Future moral attitudes are not set in stone - human history itself should be proof enough of that. Seeing other human beings as "problems to be solved" is inherently dehumanising.
Second: the call on whether net human wellbeing is negated by net animal wellbeing is highly dependent on both moral weights and one's overall moral view. It isn't a "solved" problem in moral philosophy. There's also a lot of empirical uncertainty, which people below have pointed out, e.g. saving a life != increasing the population, counterfactual wild animal welfare without humans might be even more negative, etc.
Third - and most importantly - this pattern-matches onto very, very dangerous beliefs:
Rich people in the Western World saying that poor people in Developing countries do not deserve to live/exist? bad bad bad bad bad
The belief that humanity, or a significant part of it, ought not to exist (or that the world would be better off were they to stop existing)? danger danger
Like, already in the thread we've got examples of people considering whether murdering someone who eats meat isn't immoral, whether they ought to Thanos-snap all humans out of existence, and analogising the average unborn child in the developing world to baby Hitler. My alarm bells are ringing.
The dangers of the above grow exponentially if proponents are extremely morally certain about their beliefs and unlikely to change them regardless of the evidence shown, believe that they may only have one chance to change things, or believe that otherwise unjustifiable actions are justified in their case due to moral urgency.
For clarification: I think factory farming is a moral catastrophe and that ending it should be a leading EA cause. I just think that the latent misanthropy in the "meat-eater problem" framing/worldview is also morally catastrophic.
In general, reflecting on this framing makes it ever more clear to me that I'm just not a utilitarian or a totalist.
Hey Ben, I'll remove the tweet images since you've deleted them. I'll probably rework the body of the post to reflect that, and I'm happy to make any edits/retractions that you think aren't fair.
I apologise if you got unfair pushback as a result of my post, and regardless of your present/future affiliation with EA, I hope you're doing well.
I appreciate the pushback, anormative, but I kinda stand by what I said and don't think your criticisms land for me. I fundamentally reject your assessment of what I wrote/believe as "targeting those who wish to leave", or as saying people "aren't allowed to criticise us" in any way.
Maybe your perception of an "accusation of betrayal" came from my use of "defect", which was perhaps unfortunate on my part. I was trying to use it in a game-theory "co-operate/defect" framing. See Matthew Reardon from 80k here.[1]
I'm not against Ben leaving/disassociating (he can do whatever he wants), but I am upset/concerned that formerly influential people disassociating from EA leaves the rest of the EA community, who are by and large individuals with a lot less power and influence, to become bycatch.[2]
I think a load-bearing point for me is Ben's position and history in the EA Community.
If an "ordinary EA" were to post something similar, I'd feel sad but feel no need to criticise them individually (I might gather arguments that present a broader trend and respond to them, as you suggest).
There is a common-sense/virtue-ethics intuition I feel fairly strongly: being a good leader means being a leader when things are tough, not just when times are good.
I think it is fair to characterise Ben as an EA Leader: Ben was a founder of 80,000 Hours, one of the leading sources of Community growth and recruitment. He was likely a part of the shift from the GH&D/E2G version of 80k to the longtermist/x-risk focused version, a move that was followed by the rest of EA. He was probably invited to attend (though I can't confirm if he did or not) the EA Leadership/Meta Co-ordination Forum for multiple years.
If the above is true, then Ben had a much more significant role in shaping the EA Community than almost all of its other members.
To the extent Ben thinks that the Community is bad/harmful/dangerous, the fact that he contributed to it implies some moral responsibility for correcting it. This is what I was trying to get at with the "Omelas" reference in my original quick take.
As for rebuttals: Ben mentions that he has criticisms of the community, but he hasn't shared them to an extent that they can be rebutted. When he does, I look forward to reading and analysing them.[3] Even in the original tweets Ben himself mentions that this "looks a lot like following vibes", and he's right, it does.
- ^
and here - which is how I found out about the original tweets in the first place
- ^
Like Helen Toner might have disassociated/distanced herself from the EA Community or EA publicly, but her actions around the OpenAI board standoff have had massively negative consequences for EA imo
- ^
I expect I'll probably agree with a lot of his criticisms, but disagree that they apply to "the EA Community" as a whole as opposed to specific individuals/worldviews who identify with EA
<edit: Ben deleted the tweets, so it doesn't feel right to keep them up after that. The rest of the text is unchanged for now, but I might edit this later. If you want to read a longer, thoughtful take from Ben about EA post-FTX, then you can find one here>
This makes me feel bad, and I'm going to try and articulate why. (This is mainly about my gut reaction to seeing/reading these tweets, but I'll ping @Benjamin_Todd because I think subtweeting/vagueposting is bad practice and I don't want to be hypocritical.) I look forward to Ben elucidating his thoughts if he does so, and will reflect and respond in greater detail then.
At a gut level, this feels like an influential member of the EA community deciding to "defect" and leave when the going gets tough. It's like deciding to "walk away from Omelas" when you had a role in the leadership of the city and benefitted from that position. In contrast, I think the right call is to stay and fight for EA ideas in the "Third Wave" of EA.
Furthermore, if you do think that EA is about ideas, then I don't think disassociating from the name of EA without changing your other actions is going to convince anyone that you're really "getting distance" from EA. Ben is a GWWC pledger, an 80k founder, and is focusing his career on (existential?) threats from advanced AI. To do all this and then deny being an EA feels disingenuous to me for ~most plausible definitions of EA.
Similar considerations make me very pessimistic that the "just take the good parts and people from EA, rebrand, disavow the old name, and continue operating as usual" strategy will work at all.
I also think that actions/statements like this make it more likely for the whole package of the EA ideas/community/brand/movement to slip into a negative spiral which ends up wasting its potential, and given my points above such a collapse would also seriously harm any attempt to get a "totally not EA yeah we're definitely not those guys" movement off the ground.
In general, it's an easy pattern for EA criticism to be something like "EA ideas good, EA community bad", but that just feels like a deepity to me. A better criticism would be explicit about focusing on funding patterns, or on epistemic responses to criticism, because attacking the EA community at large, to me, means ~attacking every EA thing in total.
If you think that all of EA is bad because certain actors have had an overwhelmingly negative impact, you could just name and shame those actors and not implicitly attack GWWC meetups and the like.
In the case of these tweets, I think a future post from Ben would ideally benefit from being clear about what "the EA Community" actually means, and who it covers.
Thanks Aaron, I think your responses to me and Jason do clear things up. I still think the framing is a bit off though:
I accept that you didn't intend your framing to be insulting to others, but using "updating down" about the "genuine interest" of others came across as hurtful on my first read. As a (relative to EA) high contextualiser, it's the thing that stood out for me, so I'm glad you endorse that the "genuine interest" part isn't what you're focusing on, and you could probably reframe your critique without it.
My current understanding of your position is that it is actually: "I've come to realise over the last year that many people in EA aren't directing their marginal dollars/resources to the efforts that I see as most cost-effective, since I also think those are the efforts that EA principles imply are the most effective."[1] To me, this claim is about the object-level disagreement on what EA principles imply.
However, in your response to Jason you say "it's possible I'm mistaken over the degree to which 'direct resources to the place you think needs them most' is a consensus-EA principle", which switches back to people not being EA? Or not endorsing this view? But you've yet to provide any evidence that people aren't doing this, as opposed to just disagreeing about what those places are.[2]
- ^
A secondary interpretation is: "EA principles imply one should make a quantitative point estimate of the good of all one's relevant moral actions, and then act on the leading option in a 'shut-up-and-calculate' way. I now believe many fewer actors in the EA space actually do this than I did last year."
- ^
For example, in Ariel's piece, Emily from OpenPhil implies that they have much lower moral weights on animal life than Rethink does, not that they don't endorse doing "the most good" (I think this is separable from OP's commitment to worldview diversification).
In your original post you talk about explicit reasoning; in your later edit, you switch to implicit reasoning. It feels like this criticism can't be both. I also think the implicit-reasoning critique just collapses into object-level disagreements, and the explicit critique just doesn't have much evidence.
The phenomenon you're looking at, for instance, is:
"I am trying to get at the phenomenon where people implicitly say/reason 'yes, EA principles imply that the best thing to do would be to donate to X, but I am going to donate to Y instead.'"
And I think this might just be a ~empty set, as opposed to people having different object-level beliefs about what EA principles are or what they imply they should do, and disagreeing with you about what the best thing to do would be.[1] I really don't think there are many people saying "the best thing to do is donate to X, but I will donate to Y". (References please if so - clarification in the footnotes.[2]) Even on OpenPhil, I think Dustin just genuinely believes worldview diversification is the best thing, so there's no contradiction where he implies the best thing would be to do X but in practice does Y.
I think letting this cause you to "update downwards" on your view of the genuine interest of others in the movement - as opposed to, say, concluding that they're human and fallible despite trying to do the best they can - feels... well, Jason used "harsh"; I might use a harsher word to describe this behaviour.
- ^
For context, I think Aaron thinks that GiveWell deserves ~0 EA funding afaict
- ^
I think there might be a difference between the best thing (or the best thing according to simple calculations) and the right thing. I think people think in terms of the latter, not the former, and unless you buy into strong or even naïve consequentialism we shouldn't always expect the two to go together
Random Tweet from today: https://x.com/garrytan/status/1820997176136495167
Want to say that I called this ~9 months ago.[1]
I will reiterate that clashes of ideas/worldviews[2] are not settled by sitting them out and doing nothing, since they can be waged unilaterally.
Very sorry to hear these reports, and was nodding along as I read the post.
If I can ask, how do they know EA affiliation was the deciding factor? Is this an informal "everyone knows" thing through policy networks in the US? Or direct feedback from the prospective employer that EA is a PR risk?
Of course, please don't share any personal information, but I think it's important for those in the community to be as aware as possible of where and why this happens, if it is happening because of people's EA affiliation/history.
(Feel free to DM me Ozzie if that's easier)
Many people across EA strongly agree with you about the flaws of the Bay Area AI-risk EA position/orthodoxy,[1] across many of these dimensions, and I strongly disagree with the implication that you have to be a strong axiological longtermist, believe that you have no special moral obligations to others, and live in the Bay while working on AI risk in order to count as an EA.
To the extent that this was the impression they gave you of all that EA is or was, I'm sorry. Similarly if this led to bad effects, explicit or implicit, on the direction of, or implications for, your work, as well as on the future of AI Safety as a cause. And even if I viewed AI Safety as a more important cause than I currently do, I'd still want EA to share the task of shaping a beneficial future of AI with the rest of the world, and to pursue more co-operative strategies, rather than assuming it's the only movement that can or should be a part of it.
tl;dr - To me, you seem to be over-indexing on a geographically concentrated, ideologically homogeneous group of people/institutions/ideas as "EA", when there's a lot more to EA than that.
Going to take a stab at this (from my own biased perspective). I think Peter did a very good job, but Sarah was right that it didn't quite answer your question. It's difficult to pin down what counts as "generating ideas" versus rediscovering old ones; many new philosophies/movements can generate ideas, but those ideas can often be bad ones. And again, EA is a decentral-ish movement, so it's hard to get centralised/consensus statements on it.
With those caveats out of the way, and very much from my biased PoV:
"Longtermism" is dead - I'm not sure if anyone has gone "on record" about this, but I think longtermism, especially strong longtermism, is dead as a driving idea for effective altruism. Indeed, the extent to which AI x-risk and longtermism went hand-in-hand is gone, because AI x-risk proponents increasingly view it as a risk that will play out over years and decades, not centuries and millennia. I don't expect future EA work to be justified under a longtermist framing, and I think this reasonably counts as the movement "acknowledging it was wrong" in some collective-intelligence sort of way.
The case for Animal Welfare is growing - In the last two years, I think the intellectual case for Animal Welfare as a leading, and perhaps the leading, EA cause has strengthened quite a bit. Rethink published their Moral Weight sequence, which has influenced much subsequent work; see Ariel's excellent pitch for Animal Welfare to dominate neartermist spending.[1] On radical new ideas to implement, Matthias' pitch for screwworm eradication sounded great to me - let's get it happening! Overall, Animal Welfare is good, and EA continues to be directionally ahead on it and a source of both interesting ideas and funding in this space, in my non-expert opinion.
Thorstad's Criticism of Astronomical Value - I'm specifically referring to David's "Existential Risk Pessimism" sequence, which I think is broadly part of the EA idea ecosystem, even if written from a critical perspective. The first few pieces, which argue that longtermists should actually have low x-risk probabilities, and vice versa, were really novel and interesting to me (and I wish more people had responded to them). I hope that openly criticising x-risk arguments and deferring less is becoming more accepted, though it may still be a minority view amongst leadership.
Effective Giving is Back - My sense is that, over the last few years, probably spurred by the FTX collapse and fallout, effective giving is back on the menu. I'm not particularly sure why it left, or to what extent it did,[2] but there are a number of posts (e.g. see here, here, and here) that indicate it's becoming a lot more of a thing. This is sort of a corollary of "longtermism is dead": people have realised that earning-to-give, or even just giving, is still valuable and can be a unifying thing in the EA movement.
There are other things I could mention, but I ran out of time to do so fully. I think there is a sense that there are not as many new, radical ideas as there were in the opening days of EA, but in some sense that's an inevitable part of how social movements and ideas grow and change.
Yeah again I just think this depends on one's definition of EA, which is the point I was trying to make above.
Many people have turned away from EA - its beliefs, institutions, and community - in the aftermath of the FTX collapse. Even Ben Todd seems not to be an EA by some definitions any more, be that via association or identification. Who is to say Leopold is any different, or has not gone further? What then is the use of calling him an EA, or of using his views to represent the "Third Wave" of EA?
I guess from my PoV what I'm saying is that I'm not sure there's much "connective tissue" between Leopold and myself, so when people use phrases like "listen to us" or "how could we have done...", I end up thinking "who the heck is we/us?"