'EA-Adjacent' now I guess.
🔸 10% Pledger.
Likes pluralist conceptions of the good.
Dislikes Bay Culture being in control of the future.
Hey Ben, I'll remove the tweet images since you've deleted them. I'll probably rework the body of the post to reflect that, and I'm happy to make any edits/retractions that you think aren't fair.
I apologise if you got unfair pushback as a result of my post, and regardless of your present/future affiliation with EA, I hope you're doing well.
I appreciate the pushback anormative, but I kinda stand by what I said and don't think your criticisms land for me. I fundamentally reject your assessment of what I wrote/believe as 'targeting those who wish to leave', or as saying people 'aren't allowed to criticise us' in any way.
Maybe your perception of an 'accusation of betrayal' came from the use of 'defect', which was perhaps unfortunate on my part. I'm trying to use it in a game-theory 'co-operate/defect' framing. See Matthew Reardon from 80k here.[1]
I'm not against Ben leaving/disassociating (he can do whatever he wants), but I am upset/concerned that formerly influential people disassociating from EA leaves the rest of the EA community, who are by and large individuals with a lot less power and influence, to become bycatch.[2]
I think a load-bearing point for me is Ben's position and history in the EA Community.
If an 'ordinary EA' were to post something similar, I'd feel sad but feel no need to criticise them individually (I might gather arguments that present a broader trend and respond to them, as you suggest).
I feel fairly strongly a common-sense/value-ethics intuition that being a good leader means being a leader when things are tough and not just when times are good.
I think it is fair to characterise Ben as an EA Leader: Ben was a founder of 80,000 Hours, one of the leading sources of Community growth and recruitment. He was likely a part of the shift from the GH&D/E2G version of 80k to the longtermist/x-risk focused version, a move that was followed by the rest of EA. He was probably invited to attend (though I can't confirm if he did or not) the EA Leadership/Meta Co-ordination Forum for multiple years.
If the above is true, then Ben had a much more significant role in shaping the EA Community than almost all other members of it.
To the extent Ben thinks that Community is bad/harmful/dangerous, the fact that he contributed to it implies some moral responsibility for correcting it. This is what I was trying to get at with the 'Omelas' reference in my original quick take.
As for rebuttals, Ben mentions that he has criticisms of the community but hasn't shared them to an extent that they can be rebutted. When he does, I look forward to reading and analysing them.[3] Even in the original tweets Ben himself mentions that this 'looks a lot like following vibes', and he's right, it does.
and here, which is how I found out about the original tweets in the first place
Like, Helen Toner might have disassociated/distanced herself from the EA Community or EA publicly, but her actions around the OpenAI board standoff have had massively negative consequences for EA imo
I expect I'll probably agree with a lot of his criticisms, but disagree that they apply to 'the EA Community' as a whole as opposed to specific individuals/worldviews who identify with EA
<edit: Ben deleted the tweets, so it doesn't feel right to keep them up after that. The rest of the text is unchanged for now, but I might edit this later. If you want to read a longer, thoughtful take from Ben about EA post-FTX, then you can find one here>
This makes me feel bad, and I'm going to try and articulate why. (This is mainly about my gut reaction to seeing/reading these tweets, but I'll ping @Benjamin_Todd because I think subtweeting/vagueposting is bad practice and I don't want to be hypocritical.) I look forward to Ben elucidating his thoughts if he does so, and will reflect and respond in greater detail then.
At a gut-level, this feels like an influential member of the EA community deciding to 'defect' and leave when the going gets tough. It's like deciding to 'walk away from Omelas' when you had a role in the leadership of the city and benefitted from that position. In contrast, I think the right call is to stay and fight for EA ideas in the 'Third Wave' of EA.
Furthermore, if you do think that EA is about ideas, then I don't think disassociating from the name of EA without changing your other actions is going to convince anyone about what you're doing by 'getting distance' from EA. Ben is a GWWC pledger, 80k founder, and is focusing his career on (existential?) threats from advanced AI. To do this and then deny being an EA feels disingenuous for ~most plausible definitions of EA to me.
Similar considerations to the above make me very pessimistic that the 'just take the good parts and people from EA, rebrand the name, disavow the old name, continue operating as per usual' strategy will work at all.
I also think that actions/statements like this make it more likely for the whole package of the EA ideas/community/brand/movement to slip into a negative spiral which ends up wasting its potential, and given my points above such a collapse would also seriously harm any attempt to get a 'totally not EA, yeah we're definitely not those guys' movement off the ground.
In general it's an easy pattern for EA criticism to be something like 'EA ideas good, EA community bad', but really that just feels like a deepity. For me a better criticism would be explicit about focusing on the funding patterns, or focusing on epistemic responses to criticism, because attacking the EA community at large to me means ~attacking every EA thing in total.
If you think that all of EA is bad because certain actors have had overwhelmingly negative impact, you could just name and shame those actors and not implicitly attack GWWC meetups and the like.
In the case of these tweets, I think a future post from Ben would ideally benefit from being clear about what 'the EA Community' actually means, and who it covers.
Thanks Aaron, I think your responses to me and Jason do clear things up. I still think the framing of it is a bit off though:
I accept that you didn't intend your framing to be insulting to others, but using 'updating down' about the 'genuine interest' of others came across as hurtful on my first read. As a (relative to EA) high contextualiser it's the thing that stood out for me, so I'm glad you endorse that the 'genuine interest' part isn't what you're focusing on, and you could probably reframe your critique without it.
My current understanding of your position is that it is actually: 'I've come to realise over the last year that many people in EA aren't directing their marginal dollars/resources to the efforts that I see as most cost-effective, since I also think those are the efforts that EA principles imply are the most effective.'[1] To me, this claim is about the object-level disagreement on what EA principles imply.
However, in your response to Jason you say it's possible you're mistaken over the degree to which 'direct resources to the place you think needs them most' is a consensus-EA principle, which switches back to people not being EA? Or not endorsing this view? But you've yet to provide any evidence that people aren't doing this, as opposed to just disagreeing about what those places are.[2]
A secondary interpretation is: 'EA principles imply one should make a quantitative point estimate of the good of all your relevant moral actions, and then act on the leading option in a "shut-up-and-calculate" way. I now believe many fewer actors in the EA space actually do this than I did last year.'
For example, in Ariel's piece, Emily from OpenPhil implies that they have much lower moral weights on animal life than Rethink does, not that they don't endorse doing 'the most good' (I think this is separable from OP's commitment to worldview diversification).
In your original post you talk about explicit reasoning; in your later edit, you switch to implicit reasoning. It feels like this criticism can't be both. I also think the implicit-reasoning critique just collapses into object-level disagreements, and the explicit critique just doesn't have much evidence.
The phenomenon you're looking at, for instance, is:
"I am trying to get at the phenomenon where people implicitly say/reason 'yes, EA principles imply that the best thing to do would be to donate to X, but I am going to donate to Y instead.'"
And I think this might just be an ~empty set, compared to people having different object-level beliefs about what EA principles are or imply they should do, and also disagreeing with you on what the best thing to do would be.[1] I really don't think there are many people saying 'the best thing to do is donate to X, but I will donate to Y'. (References please if so; clarification in footnotes[2]) Even on OpenPhil, I think Dustin just genuinely believes worldview diversification is the best thing, so there's no contradiction where he implies the best thing would be to do X but in practice does Y.
I think causing this to 'update downwards' on your views of the genuine interest of others in the movement (as opposed to, say, them being human and fallible despite trying to do the best they can) feels... well, Jason used 'harsh'; I might use a harsher word to describe this behavior.
For context, I think Aaron thinks that GiveWell deserves ~0 EA funding afaict
I think there might be a difference between the best thing (or the best thing using simple calculations) and the right thing. I think people think in terms of the latter and not the former, and unless you buy into strong or even naïve consequentialism we shouldn't always expect the two to go together.
Random Tweet from today: https://x.com/garrytan/status/1820997176136495167
Want to say that I called this ~9 months ago.[1]
I will reiterate that clashes of ideas/worldviews[2] are not settled by sitting them out and doing nothing, since they can be waged unilaterally.
Very sorry to hear these reports, and was nodding along as I read the post.
If I can ask, how do they know EA affiliation was behind the decision? Is this an informal 'everyone knows' thing through policy networks in the US? Or direct feedback from the prospective employer that EA is a PR risk?
Of course, please don't share any personal information, but I think it's important for those in the community to be as aware as possible of where and why this happens, if it is happening because of the EA affiliation/history of people here.
(Feel free to DM me, Ozzie, if that's easier)
Many people across EA strongly agree with you about the flaws of the Bay Area AI risk EA position/orthodoxy,[1] across many of these dimensions, and I strongly disagree with the implication that you have to be a strong axiological longtermist, believe that you have no special moral obligations to others, and be living in the Bay while working on AI risk to count as an EA.
To the extent that the impression they gave you was that this is all EA is or was, I'm sorry. Similarly if this led to bad effects, either explicitly or implicitly, on the directions of, or implications for, your work as well as the future of AI Safety as a cause. And even if I viewed AI Safety as a more important cause than I currently do, I still feel like I'd want EA to share the task of shaping a beneficial future of AI with the rest of the world, and to pursue more co-operative strategies rather than assuming it's the only movement that can or should be a part of it.
tl;dr: To me, you seem to be overindexing on a geographically concentrated, ideologically undiverse group of people/institutions/ideas as 'EA', when there's a lot more to EA than that.
Going to take a stab at this (from my own biased perspective). I think Peter did a very good job, but Sarah was right that it didn't quite answer your question. I think it's difficult to pin down what counts as 'generating ideas' vs rediscovering existing ones; many new philosophies/movements can generate ideas, but they can often be bad ones. And again, EA is a decentral-ish movement and it's hard to get centralised/consensus statements on it.
With enough caveats out of the way, and very much from my biased PoV:
'Longtermism' is dead: I'm not sure if anyone has gone 'on record' for this, but I think longtermism, especially strong longtermism, as a driving idea for effective altruism is dead. Indeed, the extent to which AI x-risk and longtermism went hand-in-hand is gone, because AI x-risk proponents increasingly view it as a risk that will play out in years and decades, not centuries and millennia. I don't expect future EA work to be justified under a longtermist framing, and I think this reasonably counts as the movement 'acknowledging it was wrong' in some collective-intelligence sort of way.
The case for Animal Welfare is growing: In the last two years, I think the intellectual case for Animal Welfare as a leading, and perhaps the leading, EA cause has actually strengthened quite a bit. Rethink published their Moral Weight Sequence, which has influenced much subsequent work; see Ariel's excellent pitch for Animal Welfare to dominate neartermist spending.[1] On radical new ideas to implement, Matthias' pitch for screwworm eradication sounded great to me, let's get it happening! Overall, Animal Welfare is good and EA continues to be directionally ahead on it, and the source of both interesting ideas and funding in this space, in my non-expert opinion.
Thorstad's Criticism of Astronomical Value: I'm specifically referring to David's 'Existential Risk Pessimism' sequence, which I think is broadly part of the EA-idea ecosystem, even if from a critical perspective. The first few pieces, which argue that longtermists should actually have low x-risk probabilities, and vice versa, were really novel and interesting to me (and I wish more people had responded to them). I think that openly criticising x-risk arguments and deferring less is hopefully becoming more accepted, though it may still be a minority view amongst leadership.
Effective Giving is Back: My sense is that, over the last few years, and probably spurred by the FTX collapse and fallout, Effective Giving is back on the menu. I'm not particularly sure why it left, or to what extent it did,[2] but there are a number of posts (e.g. see here, here, and here) that indicate it's becoming a lot more of a thing. This is sort of a corollary of 'longtermism is dead': people realised that perhaps earning-to-give, or even just giving, is still valuable and can be a unifying thing in the EA movement.
There are other things that I could mention but I ran out of time to do so fully. I think there is a sense that there are not as many new, radical ideas as there were in the opening days of EA, but in some sense that's an inevitable part of how social movements and ideas grow and change.
Love this question, and think it's important for us all to consider.
Some considerations for clarification:
Why say 'considered low status' instead of 'considered wrong' or 'considered wrong by EA leadership' or whatever?
I guess, given EA is somewhat decentralised in terms of claimed ownership, it's hard to say what 'the movement' has acknowledged, but maybe substantial or significant minorities of the movement beginning to champion a new cause/idea would meet the criteria?
The risk, I think, is that this becomes a self-fulfilling prophecy where:
Prominent EA institutions get funded mostly from OP-GCRCB money
Those institutions then prioritise GCRs[1] more
The EA community gets more focused on GCRs, either by deferring to these institutions or through evaporative cooling of less GCR/longtermist EAs
Due to the increased GCR focus of EA, GHW/AW funders think that funding prominent EA institutions is not cost-effective for their goals
Go to step 1
Using this as a general term for AI x-risk, longtermism, etc.
I think in this case it's ok (but happy to change my mind): afaict he owns the connection now and the two names are a bit like separate personas. He's gone on podcasts under his true name, for instance.
a) r.e. Twitter, almost tautologically true I'm sure. I think it is a bit of signal though, just very noisy. And one of the few ways for non-Bay people such as myself to try to get a sense of the pulse of the Bay, though obviously very prone to error, and perhaps not worth doing at all.
b) I haven't seen those comments;[1] could you point me to them or to where they happened? I know there was a bunch of discussion around their concerns about the Biorisk paper, but I'm particularly concerned with the 'Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI' article, which I haven't seen good pushback to. Again, I'm open to being wrong on this.
Ok, I've seen Ladish and Kokotajlo offer to talk, which is good; I would have liked 1a3orn to take them up on that offer for sure.
It's an unfortunate naming clash; there are different ARC Challenges:
ARC-AGI (Chollet et al) - https://github.com/fchollet/ARC-AGI
ARC (AI2 Reasoning Challenge) - https://allenai.org/data/arc
These benchmarks are reporting the second of the two.
LLMs (at least without scaffolding) still do badly on ARC, and I'd wager Llama 405B still doesn't do well on the ARC-AGI challenge, and it's telling that all the big labs release the 95%+ number they get on AI2-ARC, and not whatever default result they get with ARC-AGI...
(Or in general, reporting benchmarks where they can go OMG SOTA!!!! and not helpfully advance the general understanding of what models can do and how far they generalise. Basically, traditional benchmark cards should be seen as the AI equivalent of 'IN MICE')
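For anyone who wants to see how different the two benchmarks actually are, here's a minimal sketch. It assumes the Hugging Face `datasets` library for AI2-ARC and a local clone of the fchollet/ARC-AGI repo; the specific task file name is just an illustrative example, not something from the comment above.

```python
# Minimal sketch contrasting the two "ARC"s.
# Assumes `pip install datasets` and a local clone of
# https://github.com/fchollet/ARC-AGI (task file below is just an example).
import json
from datasets import load_dataset

# AI2 Reasoning Challenge (allenai.org/data/arc): multiple-choice
# grade-school science questions.
ai2_arc = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="test")
example = ai2_arc[0]
print(example["question"])
print(example["choices"]["text"], "->", example["answerKey"])

# ARC-AGI (Chollet): abstract grid-transformation puzzles stored as JSON,
# each with "train" demonstration pairs and "test" inputs whose output
# grids must be predicted.
with open("ARC-AGI/data/training/0520fde7.json") as f:
    task = json.load(f)
print(len(task["train"]), "demo pairs;", len(task["test"]), "test input(s)")
```

One is a text QA benchmark that current LLMs ace; the other is a few-shot abstraction task they still struggle with, which is why conflating the two in a model card is misleading.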
Folding in Responses here
@thoth hermes (or https://x.com/thoth_iv; if someone can get this to them, e.g. if you're Twitter friends, then pls go ahead).[1] I'm responding to this thread here. I am not saying 'that EA is losing the memetic war because of its high epistemic standards'; in fact quite the opposite r.e. AI Safety, and maybe because of a misunderstanding of how politics works/not caring about the social perception of the movement. My reply to Iyngkarran below fleshes it out a bit more, but if there's a way for you to get in touch directly, I'd love to clarify what I think, and also hear your thoughts more. But I think I was trying to come from a similar place to Richard Ngo, and many of his comments on the LessWrong thread here very much chime with my own point of view. What I am trying to push for is the AI Safety movement reflecting on losing ground memetically and then asking 'why is that? What are we getting wrong?' rather than doubling down into lowest-common-denominator communication. I think we actually agree here? Maybe I didn't make that clear enough in my OP though.
@Iyngkarran Kumar: Thanks for sharing your thoughts, but I must say that I disagree. I don't think that the epistemic standards are working against us by being too polite; quite the opposite. I think the epistemic standards in AI Safety have been too low relative to the attempts to wield power. If you are potentially going to criminalise existing Open-Source models,[2] you'd better bring the epistemic goods. And for many people in the AI Safety field, the goods have not been brought (which is why I see people like Jeremy Howard, Sara Hooker, Rohit Krishnan etc. getting increasingly frustrated by the AI Safety field). This is on the field of AI Safety imo for not being more persuasive. If the AI Safety field was right, the arguments would have been more convincing. I think, while it's good for Eliezer to say what he thinks accurately, the 'bomb the datacenters'[3] piece has probably been harmful for AI Safety's cause, and things like it are very liable to turn people away from supporting AI Safety. I also don't think it's good to say that it's a claim of 'what we believe', as I don't really agree with Eliezer on much.
(r.e. inside vs outside game, see this post from Holly Elmore)
@anormative / @David Mathers: Yeah, it's difficult to manage the exact hypothesis here, especially with falsified preferences. I'm pretty sure SV is 'liberal' overall, but I wouldn't be surprised if the Trump % is greater than in '16 and '20, and it definitely seems to be a lot more open this time, e.g. a16z and Musk openly endorsing Trump, Sequoia Capital partners claiming that Biden dropping out was worse than the Jan 6th riot. Things seem very different this time around, different enough to be worth paying attention to.
- - - - - - - - - - - -
Once again, if you disagree, I'd love to actually hear why. Up/down voting is a crude feedback tool, and discussion of ideas leads to much quicker sharing of knowledge. If you want to respond but don't want to do so publicly, then by all means please send a DM :)
I don't have Twitter and think it'd be harmful for my epistemic & mental health if I did get an account and become immersed in 'The Discourse'
This piece from @1a3orn is excellent, and the absence of evidence of good arguments against it is evidence of the absence of said arguments. (tl;dr: AI Safety people, engage with 1a3orn more!)
I know that's not what it literally says but it's what people know it as
Quick[1] thoughts on the Silicon Valley 'Vibe-Shift'
I wanted to get this idea out of my head and into a quick-take. I think there's something here, but a lot more to say, and I really haven't done the in-depth research for it. There was a longer post idea I had for this, but honestly diving into it more than I have here is not a good use of my life, I think.
The political outlook in Silicon Valley has changed.
Since the attempted assassination of President Trump, the mood in Silicon Valley has changed. There have been open endorsements, e/acc has claimed political victory, and lots of people have noticed the 'vibe shift'.[2] I think that, rather than this being a change in opinions, it's more an event allowing for the beginning of a preference cascade, but at least in Silicon Valley (if not yet reflected in national polling) it has happened.
So it seems that a large section of Silicon Valley is now openly and confidently supporting Trump, and to a greater or lesser extent aligned with the a16z/e-acc worldview,[3] and we know it's already reached the ears of VP candidate JD Vance.
How did we get here?
You could probably write a book on this, so this is a highly opinionated take. But I think this is somewhat, though not exclusively, an own goal of the AI Safety movement.
As ChatGPT starts to bring AI, and AI Safety, into the mainstream discourse, the e/acc countermovement begins. It positions itself in opposition to effective altruism, especially in the wake of SBF.
Guillaume Verdon, under the alias 'Beff Jezos', realises the memetic weakness of the AI Safety movement and launches a full memetic war against it. Regardless of his rightness or wrongness, you do, to some extent, have to hand it to him. He's like a right-wing Émile Torres, ambitious and relentless and driven by ideological zeal against a hated foe.
Memetic war is total war. This means nuance dies so that the message spreads. I don't know if, for example, Marc Andreessen actually thinks antimalarial bednets are a 'triple threat' of badness, but it's a war and you don't take prisoners. Does Beff think that people running a uni-group session on Animal Welfare are 'basically terrorists'? I don't know. But EA is the enemy, and the enemy must be defeated, and the war is total.
The OpenAI board fiasco is, I think, a critical moment here. It doesn't matter what reasoning we've come out with at the end of the day; I think it was perceived as 'a doomer coup' and it did radicalize the valley. In his recent post Richard Ngo called on the AI Safety movement to show more legitimacy and competence. The board fiasco torpedoed my trust in the legitimacy and competence of many senior AI safety people, so god knows how strong the update was for Silicon Valley as a whole.
As some evidence that this is known in EA circles, I think this is exactly what Dwarkesh is alluding to when asked 'what happened to the EA brand'. For many people in Silicon Valley, I think the answer is that it got thrown in the dustbin of history.
This new movement became increasingly right-wing coded: partly as a response to the culture wars in America and the increasing vitriol thrown by the left against 'tech bros', partly as a response to the California Ideology being threatened by any sense of AI oversight or regulation, and partly because EA is the enemy and EA was increasingly seen by this group as left-wing, woke, or part of the Democratic Party due to the funding patterns of SBF and Moskovitz. I think this has led, fairly predictably, to the rightward shift in SV and direct political affiliation with a (prospective) second Trump presidency.
Across all of this my impression is that, just like with Torres, there was little to no direct pushback. I can understand not wanting to be dragged into a memetic war, or to be involved in the darker parts of Twitter discourse. But the e-acc/techno-optimist/RW-Silicon-Valley movement was being driven by something, and I don't think AI Safety ever really argued against it convincingly, and definitely not in a convincing enough way to 'win' the memetic war. Like, the a16z cluster literally lied to Congress and to Parliament, but nothing much came of that fact.
I think this is very much linked to playing a strong 'inside game' to access the halls of power and no 'outside game' to gain legitimacy for that use of power. It's also, I think, due to EA not wanting to use social media to make its case, whereas the e-acc cluster was born and lives on social media.
Where are we now?
I'm not a part of the Bay Area scene and culture,[4] but it seems to me that the AI Safety movement has lost the 'mandate of heaven' to whatever extent it did have it. SB-1047 is a push to change policy that has resulted in backlash, and may result in further polarisation and counter-attempts to fight back in a zero-sum political game. I don't know if it's constitutional for a Trump/Vance administration to use the Supremacy Clause to void SB-1047, but I don't doubt that they might try. Biden's executive order seems certain for the chopping block. I expect a Trump administration to be a lot less sympathetic to the Bay Area/DC AI Safety movements, and the right-wing part of Silicon Valley will be at the very least energised to fight back harder.
One concerning thing for both Silicon Valley and the AI Safety movement is what happens as a result of the ideological consequences of SV accepting this trend. Already a strong fault-line is the extreme social conservatism and incipient nationalism brought about by this. On the recent a16z podcast, Ben Horowitz literally accuses the Biden administration of breaking the rule of law, and says nothing about Trump literally refusing to concede the 2020 election and declaring that there was electoral fraud. Mike Solana seems to think that all risks of democratic backsliding under a Trump administration were/are overblown (or at least that people in the Bay agreeing was preference falsification). On the Moments-of-Zen Podcast (which has also hosted Curtis Yarvin twice), Balaji Srinivasan accused the 'Blue Tribe' of ethnically cleansing him out of SF[5] and called on the grey tribe to push all the blues out of SF. e-acc-sympathetic people are noting anti-trans ideas bubbling up in the new movement. You cannot seriously engage with ideas and shape them without those ideas changing you.[6] This right-wing shift will have further consequences, especially under a second Trump presidency.
What next for the AI Safety field?
I think this is a bad sign for the field of AI Safety. AI has escaped political polarisation for a while. Current polls may lean in support, but polls and political support are fickle, especially in the age of hyper-polarisation.[7] I feel like my fears around the perception of Open Philanthropy are recurring here, but for the AI Safety movement at large.
I think the consistent defeats to the e-acc school, and the fact that the tech sector as a whole seems very much unconvinced by the arguments for AI Safety, should at some point lead to a reflection from the movement. Where you stand on this very much depends on your object-level beliefs. While there is a lot of e-acc discourse around transhumanism, replacing humanity, and the AI eschaton, I don't really buy it. I think they simply don't think ASI is possible soon, and thus that all arguments for AI Safety are bunk. Now, while the tech sector as a whole might not be as hostile, they don't seem at all convinced of the 'ASI-soon' idea.
A key point I want to emphasise is that one cannot expect to wield power successfully without also having legitimacy.[8] And to the extent that the AI Safety movement's strategy is trying to thread this needle, it will fail.
Anyway, long ramble over, and given this was basically a one-shot ramble it will have many inaccuracies and flaws. Nevertheless I hope that it can be directionally useful and lead to productive discussion.
lol, lmao
Would be very interested to hear the thoughts of people in the Bay on this
And if invited to be, I would almost certainly decline.
He literally used the phrase 'ethnically cleanse'. This is extraordinarily dangerous language in a political context.
A good example in fiction is in Warhammer40K, where Horus originally accepts the power of Chaos to fight against Imperial Tyranny, but ends up turning into their slave.
Due to polarisation, views can dramatically shift on even major topics such as the economy and national security (I know these are messy examples!). Current poll leads for AI regulation should not, in any way, be considered secure.
I guess you could also have overwhelming might and force, but even that requires legitimacy. Caesar needed to be seen as legitimate by Mark Antony; Alexander didn't have the legitimacy to get his army to cross the Hyphasis, etc.
No, really, I appreciated your perspective, both on SMA and on what we mean when we talk about 'EA'. It has definitely given me some food for thought :)
Feels like you've slightly misunderstood my point of view here, Lorenzo? Maybe that's on me for not communicating it clearly enough though.
For what it's worth, Rutger has been donating 10% to effective charities for a while and has advocated for the GWWC pledge many times... So I don't think he's against that, and lots of people have taken the 10% pledge specifically because of his advocacy
That's great! Sounds very 'EA' to me 🤷
I think this mixes effective altruism ideals/goals (which everyone agrees with) with EA's specific implementation, movement, culture and community.
I'm not sure everyone does agree, really; some people have foundational moral differences. But that aside, I think effective altruism is best understood as a set of ideas/ideals/goals. I've been arguing that on the Forum for a while and will continue to do so. So I don't think I'm mixing; I think the critics are mixing.
This doesn't mean that they're not pointing out very real problems with the movement/community. I still strongly think that the movement has a lot of growing pains/reforms/reckonings to go through before we can heal the damage of FTX and onwards.
The 'win by ippon' was just a jokey reference to Michael Nielsen's 'EA judo' phrase, not me advocating for soldier over scout mindset.
If we want millions of people to e.g. give effectively, I think we need to have multiple 'movements', 'flavours' or 'interpretations' of EA projects.
I completely agree! Like 100000% agree! But that's still 'EA'? I just don't understand trying to draw such a big distinction between SMA and EA in the case where they reference a lot of the same underlying ideas.
So I don't know, feels like we're violently agreeing here or something? I didn't mean to suggest anything otherwise in my original comment, and I even edited it to make it clearer that I was more frustrated at the interviewer than at anything Rutger said or did (it's possible that a lot of the non-quoted phrasing was put in his mouth).
Just a general note: I think adding some framing of the piece, maybe key quotes, and perhaps your own thoughts as well would improve this from a bare link-post. As for the post itself:
It seems Bregman views EA as:
a misguided movement that sought to weaponize the country's capitalist engines to protect the planet and the human race
Not really sure how donating ~10% of my income to Global Health and Animal Welfare charities matches that framework tbqh. But yeah, 'weaponize' is highly aggressive language here; if you take it out, there's not much wrong with it. Maybe Rutger or the interviewer think Capitalism is inherently bad or something?
effective altruism encourages talented, ambitious young people to embrace their inner capitalist, maximize profits, and then donate those profits to accomplish the maximum amount of good.
Are we really doing the earn-to-give thing again here? But, snark aside, there isn't really an argument here, apart from again implicitly associating capitalism with badness. EA people have also warned about the dangers of maximisation before, so this isn't unknown to the movement.
Bregman saw EA's demise long before the downfall of the movement's poster child, Sam Bankman-Fried
Is this implying that EA is dead (news to me) or that it is in terminal decline (arguable, but knowledge of the future is difficult etc. etc.)?
he [Rutger] says the movement [EA] ultimately 'always felt like moral blackmailing to me: you're immoral if you don't save the proverbial child. We're trying to build a movement that's grounded not in guilt but enthusiasm, compassion, and problem-solving.'
I mean, this doesn't sound like an argument against EA or EA ideas? It's perhaps why Rutger felt put off by the movement, but then if you want a movement based on 'enthusiasm, compassion, and problem-solving' (which are still very EA traits to me, btw), that's presumably because it would do more good than a movement wracked by guilt. This just falls victim to classic EA Judo; we win by ippon.
I don't know, maybe Rutger has written up his criticism more thoroughly somewhere. This article feels like such a weak summary of it though, and just leaves me feeling frustrated. And in a bunch of places, it's really EA! See:
Using Rob Mather founding AMF as a case study (and who has a better EA story than AMF?)
Pointing towards reducing consumption of animals via less meat-eating
Even explicitly admires EA's support for 'non-profit charity entrepreneurship'
So where's the EA hate coming from? I think 'EA hate' is too strong and is mostly/actually coming from the interviewer, maybe more than Rutger. Seems Rutger is very disillusioned with the state of EA, but many EAs feel that way too! Pinging @Rutger Bregman or anyone else from the EA Netherlands scene for thoughts, comments, and responses.
I just want to publicly state that the whole 'meat-eater problem' framing makes me incredibly uncomfortable.
First: why not call it the 'meat-eating' problem rather than the 'meat-eater' problem? Human beliefs and behaviours are changeable and malleable. Future moral attitudes are not set in stone; human history itself should be proof enough of that. Seeing other human beings as 'problems to be solved' is inherently dehumanising.
Second: the call on whether net human wellbeing is negated by net animal wellbeing is highly dependent on both moral weights and overall moral view. It isn't a 'solved' problem in moral philosophy. There's also a lot of empirical uncertainty that people below have pointed out, r.e. saving a life != increasing the population, counterfactual wild animal welfare without humans might be even more negative, etc.
Third, and most importantly: this pattern matches onto very, very dangerous beliefs:
Rich people in the Western World saying that poor people in Developing countries do not deserve to live/exist? bad bad bad bad bad
Belief that humanity, or a significant amount of it, ought not to exist (or that the world would be better off were they to stop existing)? danger danger
Like, already in the thread we've got examples of people considering whether murdering someone who eats meat isn't immoral, whether they ought to Thanos-snap all humans out of existence, and analogising the average unborn child in the developing world to baby Hitler. My alarm bells are ringing.
The dangers of the above grow exponentially if proponents are incredibly morally certain about their beliefs and unlikely to change them regardless of the evidence shown, believe that they may only have one chance to change things, or believe that otherwise unjustifiable actions are justified in their case due to moral urgency.
For clarification, I think Factory Farming is a moral catastrophe and I think ending it should be a leading EA cause. I just think that the latent misanthropy in the meat-eater problem framing/worldview is also morally catastrophic.
In general, reflecting on this framing makes it ever more clear to me that I'm just not a utilitarian or a totalist.