"EA-Adjacent" now I guess.
🔸 10% Pledger.
Likes pluralist conceptions of the good.
Dislikes Bay Culture being in control of the future.
I do want to write something along the lines of "Alignment is a Political Philosophy Problem"
My takes on AI, and the problem of x-risk, have been in flux over the last 1.5 years, but they do seem to be more and more focused on the idea of power and politics, as opposed to finding a mythical "correct" utility function for a hypothesised superintelligence. Making TAI/AGI/ASI go well therefore falls in the reference class of "principal-agent problem"/"public choice theory"/"social contract theory" rather than "timeless decision theory"/"coherent extrapolated volition". The latter two are poor answers to an incorrect framing of the question.
Writing that influenced me on this journey:
Tan Zhi Xuan's whole work, especially Beyond Preferences in AI Alignment
Joe Carlsmith's Otherness and control in the age of AGI sequence
Matthew Barnett's various posts on AI recently, especially viewing it as an "institutional design" problem
Nora Belrose's various posts on scepticism of the case for AI Safety, and even on Safety policy proposals conditional on the first case being right.[1]
The recent Gradual Disempowerment post is along the lines of what I'm thinking of too
I also think this view helps explain the huge backlash that AI Safety received over SB1047 and after the awfully botched OpenAI board coup. They were both attempted exercises in political power, and the pushback often criticised that exercise of power instead of engaging with the "object level" risk arguments. I increasingly think that this is not an "irrational" response but a perfectly reasonable one, and "AI Safety" needs to pursue more co-operative strategies that credibly signal legitimacy.
I think the downvotes these got are, in retrospect, a poor sign for epistemic health
I don't think anyone wants or needs another "Why I'm leaving EA" post, but I suppose if people really wanted to hear it I could write it up. I'm not sure I have anything new or super insightful to share on the topic.
My previous attempt at predicting what I was going to write got 1/4, which ain't great.
This is partly planning fallacy, partly real life being a lot busier than expected and Forum writing being one of the first things to drop, and partly increasing gloom and disillusionment with EA, which means I don't have the same motivation to write or contribute to the Forum as I did previously.
For the things that I am still thinking of writing, I'll add comments to this post separately, so that votes and comments can be attributed to each idea individually.
Not to self-promote too much but I see a lot of similarities here with my earlier post, Gradient Descent as an analogy for Doing Good :)
I think they complement each other,[1] with yours emphasising the guidance of the "moral peak", and mine warning against going too straight and ignoring the ground underneath you giving way.
I think there is an underlying point that cluelessness wins over global consequentialism, which is practically unworkable, and that solid moral heuristics are a more effective way of doing good in a world with complex cluelessness.
Though you flipped the geometry for the more intuitive "reaching a peak" rather than the ML-traditional "descending a valley".
I also think it's likely that SMA believes that for their target audience it would be more valuable to interact with AIM than with 80k or CEA, not necessarily for the 3 reasons you mention.
I mean, the reasoning behind this seems very close to #2, no? The target audience they're looking at is probably more interested in neartermism than AI/longtermism, and they don't think they can get much tractability working with the current EA ecosystem?
The underlying idea here is the Housing Theory of Everything.
A lossy compression of the idea is that if you fix the housing crisis in Western economies, you'll unlock positive outcomes across economic, social, and political metrics, through which you can then have high positive impact.
A sketch, for example, might be that you want the UK government to do lots of great stuff in AI Safety. But UK state capacity in general might be completely borked until it sorts out its housing crisis.
Reminds me of when an article about Rutger popped up on the Forum a while back (my comments here)
I expect SMA people probably think something along the lines of:
EA funding and hard power is fairly centralised. SMA want more control over what they do/fund/associate with and so want to start their own movement.
EA has become AI-pilled and longtermist. Those who disagree need a new movement, and SMA can be that movement.
EA's brand is terminally tarnished after the FTX collapse. Even though SMA agrees a lot with EA, it needs to market itself as "not EA" as much as possible to avoid negative social contagion.
Not making a claim myself about whether and to what extent those claims are true.
Like Ian Turner I ended up disagreeing and not downvoting (I appreciate the work Vasco puts into his posts).
The shortest answer is that I find the "Meat Eater Problem" repugnant and indicative of defective moral reasoning that, if applied at scale, would lead to great moral harm.[1]
I don't want to write a super long comment, but my overall feelings on the matter have not changed since this topic last came up on the Forum. In fact, I'd say that one of the leading reasons I consider myself drastically less "EA" as the last ~6 months have gone by is the seeming embrace of the "Meat-Eater Problem" built into both the EA community and its core ideas, or at least the more "naïve utilitarian" end of things. To me, Vasco's bottom-line result isn't an argument that we should stop preventing children dying of malnutrition or suffering from malaria because of these second-order effects.
Instead, naïve hedonistic utilitarians should be asking themselves: If the rule you followed brought you to this, of what use was the rule?
I also agree factory farming is terrible. I just want to find Pareto solutions that reduce needless animal suffering and increase human flourishing.
Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the ✨ totally-not-serious-worth-no-internet-points-JWS-Forum-Awards ✨, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
Best Forum Post I read this year:
Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal
It was a tough choice this year, but I think this deep, deep dive into the different cost effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full google doc they worked through is here.
This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved people on the original post, drilling down into models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work.
Honourable Mentions:
Towards more cooperative AI safety strategies by @richard_ngo: This was a post that I read at exactly the right time for me, as it came at a point when I was also highly concerned that the AI Safety field was having a "legitimacy problem".[1] As such, I think Richard's call to action to focus on legitimacy and competence is well made, and I would urge those working explicitly in the field to read it (as well as the comments and discussion on the LessWrong version), and perhaps consider my quick take on the "vibe shift" in Silicon Valley as a chaser.
On Owning Our EA Affiliation by @Alix Pham: One of the most wholesome EA posts this year on the Forum? The post is a bit bittersweet to me now, as I was moved by it at the time but now affiliate and identify less with EA than I have for a long time. The vibes around EA have not been great this year, and while many people are explicitly or implicitly abandoning the movement, Alix actually took the radical step of doing the opposite. She's careful to try to draw a distinction between affiliation and identity, and really engages in the comments, leading to very good discussion.
Policy advocacy for eradicating screwworm looks remarkably cost-effective by @MathiasKB🔸: EA Megaprojects are BACK baby! More seriously, this post probably had the most "blow my mind" effect on me this year. Who knew that the US Gov already engages in a campaign of strategic sterile-fly bombing, dropping millions of them on Central America every week? I feel like Mathias did great work finding a signal here, and I'm sure other organisations (maybe an AIM-incubated one) are well placed to pick up the baton.
Forum Posters of the Year:
@Vasco Grilo🔸 - I presume that the Forum has a bat-signal of sorts for when long discussions happen without anyone trying to do an EV calculation. And in such dire times, Vasco appears, always with amazing sincerity and thoroughness. Probably the Forum's current poster child of "calculate all the things" EA. I think this year he's been an awesome presence on the Forum, and long may it continue.
@Matthew_Barnett - Matthew is somewhat of an enigma to me ideologically; there have been many cases where I've read a position of his and gone "no, that can't be right". Nevertheless, I think the consistently high-quality nature of his contributions on the Forum, often presenting an unorthodox view compared to the rest of EA, is worth celebrating regardless of whether I personally agree. Furthermore, one of my major updates this year has been towards viewing the Alignment Problem as one of political participation and incentives, and this can probably be traced back significantly to his posts this year.
Non-Forum Poasters of the Year:
Matt Reardon (mjreard on X) - X is not a nice place to be an Effective Altruist at the moment. EA seems to be attacked from all directions there, which means it's not fun at all to push back on people and defend the EA point of view. Yet Matt has just consistently pushed back on some of the most egregious cases of this,[2] and has also had good discussions on EA Twitter too.
Jacques Thibodeau (JacquesThibs on X) - I think Jacques is great. He does interesting, cool work on Alignment, and you should consider working with him if you're also in that space. One of the most positive things that Jacques does on X is build bridges across the wider "AGI Twitter", including with many who are sceptical of or even hostile to AI Safety work, like teortaxesTex or jd_pressman. I think this is to his great credit, and I've never (or rarely) seen him get that angry on the platform, which might even deserve another award!
Congratulations to all of the winners! I also know that there were many people who made excellent posts and contributions that I couldn't shout out, but I want you to know that I appreciate all of you for sharing things on the Forum or elsewhere.
My final ask is, once again, for you all to share your appreciation for others on the Forum this year and tell me what your best posts/comments/contributors were this year!
Yeah, I could have worded this better. What I mean to say is that I expect that the tags "Criticism of EA" and "Community" probably co-occur in posts a lot more than two randomly drawn tags, and probably rank quite high on the pairwise ranking. I don't mean to say that it's a necessary connection or should always be the case, but it does mean that downweighting Community posts will disproportionately downweight Criticism posts.
If I'm right, that is! I can probably scrape the data from 23-24 on the Forum to actually answer this question.
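For what it's worth, the check itself is simple once the post-tag data is in hand. A minimal sketch, assuming a hypothetical `posts_tags` list standing in for whatever the scrape returns (the example posts below are made up):

```python
from collections import Counter
from itertools import combinations

# Purely hypothetical stand-in for scraped 2023-24 Forum data:
# one list of tag names per post.
posts_tags = [
    ["Community", "Criticism of EA"],
    ["AI Safety", "Community"],
    ["Global Health & Development", "Community", "Criticism of EA"],
]

pair_counts = Counter()
for tags in posts_tags:
    # Count each unordered tag pair once per post
    for pair in combinations(sorted(set(tags)), 2):
        pair_counts[pair] += 1

# Rank tag pairs by how often they co-occur
for pair, n in pair_counts.most_common(10):
    print(pair, n)
```

Comparing where the ("Community", "Criticism of EA") pair lands in that ranking against other pairs would show whether the co-occurrence really is as high as I expect.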
Just flagging this for context for readers: I think Habryka's position/reading makes more sense if you view it in the context of an ongoing Cold War between Good Ventures and Lightcone.[1]
Some evidence on the GV side:
The change of funding priorities from Good Ventures seems to include stopping any funding for Lightcone.
Dustin seems to associate the decoupling norms of Lightcone with supporting actors and beliefs that he wants to have nothing to do with.
Dustin and Oli went back and forth in the comments above; some particularly revealing comments from Dustin are here and here, which, even if they are an attempt at gallows humour, to me also show a real rift.
To Habryka's credit, it's much easier to see what the "Lightcone Ecosystem" thinks of OpenPhil!
He thinks that the actions of GV/OP were and currently are overall bad for the world.
I think the reason why is mostly given here by MichaelDickens on LW, and Habryka adds some more concerns in the comments. My sense is that the LW commentariat is turning increasingly against OP, but that's just a vibe I have when skim-reading.
Some of it also appears to be for reasons to do with the Lightcone aversion to "deception", broadly defined, which one can see from Habryka's reasoning in this post or his reply here to Luke Muehlhauser. This philosophy doesn't seem to be explained in one place; I've only gleaned what I can from various posts/comments, so if someone does have a clearer example then feel free to point me in that direction.
I think this great comment during the Nonlinear saga helps make a lot of the Lightcone v OP discourse make sense.
I was nervous about writing this because I don't want to start a massive flame war, but I think it's helpful for the EA Community to be aware that two powerful forces in it/adjacent to it[2] are essentially in a period of conflict. When you see comments from either side that seem to be more aggressive/hostile than you otherwise might think warranted, this may make the behaviour make more sense.
Note: I don't personally know any of the people involved, and live half a world away, so expect it to be very inaccurate. Still, this "frame" has helped me to try to grasp what I see as behaviours and attitudes which otherwise seem hard to explain to me, as an outsider to the "EA/LW in the Bay" scene.
To my understanding, the Lightcone position on EA is that it "should be disavowed and dismantled", but there's no denying that Lightcone is closer to EA than most, if not all, other organisations in some sense.
First, I want to say thanks for this explanation. It was both timely and insightful (I had no idea about the LLM screening, for instance). So I wanted to give that a big 👍
I think something Jan is pointing to (and correct me if I'm wrong @Jan_Kulveit) is that, because the default Community tag does downweight the visibility and coverage of a post, it could be implicitly used to deter engagement with certain posts. Indeed, my understanding was that this was pretty much exactly the case, and was driven by a desire to reduce Forum engagement on "Community" issues in the wake of FTX. See for example:
"Karma overrates some topics; resulting issues and potential solutions" from Lizka and Ben in January 2023
My comment and Lizka's response in the comments to that post
The reasoning given in the change announcement post, which confirms it was for the "other motives" that Jan mentions. That's at least how I read it.
Now, it is also true that I think the Forum was broadly supportive of this at the time. People were exhausted by FTX, it seemed like there was a new devastating EA scandal every week, and being able to downweight these discussions and focus on "real" EA causes was understandably very popular.[1] So it wasn't even necessarily a nefarious change; it was responding to user demand.
Nevertheless I think, especially since criticisms of EA also come with the "Community" tag attached,[2] it has also had the effect of somewhat reducing criticism and community sense-making. In retrospect, I still feel like the damage wrought by FTX hasn't had a full accounting, and the change to down-weight Community posts was trying to treat the "symptoms" rather than the underlying issues.
Sharing some planned Forum posts I'm considering, mostly as a commitment device, but welcome thoughts from others:
I plan to add another post in my "EA EDA" sequence analysing Forum trends in 2024. My pre-registered prediction is that we'll see 2023 patterns continue: declining engagement (though possibly plateauing) and AI Safety further cementing its dominance across posts, karma, and comments.
I'll also try to do another end-of-year Forum awards post (see here for last year's), though with slightly different categories.
I'm working on an analysis of EA's post-FTX reputation using both quantitative metrics (Forum engagement, Wikipedia traffic) and qualitative evidence (public statements from influential figures inside and outside EA). The preliminary data suggests more serious reputational damage than the recent Pulse survey. If that difference is meaningful (as opposed to methodological, or just a mistake on my part), I suspect it might highlight the difference between public and elite perception.
I recently finished reading former US General Stanley McChrystal's book Team of Teams. Ostensibly it's a book about his command of JSOC in the Iraq War, but it's really about the concept of Auftragstaktik as a method of command, and there was more than one passage which I thought was relevant to Effective Altruism (especially for what "Third Wave" EA might mean). This one is a stretch though; I'm not sure how interested the Forum would be in this, or whether it would be the right place to post it.
My focus for 2025 will be to work towards developing my position on AI Safety, and to share that through a series of posts in an AI Safety sequence.[1] The concept of AGI went mainstream in 2024, and it does look like we will see significant technological and social disruption in the coming decades due to AI development. Nevertheless, I find myself increasingly skeptical of traditional narratives and arguments about what Alignment is, the likelihood of risk, and what ought to be done about it. Instead, I've come to view "Alignment" primarily as a political philosophy problem rather than a technical computer science one. I could very well be wrong on most or all of these ideas, though, and getting critical discussion from the community will, I think, be good both for myself and (I hope) the Forum readership.[2]
As such, I'm considering doing a deep-dive on the Apollo o1 report, given the controversial reception it's had.[3] I think this is the most unlikely one though, as I'd want to research it as thoroughly as I could, and time is at a premium since Christmas is around the corner, so this is definitely a "stretch goal".
Finally, I don't expect to devote much more time[4] to adding to the "Criticism of EA Criticism" sequence. I often finish the posts well after the initial discourse has died down, and I'm not sure what effect they really have.[5] Furthermore, I've started to notice my own views on a variety of topics diverging from "EA Orthodoxy", so I'm not really sure I'd make a good defender. This change may itself warrant a future post, though again I'm not committing to that yet.
Which I will rename
It may be more helpful for those without technical backgrounds who are concerned about AI, but I'm not sure. I also think having a somewhat AGI-sceptical perspective represented on the Forum might be useful for intellectual diversity purposes, but I don't want to claim that too strongly. I'm very uncertain about the future of AI and could easily see myself being convinced to change my mind.
I'm slightly leaning towards the skeptical interpretation myself, as you might have guessed
if any at all, unless an absolutely egregious but widely-shared example comes up
Does Martin Sandbu read the EA Forum, for instance?
I think this is, to a significant extent, definitionally impossible with longtermist interventions, because the "long-term" part excludes having an empirical feedback loop quick enough to update our models of the world.
For example, if I'm curious about whether malaria net distribution or vitamin A supplementation is more "cost-effective" than the other, I can fund interventions and run RCTs, and then model the resulting impact according to some metric like the DALY. This isn't cast-iron evidence, but it is at least causally connected to the result I care about.
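To make that concrete, the comparison at the end of that loop is just arithmetic on the modelled outputs. A toy sketch (all the costs and DALY figures below are invented for illustration, not real figures from GiveWell or any trial):

```python
# Toy cost-effectiveness comparison: turn modelled programme results
# into a dollars-per-DALY figure for each intervention.

def cost_per_daly(total_cost: float, dalys_averted: float) -> float:
    """Cost-effectiveness expressed as dollars per DALY averted."""
    return total_cost / dalys_averted

# Hypothetical programme results, e.g. modelled from RCT effect sizes
interventions = {
    "malaria_nets": {"cost": 1_000_000, "dalys_averted": 12_000},
    "vitamin_a": {"cost": 1_000_000, "dalys_averted": 9_000},
}

for name, d in interventions.items():
    print(f"{name}: ${cost_per_daly(d['cost'], d['dalys_averted']):.0f} per DALY averted")
```

The point is not the numbers, which are made up, but that each input can in principle be updated by new empirical data from the field.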
For interventions that target the long-run future of humanity, this is impossible. We can't run counterfactuals of the future or past, and I at least can't wait 1000 years to see the long-term impact of certain decisions on the civilizational trajectory of the world. Thus, any longtermist intervention cannot really get empirical feedback on the parameters of action, and must mostly rely on subjective human judgement about them.
To their credit, the EA Long-Term Future Fund says as much on their own web page:
Unfortunately, there is no robust way of knowing whether succeeding on these proxy measures will cause an improvement to the long-term future.
For similar thoughts, see Laura Duffy's thread on empirical vs reason-driven EA
One potential weakness is that I'm curious if it promotes the more well-known charities due to the voting system. I'd assume that these are somewhat inversely correlated with the most neglected charities.
I guess this isn't necessarily a weakness if the more well-known charities are more effective? I can see the case that: a) they might not be neglected in EA circles, but may be very neglected globally compared to their impact, and that b) there is often an inverse relationship between tractability/neglectedness and importance/impact of a cause area and charity. Not saying you're wrong, but it's not necessarily a problem.
Furthermore, my anecdotal take from the voting patterns, as well as the comments on the discussion thread, seems to indicate that neglectedness is often high on the mind of voters, though I admit that commenters on that thread are a biased sample of all those voting in the election.
It can be a bit underwhelming if an experiment to try to get the crowd's takes on charities winds up determining to, "just let the current few experts figure it out."
Is it underwhelming? I guess if you want the donation election to be about spurring lots of donations to small, spunky EA startups working in weirder cause areas, it might be, but I don't think that's what I understand the intention of the experiment to be (though I could be wrong).
My take is that the election is an experiment in EA democratisation, where we get to see what the community values when we use a roughly 1-person-1-ballot system instead of the those-with-the-money-decide system which is how things work right now. The takeaways seem to be:
The broad EA community values Animal Welfare a lot more than the current major funders
The broad EA community sees value in all 3 of the "big cause areas", with high-scoring charities in Animal Welfare, AI Safety, and Global Health & Development.
But you haven't provided any data 🤷
Like, you could explain why you think so without de-anonymising yourself, e.g. sammy shouldn't put EA on his CV in US policy because:
Republicans are in control of most positions, and they see EA as heavily Democrat-coded and aren't willing to consider hiring people with it on their CV
The intelligentsia who hire for most US policy positions see EA as cult-like and/or disgraced after FTX
People won't understand what EA is on a CV and will discount sammy's chances compared to putting down "ran a discussion group at university" or something like that
You think EA is doomed/likely to collapse and sammy should pre-emptively disassociate their career from it
Like, I feel that would be interesting and useful to hear your perspective on, to the extent you can share information about it. Otherwise, just jumping in with strong (and controversial?) opinions from anonymous accounts on the Forum just serves to pollute the epistemic commons, in my opinion.
Right, but I don't know who you are, or what your position in the US Policy Sphere is, if you have one at all. I have no way to verify your potential background or the veracity of the information you share, which is one of the major problems with anonymous accounts.
You may be correct (though again, that lack of explanation doesn't help give detail or a mechanism for why, or help sammy that much, since as you said it depends on the section), but that isn't really the point; the only data point you provide is "intentionally anonymous person on the EA Forum states opinion without supporting explanations", which is honestly pretty weak sauce.
I don't find comments like these helpful without explanations or evidence, especially from throwaway accounts.
Yeah, again I just think this depends on one's definition of EA, which is the point I was trying to make above.
Many people have turned away from EA (the beliefs, institutions, and community alike) in the aftermath of the FTX collapse. Even Ben Todd seems to not be an EA by some definitions any more, be that via association or identification. Who is to say Leopold is any different, or has not gone further? What then is the use of calling him EA, or using his views to represent the "Third Wave" of EA?
I guess from my PoV what I'm saying is that I'm not sure there's much "connective tissue" between Leopold and myself, so when people use phrases like "listen to us" or "How could we have done", I end up thinking "who the heck is we/us?"
I have some initial data on the popularity and public/elite perception of EA that I wanted to write into a full post, something along the lines of "What is EA's reputation, 2.5 years after FTX?" I might combine my old idea of a Forum data analytics update into this one to save time.
My initial data/investigation into this question ended up being a lot more negative than other surveys of EA. The main takeaways are:
Declining use of the Forum, both in total and amongst influential EAs
EA has a very poor reputation in the public intellectual sphere, especially on Twitter
Many previously highly engaged/talented users quietly leaving the movement
An increasing philosophical pushback to the tenets of EA, especially from the New/Alt/Tech Right, instead of the more common "the ideas are right, but the movement is wrong in practice"[1]
An increasing rift between Rationalism/LW and EA
Lack of a compelling "fightback" from EA leaders or institutions
Doing this research did contribute to me being a lot more gloomy about the state of EA, but I think I do want to write this one up to make the knowledge more public, and allow people to poke flaws in it if possible.
To me this signals more values-based conflict, which makes it harder to find Pareto-improving ways to co-operate with other groups.