'EA-Adjacent' now I guess.
🔸 10% Pledger.
Likes pluralist conceptions of the good.
Dislikes Bay Culture being in control of the future.
Piggybacking on this comment because I feel like the points have been well-covered already:
Given that the podcast is going to have a tighter focus on AGI, I wonder if the team is giving any consideration to featuring more guests who present well-reasoned skepticism toward 80k's current perspective (broadly understood). While some skeptics might be so sceptical of AGI or hostile to EA that they wouldn't make good guests, I think there are many thoughtful experts who could present a counter-case that would make for a useful episode(s).
To me, this comes from a case for epistemic hygiene, especially given the prominence that the 80k podcast has. To outside observers, 80k's recent pivot might appear less as 'evidence-based updating' and more as 'surprising and suspicious convergence' without credible demonstrations that the team actually understands opposing perspectives and can respond to the obvious criticisms. I don't remember the podcast featuring many guests who present a counter-case to 80k's AGI-bullishness, as opposed to marginal critiques, and I don't particularly remember those arguments/perspectives being given much time or care.
Even if the 80k team is convinced by the evidence, I believe that many in both the EA community and 80k's broader audience are not. From a strategic persuasion standpoint, even if you believe the evidence for transformative AI and x-risk is overwhelming, interviewing primarily those already convinced within the AI Safety community will likely fail to persuade those who don't already find that community credible. Finally, there's also significant value in 'pressure testing' your position through engagement with thoughtful critics, especially if your theory of change involves persuading people who are either sceptical themselves or just unconvinced.
Some potential guests who could provide this perspective (note: I don't 100% endorse the people below, just that they point in the direction of guests who might do a good job at the above):
Melanie Mitchell
François Chollet
Kenneth Stanley
Tan Zhi-Xuan
Nora Belrose
Nathan Lambert
Sarah Hooker
Timothy B. Lee
Krishnan Rohit
I don't really get the framing of this question.
I suspect that, for any increment of time one could take through EA's existence, there would have been more 'harm' done in the rest of the world during that time. EA simply isn't big enough to counteract the moral actions of the rest of the world. Wild animals suffer horribly, people constantly die of preventable diseases, and formal wars and violent struggles affect the lives of millions. The sheer scale of the world outweighs EA many, many times over.
So I suspect you're making a more direct comparison to Musk/DOGE/PEPFAR? But again, I feel like anyone wielding the awesome executive power of the United States Government should expect to have larger impacts on the world than EA.
I think this is downstream of a lot of confusion about what 'Effective Altruism' really means, and I realise I don't have a good definition any more. In fact, the fact that all of the below can be criticised sort of explains why EA gets seemingly infinite criticism from all directions.
Is it explicit self-identification?
Is it explicit membership in a community?
Is it implicit membership in a community?
Is it if you get funded by Open Philanthropy?
Is it if you are interested or working in some particular field that is deemed 'effective'?
Is it if you believe in totalising utilitarianism with no limits?
Is it to always justify your actions with quantitative cost-effectiveness analyses where your chosen course of action is the top-ranked one?
Is it if you behave a certain way?
Because in many ways I don't count as EA based on the above. I certainly feel less like one than I have in a long time.
For example:
I think a lot of EAs assume that OP shares a lot of the same beliefs they do.
I don't know if this refers to some gestalt 'belief' that OP might have, or Dustin's beliefs, or some kind of 'intentional stance' regarding OP's actions. While many EAs share some beliefs (I guess), there's also a whole range of variance within EA itself, and the fundamental issue is that I don't know if there's something which can bind it all together.
I guess I think the question should be less 'public clarification on the relationship between effective altruism and Open Philanthropy' and more 'what does "Effective Altruism" mean in 2025?'
I mean, I just don't take Ben to be a reasonable actor regarding his opinions on EA? I doubt you'll see him open up and fully explain a) who the people he's arguing with are or b) what the explicit change in EA to an 'NGO patronage network' was, with names, details, and public evidence of the above, while being willing to change his mind in response to counter-evidence.
He seems to have been related to Leverage Research, maybe in the original days?[1] There was a big falling out there, and many people linked to original Leverage hate 'EA' with the fire of a thousand burning suns. Then he linked up with Samo Burja at Bismarck Analysis and also with Palladium, which definitely links him to the emerging Thielian tech-right, kinda what I talk about here. (Ozzie also had a good LW comment about this here.)
In the original tweet Emmett Shear replies, and then it spiralled into loads of fractal discussions, and I'm still not really clear what Ben means. Maybe you can get more clarification in Twitter DMs, rather than in a public argument where he'll want to dig into his position?
For the record, a double Leverage & Vassar connection seems pretty disqualifying to me, especially as I'm very Bay-sceptical anyway.
I think the theory of change here is that the Abundance Agenda taking off in the US would provide an ideological frame for the Democratic Party to both a) get competitive in the races it needs to win to gain power in the Executive & Legislature and b) have a framing that allows it to pursue good policies when in power, which then unlocks a lot of positive value elsewhere.
It also answers the 'why just the US?' question, though that seemed kind of obvious to me.
And as for there being no cost-effectiveness calculation: it seems that this is the kind of systemic change many people in EA want to see![1] And it's very hard to get accurate cost-effectiveness analyses for those. But again, I don't know if that's being too harsh on OP, as many longtermist organisations don't seem to publicly publish their CEAs apart from general reasoning like 'the future could be very large and very good'.
Maybe it's not the exact flavour/ideology they want to see, but it does seem 'systemic' to me.
I think one crux here is what to do in the face of uncertainty.
You say:
If you put a less unreasonable (from my perspective) number like 50% that we'll have AGI in 30 years, and 50% we won't, then again I think your vibes and mood are incongruent with that. Like, if I think it's 50-50 whether there will be a full-blown alien invasion in my lifetime, then I would not describe myself as an 'alien invasion risk skeptic', right?
But I think sceptics like titotal aren't anywhere near 5%; in fact they deliberately do not have a number. And when they have low credences in the likelihood of rapid, near-term, transformative AI progress, they aren't saying 'I've looked at the evidence for AI progress and am confident putting it at less than 1%' or whatever; they're saying something more like 'I've looked at the arguments for rapid, transformative AI progress and it seems so unfounded/hype-based to me that I'm not even giving it table stakes'.
I think this is a much more realistic form of bounded rationality. Sure, in some perfect Bayesian sense you'd want to assign every hypothesis a probability and make sure they all sum to 1, etc. But in practice that's not what people do. I think titotal's experience (though obviously this is my interpretation, get it from the source!) is that they see a bunch of wild claims X, do a spot check on their field of materials science, and come away so unimpressed that they relegate the 'transformative near-term LLM-based AGI' hypothesis to 'not a reasonable hypothesis'.
To them, I feel it's less like someone asking 'don't put the space heater next to the curtains because it might cause a fire' and more like 'don't keep the space heater in the house because it might summon the fire demon Asmodeus, who will burn the house down'. To titotal and other sceptics, the evidence presented is not commensurate with the claims made.
(For reference, while previously also sceptical, I actually have become a lot more concerned about transformative AI over the last year based on some of the results, but that is from a much lower baseline, and my concerns are more about politics/concentration of power than loss of control to autonomous systems.)
I appreciate the concern that you (and clearly many other Forum users) have, and I do empathise. Still, I'd like to present a somewhat different perspective to others here.
EA seems far too friendly toward AGI labs and feels completely uncalibrated to the actual existential risk (from an EA perspective)
I think this implicitly assumes that there is such a thing as 'an EA perspective', but I don't think this is a useful abstraction. EA has many different strands, and in general seems a lot more fractured post-FTX.
e.g. You ask 'Why aren't we publicly shaming AI researchers every day?', but if you're an AI-sceptical EA working in GH&D that seems entirely useless to your goals! If you take 'we' to mean all EAs already convinced of AI doom, then that's assuming the conclusion; whether there is an action-significant amount of doom is the question here.
Why are we friendly with Anthropic? Anthropic actively accelerates the frontier, currently holds the best coding model, and explicitly aims to build AGI - yet somehow, EAs rally behind them? I'm sure almost everyone agrees that Anthropic could contribute to existential risk, so why do they get a pass? Do we think their AGI is less likely to kill everyone than that of other companies?
Anthropic's alignment strategy, at least the publicly facing version, is found here.[1] I think Chris Olah's tweets about it, found here, include one particularly useful chart:
The probable cruxes here are that 'Anthropic', or various employees there, are much more optimistic about the difficulty of AI safety than you are. They also likely believe that empirical feedback from actual frontier models is crucial to a successful science of AI Safety. I think if you hold these two beliefs, then working at Anthropic makes a lot more sense from an AI Safety perspective.
For the record, the more technical work I've done, and the more understanding I have of AI systems as they exist today, the more 'alignment optimistic' I've become, and I get increasingly skeptical of OG-MIRI-style alignment work, or of AI Safety work done in the absence of actual models. We must have contact with reality to make progress,[2] and I think the AI Safety field cannot update on this point strongly enough. Beren Millidge has really influenced my thinking here, and I'd recommend reading Alignment Needs Empirical Evidence and other blog posts of his to get this perspective (which I suspect many people at Anthropic share).
Finally, pushing the frontier of model performance isn't a priori bad, especially if you don't accept MIRI-style arguments. Like, I don't see Sonnet 3.7 as increasing the risk of extinction from AI. In fact, it seems to be both a highly capable model and one that is very well aligned according to Anthropic's HHH criteria. All of my experience using Claude and engaging with the research literature about the model has pushed my distribution of AI Safety difficulty towards the 'Steam Engine' level in the chart above, instead of the 'P vs NP'/'Impossible' level.
Spending time in the EA community does not calibrate me to the urgency of AI doomerism or the necessary actions that should follow
Finally, on the 'necessary actions' point: even if we had a clear empirical understanding of what the current p(doom) is, there are no clear necessary actions. There are still lots of arguments to be had here! Matthew Barnett has argued in these comments that one can make utilitarian arguments for AI acceleration even in the presence of AI risk,[3] and Nora Belrose has argued that pause-style policies will likely be net-negative. You don't have to agree with either of these, but they do mean that there aren't clear 'necessary actions', at least from my PoV.
Of course, if one has completely lost trust in Anthropic as an actor, then this isn't useful information to you at all. But I think that's conceptually a separate problem, because I think I have given information to answer the questions you raise, perhaps not to your satisfaction.
Theory will only take you so far
Though this isn't what motivates Anthropic's thinking afaik
To the extent that word captures the classic 'single superintelligent model' form of risk
I have some initial data on the popularity and public/elite perception of EA that I wanted to write into a full post, something along the lines of 'What is EA's reputation, 2.5 years after FTX?' I might combine my old idea of a Forum data analytics update into this one to save time.
My initial data/investigation into this question ended up being a lot more negative than other surveys of EA. The main takeaways are:
Declining use of the Forum, both in total and amongst influential EAs
EA has a very poor reputation in the public intellectual sphere, especially on Twitter
Many previously highly engaged/talented users quietly leaving the movement
An increasing philosophical pushback to the tenets of EA, especially from the New/Alt/Tech Right, instead of the more common 'the ideas are right, but the movement is wrong in practice'[1]
An increasing rift between Rationalism/LW and EA
Lack of a compelling 'fightback' from EA leaders or institutions
Doing this research did contribute to me being a lot more gloomy about the state of EA, but I think I do want to write this one up to make the knowledge more public, and allow people to poke flaws in it if possible.
To me this signals more values-based conflict, which makes it harder to find pareto-improving ways to co-operate with other groups
I do want to write something along the lines of 'Alignment is a Political Philosophy Problem'.
My takes on AI, and the problem of x-risk, have been in flux over the last 1.5 years, but they seem to be more and more focused on the idea of power and politics, as opposed to finding a mythical 'correct' utility function for a hypothesised superintelligence. Making TAI/AGI/ASI go well therefore falls in the reference class of 'principal-agent problem'/'public choice theory'/'social contract theory' rather than 'timeless decision theory'/'coherent extrapolated volition'. The latter two are poor answers to an incorrect framing of the question.
Writing that influenced me on this journey:
Tan Zhi-Xuan's whole body of work, especially Beyond Preferences in AI Alignment
Joe Carlsmith's Otherness and control in the age of AGI sequence
Matthew Barnett's various recent posts on AI, especially viewing it as an 'institutional design' problem
Nora Belrose's various posts on scepticism of the case for AI Safety, and even on Safety policy proposals conditional on the first case being right[1]
The recent Gradual Disempowerment post is something along the lines of what I'm thinking too
I also think this view helps explain the huge range of backlash that AI Safety received over SB1047 and after the awfully botched OpenAI board coup. They were both attempted exercises in political power, and the pushback often criticised that exercise of power instead of engaging with the 'object level' of the risk arguments. I increasingly think that this is not an 'irrational' response but a perfectly rational one, and that 'AI Safety' needs to pursue more co-operative strategies that credibly signal legitimacy.
I think the downvotes these got are, in retrospect, a poor sign for epistemic health
I don't think anyone wants or needs another 'Why I'm leaving EA' post, but I suppose if people really wanted to hear it I could write it up. I'm not sure I have anything new or super insightful to share on the topic.
My previous attempt at predicting what I was going to write got 1/4, which ain't great.
This is partly planning fallacy, partly real life being a lot busier than expected and Forum writing being one of the first things to drop, and partly increasing gloom and disillusionment with EA, which means I don't have the same motivation to write or contribute to the Forum as I did previously.
For the things that I am still thinking of writing, I'll add comments to this post separately so votes and comments can be attributed to each idea individually.
Not to self-promote too much but I see a lot of similarities here with my earlier post, Gradient Descent as an analogy for Doing Good :)
I think they complement each other,[1] with yours emphasising the guidance of the 'moral peak', and mine warning against going too straight and ignoring the ground underneath you giving way.
I think there is an underlying point that cluelessness wins over global consequentialism, which is practically unworkable, and that solid moral heuristics are a more effective way of doing good in a world with complex cluelessness.
Though you flipped the geometry for the more intuitive 'reaching a peak' rather than the ML-traditional 'descending a valley'
I also think itâs likely that SMA believes that for their target audience it would be more valuable to interact with AIM than with 80k or CEA, not necessarily for the 3 reasons you mention.
I mean, the reasoning behind this seems very close to #2, no? The target audience they're looking at is probably more interested in neartermism than AI/longtermism, and they don't think they can get much tractability working with the current EA ecosystem?
The underlying idea here is the Housing Theory of Everything.
A lossy compression of the idea: if you fix the housing crisis in Western economies, you'll unlock positive outcomes across economic, social, and political metrics, through which you can then have a high positive impact.
A sketch, for example, might be that you want the UK government to do lots of great stuff in AI Safety. But UK state capacity in general might be completely borked until it sorts out its housing crisis.
Reminds me of when an article about Rutger popped up on the Forum a while back (my comments here)
I expect SMA people probably think something along the lines of:
EA funding and hard power is fairly centralised. SMA want more control over what they do/fund/associate with and so want to start their own movement.
EA has become AI-pilled and longtermist. Those who disagree need a new movement, and SMA can be that movement.
EA's brand is terminally tarnished after the FTX collapse. Even though SMA agrees a lot with EA, it needs to market itself as 'not EA' as much as possible to avoid negative social contagion.
Not making a claim myself about whether and to what extent those claims are true.
Like Ian Turner I ended up disagreeing and not downvoting (I appreciate the work Vasco puts into his posts).
The shortest answer is that I find the 'Meat Eater Problem' repugnant and indicative of defective moral reasoning that, if applied at scale, would lead to great moral harm.[1]
I don't want to write a super long comment, but my overall feelings on the matter have not changed since this topic last came up on the Forum. In fact, I'd say that one of the leading reasons I consider myself drastically less 'EA' over the last ~6 months is the seeming embrace of the 'Meat-Eater Problem' built into both the EA community and its core ideas, or at least the more 'naïve utilitarian' end of things. To me, Vasco's bottom-line result isn't an argument that we should stop preventing children dying of malnutrition or suffering from malaria because of these second-order effects.
Instead, naïve hedonistic utilitarians should be asking themselves: if the rule you followed brought you to this, of what use was the rule?
I also agree factory farming is terrible. I just want to find pareto solutions that reduce needless animal suffering and increase human flourishing.
Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the ✨ totally-not-serious-worth-no-internet-points-JWS-Forum-Awards ✨, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
Best Forum Post I read this year:
Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal
It was a tough choice this year, but I think this deep, deep dive into the different cost-effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full Google doc they worked through is here.
This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved the people on the original post, drilling down into the models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work.
Honourable Mentions:
Towards more cooperative AI safety strategies by @richard_ngo: This was a post that I read at exactly the right time for me, as it came at a point when I was also highly concerned that the AI Safety field was having a 'legitimacy problem'.[1] As such, I think Richard's call to action to focus on legitimacy and competence is well made, and I would urge those working explicitly in the field to read it (as well as the comments and discussion on the LessWrong version), and perhaps consider my quick take on the 'vibe shift' in Silicon Valley as a chaser.
On Owning Our EA Affiliation by @Alix Pham: One of the most wholesome EA posts this year on the Forum? The post is a bit bittersweet to me now, as I was moved by it at the time but I now affiliate and identify less with EA than I have for a long time. The vibes around EA have not been great this year, and while many people are explicitly or implicitly abandoning the movement, Alix actually took the radical step of doing the opposite. She's careful to try to draw a distinction between affiliation and identity, and really engages in the comments, leading to very good discussion.
Policy advocacy for eradicating screwworm looks remarkably cost-effective by @MathiasKB🔸: EA Megaprojects are BACK baby! More seriously, this post had the most 'blow my mind' effect on me this year. Who knew that the US Gov already engages in a campaign of strategic sterile-fly bombing, dropping millions of them on Central America every week? I feel like Mathias did great work finding a signal here, and I'm sure other organisations (maybe an AIM-incubated one) are well placed to pick up the baton.
Forum Posters of the Year:
@Vasco Grilo🔸 - I presume that the Forum has a bat-signal of sorts for when long discussions are happening without anyone trying to do an EV calculation. In such dire times, Vasco appears, always with amazing sincerity and thoroughness. Probably the Forum's current poster child for 'calculate all the things' EA. I think this year he's been an awesome presence on the Forum, and long may it continue.
@Matthew_Barnett - Matthew is somewhat of an enigma to me ideologically; there have been many cases where I've read a position of his and gone 'no, that can't be right'. Nevertheless, I think the consistently high-quality nature of his contributions on the Forum, often presenting an unorthodox view compared to the rest of EA, is worth celebrating regardless of whether I personally agree. Furthermore, one of my major updates this year has been towards viewing the Alignment Problem as one of political participation and incentives, and this can probably be traced back significantly to his posts this year.
Non-Forum Poasters of the Year:
Matt Reardon (mjreard on X) - X is not a nice place to be an Effective Altruist at the moment. EA seems to be attacked from all directions, which means it's not fun at all to push back on people and defend the EA point of view. Yet Matt has consistently pushed back on some of the most egregious cases of this,[2] and has also had good discussions on EA Twitter too.
Jacques Thibodeau (JacquesThibs on X) - I think Jacques is great. He does interesting, cool work on Alignment, and you should consider working with him if you're also in that space. One of the most positive things that Jacques does on X is build bridges across the wider 'AGI Twitter', including with many who are sceptical of or even hostile to AI Safety work, like teortaxesTex or jd_pressman. I think this is to his great credit, and I've never (or rarely) seen him get angry on the platform, which might even deserve another award!
Congratulations to all of the winners! I also know that there were many people who made excellent posts and contributions that I couldn't shout out, but I want you to know that I appreciate all of you for sharing things on the Forum or elsewhere.
My final ask is, once again, for you all to share your appreciation for others on the Forum this year and tell me what your best posts/comments/contributors of the year were!
Yeah, I could have worded this better. What I mean to say is that I expect the tags 'Criticism of EA' and 'Community' co-occur in posts a lot more than two randomly drawn tags would, and probably rank quite high in a pairwise ranking. I don't mean to say that it's a necessary connection or should always be the case, but it does mean that downweighting Community posts will disproportionately downweight Criticism posts.
If I'm right, that is! I can probably scrape the data from 23-24 on the Forum to actually answer this question (see the sketch below).
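Something like this minimal sketch is roughly what I have in mind, assuming the post data were already scraped into a JSON file with a tags field per post (the filename and field names here are hypothetical, not an actual Forum export format):

```python
import json
from itertools import combinations
from collections import Counter

# Hypothetical input: a list of scraped Forum posts, each with a "tags" field.
with open("forum_posts_2023_2024.json") as f:
    posts = json.load(f)

pair_counts = Counter()
for post in posts:
    tags = sorted(set(post.get("tags", [])))
    # Count every unordered pair of tags that appears on the same post
    pair_counts.update(combinations(tags, 2))

# See where ("Community", "Criticism of EA") lands in the pairwise ranking
for pair, n in pair_counts.most_common(20):
    print(n, pair)
```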
Just flagging this for readers' context: I think Habryka's position/reading makes more sense if you view it in the context of an ongoing Cold War between Good Ventures and Lightcone.[1]
Some evidence on the GV side:
The change of funding priorities from Good Ventures seems to include stopping any funding for Lightcone.
Dustin seems to associate the decoupling norms of Lightcone with supporting actors and beliefs that he wants to have nothing to do with.
Dustin and Oli went back and forth in the comments above; some particularly revealing comments from Dustin are here and here, which, even if they are an attempt at gallows humour, to me also show a real rift.
To Habryka's credit, it's much easier to see what the 'Lightcone Ecosystem' thinks of OpenPhil!
He thinks that the actions of GV/OP were, and currently are, overall bad for the world.
I think the reasons why are mostly given here by MichaelDickens on LW, and Habryka adds some more concerns in the comments. My sense is that the LW commentariat is turning increasingly against OP, but that's just a vibe I have from skim-reading.
Some of it also appears to be for reasons to do with Lightcone's aversion to 'deception', broadly defined, which one can see in Habryka's reasoning in this post or in his reply here to Luke Muehlhauser. This philosophy doesn't seem to be explained in one place; I've only gleaned what I can from various posts/comments, so if someone does have a clearer example then feel free to point me in that direction.
This great comment during the Nonlinear saga, I think, helps make a lot of the Lightcone v OP discourse make sense.
I was nervous about writing this because I don't want to start a massive flame war, but I think it's helpful for the EA community to be aware that two powerful forces in it/adjacent to it[2] are essentially in a period of conflict. When you see comments from either side that seem more aggressive/hostile than you might otherwise think warranted, this may make the behaviour make more sense.
Note: I don't personally know any of the people involved, and live half a world away, so expect it to be very inaccurate. Still, this 'frame' has helped me try to grasp behaviours and attitudes which otherwise seem hard to explain to me, as an outsider to the 'EA/LW in the Bay' scene.
To my understanding, the Lightcone position on EA is that it 'should be disavowed and dismantled', but there's no denying that Lightcone is closer to EA than almost all other organisations in some sense
I responded well to Richard's call for More Co-operative AI Safety Strategies, and I like the call toward more sociopolitical thinking, since the Alignment problem really is a sociological one at heart (always has been). Things which help the community think along these lines are good imo, and I hope to share some of my own writing on this topic in the future.
Whether or not I agree with Richard's personal politics is kinda beside the point of this as a message. Richard is allowed to have his own views on things, and other people are allowed to criticise them (I think David Mathers' comment is directionally where I lean too). I will say that not appreciating arguments from open-source advocates, who are very concerned about the concentration of power from powerful AI, has led to a completely unnecessary polarisation against the AI Safety community from them. I think, while some tensions do exist, it wasn't inevitable that it'd get as bad as it is now, and in the end the polarisation has been particularly self-defeating. Again, by doing the kind of thinking Richard is advocating for (you don't have to co-sign his solutions, he's even calling for criticism in the post!), we can hopefully avoid these failures in the future.
On the bounties, the one that really interests me is the OpenAI board one. I feel like I've been living in a bizarro-world with EAs/AI Safety people ever since it happened, because it seemed such a colossal failure, either of legitimacy or strategy (most likely both), and it's a key example of the 'un-cooperative strategy' that Richard is concerned about imo. The combination of extreme action and ~0 justification, either externally or internally, remains completely bemusing to me and was a big wake-up call for my own perception of 'AI Safety' as a brand. I don't think people should underestimate the second-order impact this had on both 'AI Safety' and EA, coming about a year after FTX.