"EA-Adjacent" now I guess.
🔸 10% Pledger.
Likes pluralist conceptions of the good.
Dislikes Bay Culture being in control of the future.
I'm not sure I feel as concerned about this as others. tl;dr: They have different beliefs from Safety-concerned EAs, and their actions are a reflection of those beliefs.
It seems broadly bad that the alumni from a safety-focused AI org
Was Epoch ever a "safety-focused" org? I thought they were trying to understand what's happening with AI, not taking a position on Safety per se.
…have left to form a company which accelerates AI timelines
I think Matthew and Tamay think this is positive, since they think AI is positive. As they say, they think explosive growth can be translated into abundance. They don't think that the case for AI risk is strong, or significant, especially given the opportunity cost they see from leaving abundance on the table.
Also important to note is what Epoch boss Jaime says in this very comment thread.
As I learned more and the situation unfolded I have become more skeptical of AI Risk.
The same thing seems to be happening with me, for what it's worth.
People seem to think that there is an "EA Orthodoxy" on this stuff, but either there isn't as much of one as people think, or people who disagree with it are no longer EAs. I really don't think it makes sense to clamp down on "doing anything to progress AI" as being a hill for EA to die on.
Note: I'm writing this for the audience as much as a direct response
The use of Evolution to justify this metaphor is not really warranted. I think Quintin Pope's "Evolution provides no evidence for the sharp left turn" (which won a prize in an OpenPhil Worldview contest) convincingly argues against it. Zvi wrote a response from the "LW Orthodox" camp that wasn't convincing, and Quintin responds to it here.
The "Inner vs Outer" framing for misalignment is also kinda confusing and not that easy to understand when put under scrutiny. Alex Turner points this out here, and even BlueDot have a whole "Criticisms of the inner/outer alignment breakdown" section in their intro, which to me gives the game away by saying "they're useful because people in the field use them", not because they're useful as a concept itself.
Finally, a lot of these concerns revolve around the idea of there being set, fixed "internal goals" that these models have and represent internally, but which are themselves immune from change, or can be hidden from humans, etc. This kind of strong "Goal Realism" is a key part of the case for "Deception"-style arguments, whereas I think Belrose & Pope show that an alternative way to view how AIs work is "Goal Reductionism", in which framing the imagined issues no longer seem certain, as AIs are better understood as having "contextually-activated heuristics" rather than Terminal Goals. For more along these lines, you can read up on Shard Theory.
I've become a lot more convinced about these criticisms of "Alignment Classic" by diving into them. Of course, people don't have to agree with me (or the authors), but I'd highly encourage EAs reading the comments on this post to realise Alignment Orthodoxy is not uncontested and is not settled, and if you see people making strong cases based on arguments and analogies that seem not solid to you, you're probably right, and you should look to decide for yourself rather than accepting that the truth has already been found on these issues.[1]
And this goes for my comments too
I'm glad someone wrote this up, but I actually don't see much evaluation here from you, apart from "it's too early to say", but then Zhou Enlai pointed out that you could say that about the French Revolution,[1] and I think we can probably say some things. I generally have you mapped to the "right-wing Rationalist" subgroup Arjun,[2] so it'd actually be interesting to get your opinion instead of trying to read between the lines on what you may or may not believe. I think there was a pretty strong swing in Silicon Valley / Tech Twitter & TPOT / Broader Rationalism towards Trump, and I think this isn't turning out well, so I'd actually be interested to see people saying what they actually think, be that "I made a huge mistake", "It was a bad gamble but Harris would've been worse", or even "This is exactly what I want".
Hey Cullen, thanks for responding! So I think there are object-level and meta-level thoughts here, and I was just using Jeremy as a stand-in for the polarisation of Open Source vs AI Safety more generally.
Object Level: I don't want to spend too long here as it's not the direct focus of Richard's OP. Some points:
On "elite panic" and "counter-enlightenment", he's not directly comparing FAIR to it I think. He's saying that previous attempts to avoid democratisation of power in the Enlightenment tradition have had these flaws. I do agree that it is escalatory though.
I think, from Jeremy's PoV, that centralization of power is the actual ballgame and what Frontier AI Regulation should be about. So one mention on page 31 probably isn't good enough for him. That's a fine reaction to me, just as it's fine for you and Marcus to disagree on the relative costs/benefits and write the FAIR paper the way you did.
On the actual points though, I actually went back and skim-listened to the webinar on the paper in July 2023, which Jeremy (and you!) participated in, and man, I am so much more receptive and sympathetic to his position now than I was back then, and I don't really find Marcus and you that convincing in rebuttal. But as I say, I only did a quick skim listen, so I hold that opinion very lightly.
Meta Level:
On the "escalation" in the blog post, maybe his mind has hardened over the year? There's probably a difference between ~July23-Jeremy and ~Nov23-Jeremy, which he may view as an escalation from the AI Safety side to double down on these kinds of legislative proposals? While it's before SB1047, I see Wiener had introduced an earlier intent bill in September 2023.
I agree that "people are mad at us, we're doing something wrong" isn't a guaranteed logical proof, but as you say it's a good prompt to think "should I have done something different?", and (not saying you're doing this) I think the absolute disaster zone that was the SB1047 debate and discourse can't be fully attributed to e/acc or a16z or something. I think the backlash I've seen to the AI Safety/x-risk/EA memeplex over the last few years should prompt anyone in these communities, especially those trying to influence policy of the world's most powerful state, to really consider Cromwell's rule.
On "you will just in fact have pro-OS people mad at you, no matter how nicely your white papers are written": I think there's some sense in which it's true, but I think that there's a lot of contingency about just how many people get mad, how mad they get, and whether other allies could have been made along the way. I think one of the reasons things got so bad is because previous work on AI Safety has underestimated the socio-political sides of Alignment and Regulation.[1]
Again, not saying that this is referring to you in particular
I responded well to Richard's call for More Co-operative AI Safety Strategies, and I like the call toward more sociopolitical thinking, since the Alignment problem really is a sociological one at heart (always has been). Things which help the community think along these lines are good imo, and I hope to share some of my own writing on this topic in the future.
Whether I agree with Richard's personal politics or not is kinda beside the point to this as a message. Richard's allowed to have his own views on things and other people are allowed to criticise them (I think David Mathers' comment is directionally where I lean too). I will say that not appreciating arguments from open-source advocates, who are very concerned about the concentration of power from powerful AI, has led to a completely unnecessary polarisation against the AI Safety community from them. I think, while some tensions do exist, it wasn't inevitable that it'd get as bad as it is now, and in the end it was a particularly self-defeating one. Again, by doing the kind of thinking Richard is advocating for (you don't have to co-sign his solutions, he's even calling for criticism in the post!), we can hopefully avoid these failures in the future.
On the bounties, the one that really interests me is the OpenAI board one. I feel like I've been living in a bizarro-world with EAs/AI Safety people ever since it happened, because it seemed such a colossal failure, either of legitimacy or strategy (most likely both), and it's a key example of the "un-cooperative strategy" that Richard is concerned about imo. The combination of extreme action and ~0 justification either externally or internally remains completely bemusing to me and was a big wake-up call for my own perception of "AI Safety" as a brand. I don't think people should underestimate the second-impact effect this had on both "AI Safety" and EA, coming about a year after FTX.
Piggybacking on this comment because I feel like the points have been well-covered already:
Given that the podcast is going to have a tighter focus on AGI, I wonder if the team is giving any consideration to featuring more guests who present well-reasoned skepticism toward 80k's current perspective (broadly understood). While some skeptics might be so sceptical of AGI or hostile to EA that they wouldn't make good guests, I think there are many thoughtful experts who could present a counter-case that would make for a useful episode(s).
To me, this comes from a case for epistemic hygiene, especially given the prominence that the 80k podcast has. To outside observers, 80k's recent pivot might appear less as "evidence-based updating" and more as "surprising and suspicious convergence" without credible demonstrations that the team actually understands opposing perspectives and can respond to the obvious criticisms. I don't remember the podcast featuring many guests who present a counter-case to 80k's AGI-bullishness, as opposed to marginal critiques, and I don't particularly remember those arguments/perspectives being given much time or care.
Even if the 80k team is convinced by the evidence, I believe that many in both the EA community and 80k's broader audience are not. From a strategic persuasion standpoint, even if you believe the evidence for transformative AI and x-risk is overwhelming, interviewing primarily those already convinced within the AI Safety community will likely fail to persuade those who don't already find that community credible. Finally, there's also significant value in "pressure testing" your position through engagement with thoughtful critics, especially if your theory of change involves persuading people who are either sceptical themselves or just unconvinced.
Some potential guests who could provide this perspective (note: I don't 100% endorse the people below, just that they point in the direction of guests who might do a good job at the above):
Melanie Mitchell
François Chollet
Kenneth Stanley
Tan Zhi-Xuan
Nora Belrose
Nathan Lambert
Sarah Hooker
Timothy B. Lee
Krishnan Rohit
I don't really get the framing of this question.
I suspect, for any increment of time one could take through EA's existence, there would have been more "harm" done in the total rest of the world during that time. EA simply isn't big enough to counteract the moral actions of the rest of the world. Wild animals suffer horribly, people die of preventable diseases etc. constantly, and formal wars and violent struggles occur, affecting the lives of millions. The sheer scale of the world outweighs EA many, many times over.
So I suspect you're making a more direct comparison to Musk/DOGE/PEPFAR? But again, I feel like anyone wielding the awesome executive power of the United States Government should expect to have larger impacts on the world than EA.
I think this is downstream of a lot of confusion about what "Effective Altruism" really means, and I realise I don't have a good definition any more. In fact, because all of the below can be criticised, it sort of explains why EA gets seemingly infinite criticism from all directions.
Is it explicit self-identification?
Is it explicit membership in a community?
Is it implicit membership in a community?
Is it if you get funded by OpenPhilanthropy?
Is it if you are interested or working in some particular field that is deemed "effective"?
Is it if you believe in totalising utilitarianism with no limits?
Is it to always justify your actions with quantitative cost-effectiveness analyses where your chosen course of action is the top-ranked one?
Is it if you behave a certain way?
Because in many ways I don't count as EA based off the above. I certainly feel less like one than I have in a long time.
For example:
I think a lot of EAs assume that OP shares a lot of the same beliefs they do.
I don't know if this refers to some gestalt "belief" that OP might have, or Dustin's beliefs, or some kind of "intentional stance" regarding OP's actions. While many EAs share some beliefs (I guess), there's also a whole range of variance within EA itself, and the fundamental issue is that I don't know if there's something which can bind it all together.
I guess I think the question should be less "public clarification on the relationship between effective altruism and Open Philanthropy" and more "what does 'Effective Altruism' mean in 2025?"
I mean, I just don't take Ben to be a reasonable actor regarding his opinions on EA? I doubt you'll see him open up and fully explain a) who the people he's arguing with are or b) what the explicit change in EA to an "NGO patronage network" was, with names, details, public evidence of the above, and a willingness to change his mind in response to counter-evidence.
He seems to have been related to Leverage Research, maybe in the original days?[1] And there was a big falling out there, and many people linked to original Leverage hate "EA" with the fire of a thousand burning suns. Then he linked up with Samo Burja at Bismarck Analysis and also with Palladium, which definitely links him to the emerging Thielian tech-right, kinda what I talk about here. (Ozzie also had a good LW comment about this here.)
In the original tweet Emmett Shear replies, and then it's spiralled into loads of fractal discussions, and I'm still not really clear what Ben means. Maybe you can get more clarification in Twitter DMs rather than having an argument where he'll want to dig into his position publicly?
For the record, a double Leverage & Vassar connection seems pretty disqualifying to me, especially as I'm very Bay-sceptical anyway
I think the theory of change here is that the Abundance Agenda taking off in the US would provide an ideological frame for the Democratic Party to both a) get competitive in the races it needs to win to gain power in the Executive & Legislature and b) have a framing that allows it to pursue good policies when in power, which then unlocks a lot of positive value elsewhere
It also answers the "why just the US?" question, though that seemed kind of obvious to me
And as for no cost-effectiveness calculation, it seems that this is the kind of systemic change many people in EA want to see![1] And it's very hard to get accurate cost-effectiveness analyses of those. But again, I don't know if that's also being too harsh on OP, as many longtermist organisations don't seem to publicly publish their CEAs apart from general reasoning about "the future could be very large and very good"
Maybe it's not the exact flavour/ideology they want to see, but it does seem "systemic" to me
I think one crux here is what to do in the face of uncertainty.
You say:
If you put a less unreasonable (from my perspective) number like 50% that we'll have AGI in 30 years, and 50% we won't, then again I think your vibes and mood are incongruent with that. Like, if I think it's 50-50 whether there will be a full-blown alien invasion in my lifetime, then I would not describe myself as an "alien invasion risk skeptic", right?
But I think sceptics like titotal aren't anywhere near 5%; in fact, they deliberately do not have a number. And when they have low credences in the likelihood of rapid, near-term, transformative AI progress, they aren't saying "I've looked at the evidence for AI Progress and am confident at putting it at less than 1%" or whatever, they're saying something more like "I've looked at the arguments for rapid, transformative AI Progress and it seems so unfounded/hype-based to me that I'm not even giving it table stakes"
I think this is a much more realistic form of bounded rationality. Sure, in some perfect Bayesian sense you'd want to assign every hypothesis a probability and make sure they all sum to 1, etc. But in practice that's not what people do. I think titotal's experience (though obviously this is my interpretation, get it from the source!) is that they see a bunch of wild claims X, they do a spot check on their field of materials science, and they come away so unimpressed that they relegate the "transformative near-term LLM-based AGI" hypothesis to "not a reasonable hypothesis"
To them I feel it's less like someone asking "don't put the space heater next to the curtains because it might cause a fire" and more like "don't keep the space heater in the house because it might summon the fire demon Asmodeus who will burn the house down". To titotal and other sceptics, the evidence presented is not commensurate with the claims made.
(For reference, while previously also sceptical, I actually have become a lot more concerned about transformative AI over the last year based on some of the results, but that is from a much lower baseline, and my risks are more based around politics/concentration of power than loss of control to autonomous systems)
I appreciate the concern that you (and clearly many other Forum users) have, and I do empathise. Still, I'd like to present a somewhat different perspective to others here.
EA seems far too friendly toward AGI labs and feels completely uncalibrated to the actual existential risk (from an EA perspective)
I think that this implicitly assumes that there is such a thing as "an EA perspective", but I don't think this is a useful abstraction. EA has many different strands, and in general seems a lot more fractured post-FTX.
e.g. You ask "Why aren't we publicly shaming AI researchers every day?", but if you're an AI-sceptical EA working in GH&D, that seems entirely useless to your goals! If you take "we" to mean all EAs already convinced of AI doom, then that's assuming the conclusion; whether there is an action-significant amount of doom is the question here.
Why are we friendly with Anthropic? Anthropic actively accelerates the frontier, currently holds the best coding model, and explicitly aims to build AGI, yet somehow, EAs rally behind them? I'm sure almost everyone agrees that Anthropic could contribute to existential risk, so why do they get a pass? Do we think their AGI is less likely to kill everyone than that of other companies?
Anthropic's alignment strategy, at least the publicly facing version, is found here.[1] I think Chris Olah's tweets about it, found here, include one particularly useful chart:
The probable cruxes here are that "Anthropic", or various employees there, are much more optimistic about the difficulty of AI safety than you are. They also likely believe that empirical feedback from actual Frontier models is crucial to a successful science of AI Safety. I think if you hold these two beliefs, then working at Anthropic makes a lot more sense from an AI Safety perspective.
For the record, the more technical work I've done, and the more understanding I have about AI systems as they exist today, the more "alignment optimistic" I've got, and I am increasingly skeptical of OG-MIRI-style alignment work, or AI Safety work done in the absence of actual models. We must have contact with reality to make progress,[2] and I think the AI Safety field cannot update on this point strongly enough. Beren Millidge has really influenced my thinking here, and I'd recommend reading Alignment Needs Empirical Evidence and other blog posts of his to get this perspective (which I suspect many people at Anthropic share).
Finally, pushing the frontier of model performance isn't a priori bad, especially if you don't accept MIRI-style arguments. Like, I don't see Sonnet 3.7 as increasing the risk of extinction from AI. In fact, it seems to be both a highly capable model and one that's very well aligned according to Anthropic's HHH criteria. All of my experience using Claude and engaging with the research literature about the model has pushed my distribution of AI Safety difficulty towards the "Steam Engine" level in the chart above, instead of the P vs NP / Impossible level.
Spending time in the EA community does not calibrate me to the urgency of AI doomerism or the necessary actions that should follow
Finally, on the "necessary actions" point, even if we had a clear empirical understanding of what the current p(doom) is, there are no clear necessary actions. There are still lots of arguments to be had here! Matthew Barnett, for instance, has argued in these comments that one can make utilitarian arguments for AI acceleration even in the presence of AI risk,[3] and Nora Belrose has argued that pause-style policies will likely be net-negative. You don't have to agree with either of these, but they do mean that there aren't clear "necessary actions", at least from my PoV.
Of course, if one has completely lost trust in Anthropic as an actor, then this isn't useful information to you at all. But I think that's conceptually a separate problem, because I think I have given information to answer the questions you raise, perhaps just not to your satisfaction.
Theory will only take you so far
Though this isn't what motivates Anthropic's thinking afaik
To the extent that word captures the classic "single superintelligent model" form of risk
I have some initial data on the popularity and public/elite perception of EA that I wanted to write into a full post, something along the lines of What is EA's reputation, 2.5 years after FTX? I might combine my old idea of a Forum data analytics update into this one to save time.
My initial data/investigation into this question ended up being a lot more negative than other surveys of EA. The main takeaways are:
Declining use of the Forum, both in total and amongst influential EAs
EA has a very poor reputation in the public intellectual sphere, especially on Twitter
Many previously highly engaged/talented users quietly leaving the movement
An increasing philosophical pushback to the tenets of EA, especially from the New/Alt/Tech Right, instead of the more common "the ideas are right, but the movement is wrong in practice"[1]
An increasing rift between Rationalism/LW and EA
Lack of a compelling "fightback" from EA leaders or institutions
Doing this research did contribute to me being a lot more gloomy about the state of EA, but I think I do want to write this one up to make the knowledge more public, and allow people to poke flaws in it if possible.
To me this signals more values-based conflict, which makes it harder to find pareto-improving ways to co-operate with other groups
I do want to write something along the lines of "Alignment is a Political Philosophy Problem"
My takes on AI, and the problem of x-risk, have been in flux over the last 1.5 years, but they do seem to be more and more focused on the idea of power and politics, as opposed to finding a mythical "correct" utility function for a hypothesised superintelligence. Making TAI/AGI/ASI go well therefore falls in the reference class of "principal agent problem"/"public choice theory"/"social contract theory" rather than "timeless decision theory"/"coherent extrapolated volition". The latter two are poor answers to an incorrect framing of the question.
Writing that influenced me on this journey:
Tan Zhi Xuan's whole work, especially Beyond Preferences in AI Alignment
Joe Carlsmith's Otherness and control in the age of AGI sequence
Matthew Barnett's various posts on AI recently, especially viewing it as an "institutional design" problem
Nora Belrose's various posts on scepticism of the case for AI Safety, and even on Safety policy proposals conditional on the first case being right.[1]
The recent Gradual Disempowerment post is something along the lines of what I'm thinking of too
I also think this view helps explain the huge range of backlash that AI Safety received over SB1047 and after the awfully botched OpenAI board coup. They were both attempted exercises in political power, and the pushback often came criticising this instead of looking at the "object level" of risk arguments. I increasingly think that this is not an "irrational" response but a perfectly rational one, and "AI Safety" needs to pursue more co-operative strategies that credibly signal legitimacy.
I think the downvotes these got are, in retrospect, a poor sign for epistemic health
I don't think anyone wants or needs another "Why I'm leaving EA" post, but I suppose if people really wanted to hear it I could write it up. I'm not sure I have anything new or super insightful to share on the topic.
My previous attempt at predicting what I was going to write got 1/4, which ain't great.
This is partly planning fallacy, partly real life being a lot busier than expected and Forum writing being one of the first things to drop, and partly increasingly feeling gloom and disillusionment with EA and so not having the same motivation to write or contribute to the Forum as I did previously.
For the things that I am still thinking of writing, I'll add comments to this post separately, so votes and comments can be attributed to each idea individually.
Not to self-promote too much but I see a lot of similarities here with my earlier post, Gradient Descent as an analogy for Doing Good :)
I think they complement each other,[1] with yours emphasising the guidance of the "moral peak", and mine warning against going too straight and ignoring the ground underneath you giving way.
I think there is an underlying point that cluelessness wins over global consequentialism, which is practically unworkable, and that solid moral heuristics are a more effective way of doing good in a world with complex cluelessness.
Though you flipped the geometry for the more intuitive "reaching a peak" rather than the ML-traditional "descending a valley"
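(As a purely illustrative aside, and very much my own toy sketch rather than anything from either post: the shared picture is an optimiser that never sees the whole moral landscape and only gets noisy, local signals about which direction is "better", i.e. heuristics rather than a global map. The function names and numbers below are made up for illustration.)

```python
import random

def moral_value(x, y):
    # A toy "moral landscape": higher is better. In reality we never see
    # this function directly, which is the point of the analogy.
    return -(x - 3) ** 2 - (y + 1) ** 2

def noisy_local_gradient(f, x, y, eps=1e-4, noise=0.5):
    # Estimate the local slope from small probes ("moral heuristics"),
    # plus noise standing in for our cluelessness about consequences.
    dx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    dy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    return dx + random.gauss(0, noise), dy + random.gauss(0, noise)

def hill_climb(f, steps=200, lr=0.05):
    x, y = 0.0, 0.0
    for _ in range(steps):
        gx, gy = noisy_local_gradient(f, x, y)
        x, y = x + lr * gx, y + lr * gy  # ascent: "reaching a peak"
    return x, y

print(hill_climb(moral_value))  # ends up near the peak at (3, -1)
```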
I also think it's likely that SMA believes that for their target audience it would be more valuable to interact with AIM than with 80k or CEA, not necessarily for the 3 reasons you mention.
I mean, the reasoning behind this seems very close to #2, no? The target audience they're looking at is probably more interested in neartermism than AI/longtermism, and they don't think they can get much tractability working with the current EA ecosystem?
The underlying idea here is the Housing Theory of Everything.
A lossy compression of the idea is that if you fix the housing crisis in Western economies, you'll unlock positive outcomes across economic, social, and political metrics, which can then have a high positive impact.
A sketch, for example, might be that you want the UK government to do lots of great stuff in AI Safety. But UK state capacity in general might be completely borked until it sorts out its housing crisis.
Note: this was written kinda quickly, so it might be a bit less tactful than I would write if I had more time.
Making a quick reply here after binge-listening to three Epoch-related podcasts in the last week, and I basically think my original perspective was vindicated. It was kinda interesting to see which points were repeated or phrased a different way; would recommend if you're interested in the topic.
The initial podcast with Jaime, Ege, and Tamay. This clearly positions the Epoch brain trust as between traditional academia and the AI Safety community (AISC). tl;dr: academia has good models but doesn't take AI seriously, and AISC the opposite (from Epoch's PoV)
The "debate" between Matthew and Ege. This should have clued people in, because while full of good content, by the last hour/hour and a half it almost seemed to turn into "openly mocking and laughing" at AISC, or at least the traditional arguments. I also don't buy those arguments, but I feel like the reaction Matthew/Ege have shows that they just don't buy the root AISC claims.
The recent Dwarkesh podcast with Ege & Tamay. This is the best of the three, but probably also best listened to after the first two, since Dwarkesh actually pushes back on quite a few claims, which means Ege & Tamay flesh out their views more; a personal highlight was the discussion of what the reference class for AI Takeover actually means.
Basically, the Mechanize cofounders don't agree at all with "AI Safety Classic": I am very confident that they don't buy the arguments at all, that they don't identify with the community, and somewhat confident that they don't respect the community or its intellectual output that much.
Given that their views are: a) AI will be a big deal soon (~a few decades), b) returns to AI will be very large, c) Alignment concerns/AI risks are overrated, and d) other people/institutions aren't on the ball, then starting an AI start-up seems to make sense.
What is interesting to note, and something I might look into in the future, is just how much these differences in expectation of AI depend on differences in worldview, rather than differences in technical understanding of ML or of how the systems work on a technical level.
So why are people upset?
Maybe they thought the Epoch people were more part of the AISC than they actually were? Seems like the fault of the people who believed this, not Epoch or the Mechanize founders.
Maybe people are upset that Epoch was funded by OpenPhil, and this seems to have led to "AI acceleration"? I think that's plausible, but Epoch has still produced high-quality reports and information, which OP presumably wanted them to do. And I don't think equating EA with OP, or with anyone funded by OP, is a useful concept.
Maybe people are upset at any progress in AI capabilities. But that assumes that Mechanize will be successful in its aims, which is not guaranteed. It also seems to reify the concept of "capabilities" as one big thing, which I don't think makes sense. Making a better Stockfish, or a better AI for FromSoft bosses, does not increase x-risk, for instance.
Maybe people think that the AI Safety Classic arguments are just correct, and are therefore upset at people taking actions that don't follow from them. But then many actions would seem bad by this criterion all the time, so it's odd that this would provoke such a reaction. I also don't think EA should hang its hat on "AI Safety Classic" arguments being correct anyway.
Probably it's some mix of these. I personally remain not that upset because a) I didn't really class Epoch as "part of the community", b) I'm not really sure I'm "part of the community" either, and c) my views are at least somewhat similar to the Epoch set above, though maybe not as far in their direction, so I'm not as concerned on the object level either.