A while ago -- 2017, maybe? -- I remember attending EA Global in San Francisco, where Will MacAskill gave an address (either the keynote or the closing talk) on the theme "Keep EA Weird". Do people still support this ideal? I notice that GoodVentures recently stopped funding some "weirder" cause areas, for instance.
It seems at least possible to me that, as EA has gotten bigger, some of the "weird" stuff has been pushed further to the margins, and that this is correct. But I'm not sure I've seen a detailed discussion of this, at least when it comes to cause areas rather than debates about polyamory and the like.
Davis_Kingsley
I really don't think that one's ability to give talks at EAG is centrally based on whether Emile Torres has denounced you on Twitter or whatever. As I understand it, Torres has gone after a long list of prominent EA figures for various reasons (including Nick Bostrom, Will, Toby, etc.) who continue to be quite involved.
(Disclaimer: I worked in events for CEA some years ago but was not involved with managing the admissions process for EAG, selecting EAG keynote speakers, etc. -- indeed I am not even sure who all is on that team at present.)
Thanks for the edits!
I did attend LessOnline for a day, but not Summer Camp or Manifest. While there I didn't notice the "race science" angle you mention, but I was only present for a day and spent much of that time presenting classes/sessions on rationality topics and then talking to people afterwards, so you probably have a broader sense of what was present during "the events as a whole" than I do.
This is pretty concerning to me (as someone who didn't attend Manifest but very well might have under other circumstances). I knew Hanania had been at Manifest before and would perhaps be there again, but didn't realize the event overall had this level of "race science" presence. I hope the Manifest organizers take action to change that for future events; more generally, some of Manifold's recent decisions have seemed rather too "edgy"/sensationalist/attention-seeking (not sure of the right word here...) for my taste.
However, this post also rubs me the wrong way a bit because it seems to conflate a bunch of things that I don’t think are appropriate.
To name some quick examples that I don't intend to get into in detail:
- I think the Guardian article was seriously flawed well beyond any issues with Manifest
- I think Vassar's group has been broadly separate from the rationalists for many years now
- I don't much agree with your characterization of rationalists vs. EAs
The other thing that really gets me about this post, though, is your conclusion:
But who knows, maybe next time half the people there will consist of Republicans and the Thielosphere.
I think conflating Republicans, the "Thielosphere", and (implicitly) these "scientific racists" is really bizarre and extreme.
My understanding is that surveys of EA and adjacent communities generally indicate that EA has a major political skew towards liberal/progressive beliefs. [1] I consider this a serious weakness and potential failure mode for the movement: if we end up becoming just another political thing, it could really curtail EA's potential. The idea of "Republicans" being conflated with this more extreme stuff strikes me as a bad sign.
Quite frankly if someone told me that an EA Global next year was half Republicans/conservative-leaning people, I would consider that likely a major success in terms of diversification and avoiding political skew, and it would significantly increase my optimism for EA as a whole. It seems bizarre to use that sort of thing (admittedly with a far less central event than EA Global) as a “failure condition” here. (And I’m not even a Republican!)
[1] See for instance this post, which found >75% of EAs identified as center-left or left and less than 3% identified as center-right or right! I believe SSC/ACX community surveys also tend to show a strongly left-leaning readership, though with a less dramatic slant.
Yes, to be clear I’m not criticizing the initial decision to run but rather the dubious impact estimates and calls to action towards the end of that campaign.
I find myself quite skeptical of this analysis following the dramatically failed predictions (and more direct calls to action) regarding the tractability of the Carrick Flynn campaign in 2022, which now seems like a major blunder. If anything I think there’s a stronger case for that sort of thing than there is for national presidential elections...
Thanks for the tip! Just donated my mana to GiveDirectly.
Congratulations Niel! Best of luck with the future of 80k!
Whatever happened to AppliedDivinityStudies, anyway? It seemed to be a promising blog adjacent to the community, but I just checked back to see the more recent posts and it appears to have stopped posting about a year ago.
In general I think “TESCREAL” is a bad term that conflates a bunch of different things in order to attack them all as a group and I’d prefer not to see it used.
I consider this sort of “oh, I have a take but you guys aren’t good enough for it” type perspective deeply inappropriate for the Forum—and I say that as someone who is considerably less “anti-Anthropic” than some of the comments here.
I interpret it as broadly the latter based on the further statements in the Twitter thread, though I could well be wrong.
Amazon to invest up to $4 billion in Anthropic
Congrats Ben, and count me in as another voice in favor of this type of humor on the Forum!
Yes, to be clear I don’t think Oli was necessarily claiming that—I was replying to Jonas here, who listed Tara as one of “the Leverage people” in his own comment.
Wait, was Tara a Leverage person? Kerry and Larissa work for Leverage now and Tyler was affiliated in the past, but I wasn’t under the impression Tara was particularly involved with Leverage—though I could of course be wrong!
A while ago I remember seeing some discussion of EA analysis of Ukraine relief following the Russian invasion—perhaps some EAs from Poland were involved? Did this ever get comprehensively written up anywhere?
Davis_Kingsley’s Quick takes
I rather suspect people at Anthropic are already thinking about considerations like this when deciding what to do, and I'm not sure an anonymous post is needed here.
It's not precisely OpenPhil, but GoodVentures' recent surprise withdrawal from several cause areas, and its refusal even to say publicly which areas it was withdrawing from, comes to mind...