Hmm, true. I gave it the whole Greater Wrong page of comments; maybe it just didn’t quote from those for some reason.
fyi for anyone like me who doesn’t have lots of the backstory here and doesn’t want to read through Habryka’s extensive corpus of EAF writings, here is Claude 3.7 Sonnet’s summary based on the first page of comments Habryka links to.
Based on Habryka’s posts, I can provide a summary of his key disagreements with EA leadership and forum administrators that ultimately led to his decision to leave the community.
Key Disagreements
Leadership and Accountability: Habryka repeatedly expresses concern about what he sees as a “leaderless” EA community. He believes the community has shifted from being driven by independent intellectual contributors to being determined by “a closed-off set of leaders with little history of intellectual contributions.” He argues that almost everyone who was historically in leadership positions has stepped back and abdicated their roles.
Institutional Integrity: He criticizes EA organizations, particularly CEA (Centre for Effective Altruism), for prioritizing growth, prestige, and public image over intellectual integrity. In his posts, he describes personal experiences at CEA where they “deployed highly adversarial strategies” to maintain control over EA’s public image and meaning.
FTX Situation: Habryka was particularly critical of how EA leadership handled Sam Bankman-Fried (SBF) and FTX. He claims to have warned people about SBF’s reputation for dishonesty, but these warnings were not heeded. He criticizes Will MacAskill and others for their continued endorsement of SBF despite red flags, and was frustrated by the lack of transparency and open discussion after FTX’s collapse.
Risk-Aversion and PR Focus: He repeatedly criticizes what he perceives as excessive risk-aversion and PR-mindedness among EA organizations. He argues this approach prevents honest discussion of important issues and contributes to a culture of conformity.
Funding Centralization: Habryka expresses concern about EA funding being increasingly centralized through a single large foundation (likely referring to Open Philanthropy), arguing this concentration of resources creates unhealthy power dynamics.
Community Culture: He criticizes the shift in EA culture away from what he describes as “a thriving and independent intellectual community, open to ideas and leadership from any internet weirdo” toward something more institutional and conformist.
Failure to Create Change: Habryka states that he no longer sees “a way for arguments, or data, or perspectives explained on this forum to affect change in what actually happens with the extended EA community,” particularly in domains like AI safety research and community governance.
His departure post suggests a deep disillusionment with the direction of the EA community, expressing that while many of the principles of EA remain important, he believes “EA at large is causing large harm for the world” with “no leadership or accountability in-place to fix it.” He recommends others avoid posting on the EA Forum as well, directing them to alternatives like LessWrong.
I think I would have found this more interesting/informative if the scenarios (or other key parts of the analysis) came with quantitative forecasts. I realise of course this is hard, but without this I feel like we are left with many things being ‘plausible’. And then do seven “plausible”s sum to make a “likely”? Hard to say! That said, I think this could be a useful intro to arguments for short timelines to people without much familiarity with this discourse.
Good points, I agree with this, trends 1 and 3 seem especially important to me. As you note though the competitive (and safety) reasons for secrecy and research automation probably dominate.
Another thing that current trends in AI progress mean, though, is that it seems (far) less likely that the first AGIs will be brain emulations. This in turn makes it less likely AIs will be moral patients (I think). Which I am inclined to think is good, at least until we are wise and careful enough to create flourishing digital minds.
Two quibbles:
“Given the amount of money invested in the leading companies, investors are likely to want to take great precautions to prevent the theft of their most valuable ideas.” This would be nice, but companies are generally only incentivised to prevent low-resourced actors from stealing their models. Putting in enough effort to make it hard for sophisticated attackers (e.g. governments) to steal the models is a far heavier lift, and probably not something AI companies will do of their own accord. (Possibly you already agree with this though.)
“The power of transformer-based LLMs was discovered collectively by a number of researchers working at different companies.” I thought it was just Google researchers who invented the Transformer? It is a bit surprising they published it, I suppose they just didn’t realise how transformative it would be, and there was a culture of openness in the AI research community.
My sense is that of the many EAs who have taken EtG jobs quite a few have remained fairly value-aligned? I don’t have any data on this and am just going on vibes, but I would guess significantly more than 10%. Which is some reason to think the same would be the case for AI companies. Though plausibly the finance company’s values are only orthogonal to EA, while the AI company’s values (or at least plans) might be more directly opposed.
The comment that Ajeya is replying to is this one from Ryan, who says his timelines are roughly the geometric mean of Ajeya’s and Daniel’s original views in the post. That is sqrt(4*13) = 7.2 years from the time of the post, so roughly 6 years from now.
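Spelling out that arithmetic as a quick check (assuming the 4- and 13-year figures are the two original medians being averaged):

$$\text{geometric mean} = \sqrt{4 \times 13} = \sqrt{52} \approx 7.2 \text{ years}$$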
As Josh says, the timelines in the original post were answering the question “Median Estimate for when 99% of currently fully remote jobs will be automatable”.
So I think it was a fair summary of Ajeya’s comment.
NYT article on challenge trials
There is some discussion of strategy 4 on LW at the moment: https://www.lesswrong.com/posts/JotRZdWyAGnhjRAHt/tail-sp-500-call-options
Good point, I agree that ideally that would be the case, but my impression (from the outside) is that OP is somewhat capacity-constrained, especially for technical AI grantmaking? Which I think would mean that if non-OP people feel they can make useful grants now, that could still be more valuable, given the likelihood that OP scales up and does more AI grantmaking in coming years. But all that is speculation; I haven’t thought carefully about the value of donations over time, beyond deciding, personally, not to save all my donations for later.
I suppose it depends whether the counterfactual is the two parties to the bet donate the 10k to their preferred causes now, or donate the 10k inflation adjusted in 2029, or don’t donate it at all. Insofar as we think donations now are better (especially for someone who has short AI timelines) there might be a big difference between the value of money now vs the value of money after (hypothetically) winning the bet.
Altman on the board, AGI, and superintelligence
Good on you all!
Does anyone know whether CE/AIM has looked into this? If not, it seems like they should. Great that you guys have already started something, so maybe there is no need to go via their incubation program, but conversely they might still have a significant value add in terms of networks + advice + funding. I’m not sure who the relevant CE person to ask would be.
Thanks for writing this up. I just looked back at the results of a generic blood test I did earlier in the year (it measured many different things), and I had a creatinine value of 0.82 (the reference range was given as 0.7-1.3).
I haven’t looked through the literature you cited; do you happen to know whether, if I am already in the healthy range, it is still helpful to supplement, or whether it is bad to go over 1.3 if I do supplement?
I agree that 5 (accepting OP-dominated balance sheets) seems like the best solution.
I think a different but related point is that an org that can fundraise outside of EA is that much more valuable than an org producing identical outputs but fundraising from within EA. The big example of this of course is GiveWell—using EA principles but getting money from a far wider set of people. Raising $1 from OP (and even more so other EA sources) has pretty direct opportunity costs for other high-impact projects, but raising $1 from someone else mainly trades off against that donor’s consumption or their other donations which we (putatively) think are a lot less impactful.
EA Australia and LTFF. Reflections at https://forum.effectivealtruism.org/posts/7FufeFhDE7Fp9i3fr/five-years-of-donating
Five years of donating
I found this a really clear and useful explanation (though I already had a decent idea how NAO worked)!
If ever you want to reach a broader audience, I think making an animated video based on this content, maybe with the help of Rational Animations or Kurzgesagt, would work well.
Assuming a key inefficiency of the nasal swabs method is the labour cost of people collecting them, is the process straightforward enough that you could just set up an unmanned sample-collection point in a busy building somewhere, where people can swab themselves and drop the sample into a chute or box? Hopefully post-Covid people are fairly familiar with nasal swabbing technique.
Thanks for sharing the raw data!
Interestingly, of the 44 people who ranked every charity, the candidates with the most last-placed votes were: PauseAI = 10, VidaPlena = ARMoR = 5, Whylome = 4, SWP = AMF = Arthropoda = 3, … . This is mostly just noise, I’m guessing, except perhaps that it is some evidence that PauseAI is unusually polarising and that a surprisingly large minority of people think it is especially bad (net negative, perhaps).
Also here is the distribution of how many candidates people ranked:
I am a bit surprised there were so many people who voted for none of the winning charities—I would have thought most people would have some preference between the top few candidates, and that if their favourite charity wasn’t going to win they would prefer to still choose between the main contenders. Maybe people just voted once initially and then didn’t update their vote based on which candidates had a chance of winning.
I think the main reason to update one’s vote based on the results is if you voted number 1 for a charity that is in first or second place, while a charity you also quite like is e.g. fourth or fifth; then strategically switching to rank the latter first would make sense. But this was not the case for me.
Overall my guess is that the live vote tallies add to the excitement but don’t actually contribute much epistemically?
Pablo and I were trying to summarise the top page of Habryka’s comments that he linked to (~13k words), not this departure post itself.