[Question] What evidence is there for (or against) theories about the extent to which effective altruist interests motivated the ouster of Sam Altman last year?

There was of course initially much speculation that effective altruism was solely responsible. That was a common accusation from some in the effective accelerationism community (https://en.wikipedia.org/wiki/Effective_accelerationism), a.k.a. “e/acc.” Many of them may remain forever convinced of it, regardless of whatever more accurate conclusion is eventually substantiated.

It’s nonetheless important for the EA community to try to understand how some of those among its ranks, other than the former OpenAI board members, may have shaped the course of events leading up to and following Sam Altman’s firing as CEO of OpenAI. Obviously, the ethos of EA in relation to AI safety played a role. The choices Helen Toner and Tasha McCauley made in their various professional roles related to AI safety/risk would naturally have been influenced by their prior affinity for EA and by views on AI safety common in the community. The same is of course true of Holden Karnofsky, also a former OpenAI board member.

Yet even now that the infamous course of events has concluded, there is still widespread doubt that the board directors who fired Sam Altman acted alone. Public speculation persists that they must have acted, at least in part, at the behest or direction of other influential figures in or around the EA community. Almost two months on from Sam Altman’s firing, it’s worth checking whether more information, beyond the usual speculation on social media, has surfaced as to the definite motives for that move. That of course includes evidence substantiating motives not directly related to AI safety.

In the last two months, from within the EA and rationality communities themselves, there has been surprisingly little review of such potential evidence, or re-evaluation of the still incomplete narrative that had taken shape by December. There appears to have been one relevant post on the subject on each of the EA Forum and LessWrong.

On November 22 of last year, effective altruist Garrison Lovely published a Substack post, largely in response to a report from Semafor, challenging the notion that EA as a philosophy and movement is squarely and singularly to blame (https://garrisonlovely.substack.com/p/is-effective-altruism-really-to-blame). Specifically, Garrison points out that:

  1. OpenAI board member Adam D’Angelo, while a close colleague of Dustin Moskovitz, doesn’t publicly or personally appear to have a substantive relationship with EA.

  2. The fact that many AI researchers, like Ilya Sutskever, share the same concerns about existential AI safety/risk as many EAs doesn’t entail that they all got their views from EA as a philosophy/movement. AI existential risk is a prioritized cause within EA, but it’s also a distinct field whose ideas began spreading before EA was founded as a movement.

On December 5th of last year, a user going by the handle ‘mrtreasure’ shared on LessWrong a defense of the choices made by Adam D’Angelo, Helen Toner, and Tasha McCauley, both in initially firing Sam Altman and during the aftermath and resolution (https://www.lesswrong.com/posts/nfsmEM93jRqzQ5nhf/in-defence-of-helen-toner-adam-d-angelo-and-tasha-mccauley-1).

It was originally posted anonymously on Pastebin. The author describes himself as not particularly involved in EA, yet demonstrates an understanding of the rationality, AI safety, and EA communities on their own terms (as opposed to treating any of them as, say, a communist conspiracy meticulously planned for a decade from within Big Tech, under the noses of the industry’s powerbrokers). The post presents a rationale for how OpenAI’s then-directors acted of their own accord as individuals. It also serves as something of an open letter to the EA and AI safety communities, encouraging them not to simply denounce the actions taken.

Together, these posts offer a characterization of the course of events that is sympathetic to EA and AI safety, without presuming interference from anyone else in the community at large. It’s a far more parsimonious explanation than the potentially baseless conspiracy theories that may be the only alternative narrative credulous onlookers, none the wiser, have been exposed to. A quite possibly still confused public audience, previously unfamiliar with the parties involved or the surrounding communities that have claimed a stake in the outcome (e.g., EA, e/acc, etc.), might find comfort in it.

For perhaps most effective altruists, myself included, it seems like a common-sense story of how the events played out. Some theories proposed by those on other sides of the conflict have frankly been ridiculous. Yet however comfortable the simpler stories we’ve been telling ourselves might make us feel, that doesn’t change the fact that there are major holes and unanswered questions in our shared understanding/model of how it all transpired.

That raises the question: what overlooked clues, if any, are now out there as to what exactly motivated Sam Altman’s ouster?

Crossposted to LessWrong.