This question is about how much I should defer to EA on which issues matter most. Is EA’s turn to longtermism, in itself, a good reason for me to have turned to longtermism?
One story, the most flattering to EA, goes like this:
“EA is unusually good at ‘epistemics’ / thinking about things, because of its culture and/or who it selects for; and also the community isn’t corrupted too badly by random founder effects and information cascades; and so the best ideas gradually won out among those who were well-known for being reasonable, and who spent tons of time thinking about the ideas. (E.g. Toby Ord convincing Will MacAskill, and a bit later Holden Karnofsky joining them.)”
Of course, there could be other stories to be told, to do with ‘who worked in the same building as whom’ and ‘what memes were rife in the populations that EA targeted outreach to’ and ‘what random contingent things happened, e.g. a big funder flipping from global health to animals and creating 10 new institutes’ and ‘who was on Felicifia back in the day’ and ‘did anyone actively try to steer EA this way’. Ideally, I’d like to run a natural experiment where we go back in time to 2008, have MacAskill and Ord and Bostrom all work in different countries rather than all in Oxford, and see what changes. (Possibly Peter Singer is a real-life instance of this natural experiment, akin to how Australia’s mammals and birds evolved in isolation from the rest of the world after the continent separated from Gondwana tens of millions of years ago. Not that Peter is that old.)
But maybe looking at leadership is the wrong way around, and it’s the rank-and-file members who led the charge. I’d be very interested to know if so. (One thing I could look at is ‘how much did the sentiment on this forum lag or lead the messaging from the big orgs?’)
I understand EA had x-risk elements from the very beginning (e.g. Toby Ord), but it was only in the late 2010s that x-risk came to be the dominant strain. Most of us only joined the movement while this longtermist turn was already well underway. (I took the GWWC pledge in 2014 but checked out of EA for a few years afterwards, returning in 2017 to find x-risk a lot more dominant and the movement two to three times bigger.) So we have no direct experience of the shift; we can only ask our elders how it happened, and thence decide ‘to what degree was the shift caused by stuff that seems correlated with believing true things?’. It would be a shame if anecdata about the shift were lost to cultural memory, hence this question.
https://www.openphilanthropy.org/research/three-key-issues-ive-changed-my-mind-about/
Came here to cite the same thing! :)
Note that Dustin Moskovitz says he’s not a longtermist, and “Holden isn’t even much of a longtermist.”
So my intuition is that the two most important updates EA has undergone are “it’s not that implausible that par-human AI is coming in the next couple of decades” and “the world is in fact dropping the ball on this quite badly, in the sense that maybe alignment isn’t super hard, but to a first approximation no one in the field has checked.”
(Which is both an effect and a cause of updates like “maybe we can figure stuff out in spaces where the data is more indirect and hard-to-interpret”, “EA should be weirder”, “EA should focus more on research and intellectual work and technical work”, etc.)
But I work in AI x-risk and naturally pay more attention to that stuff, so maybe I’m missing other similarly-deep updates that have occurred. Like, maybe there was a big update at some point about the importance of biosecurity? My uninformed guess is that if we’d surveyed future EA leaders in 2007, they already would have been on board with making biosecurity a top global priority (if there are tractable ways to influence it), whereas I think this is a lot less true for AI alignment.
My sense is it was driven largely by a perception of faster-than-expected progress in deep learning along with (per Carl’s comment) a handful of key people prominently becoming more concerned with it.
There might also just have been a natural progression. Toby Ord was always concerned about it, and 80,000 Hours made it a focus from very early on. At one relatively early point I had the impression that they treated shifting someone from almost any career path into AI-related work as their primary metric of success. I couldn’t justify that impression now, and I suspect it’s an unfair one, but the fact that someone could form it at all, well before the ‘longtermist turn’, seems worth noting as an anecdote.
Speaking from my geographically distant perspective: I definitely saw it as a leader-led shift rather than coming from the rank-and-file. There was always a minority of rank-and-file coming from Less Wrong who saw AI risk as supremely important, but my impression was that this position was disproportionately common in the (then) Centre for Effective Altruism, and there was occasional chatter on Facebook (circa 2014?) that some people there saw the global poverty cause as a way to funnel people towards AI risk.
I think the AI-risk faction started to assert itself more strongly in EA from about 2015, successfully persuading other leading figures one by one over the following years (e.g. Holden in 2016, as linked by Carl). But by then I wasn’t following EA closely, and I don’t have a good sense of the timeline.