Yeah, I think this is a quite important point that’s sort-of captured by the other paths you mention, but (in hindsight) not sufficiently highlighted/emphasised.
I think another possible example is Allan Dafoe—I don’t know his full “origin story”, and it’s possible he was already very EA-aligned as a junior researcher, but I think his actual topic selection and who he worked with switched quite a lot (and in an EA-aligned direction) after he was already fairly senior. And that seniority allowed him to play a key role in GovAI, which was (in my view) extremely valuable.
One place where I kind-of nod to the path you mention is:
Increasing and/or improving research by non-EAs on high-priority topics [...]
In addition to improving the pipeline for EA-aligned research produced by non-EAs, this might also improve the pipeline for EA-aligned researchers, such as by:
Causing longer-term shifts in the views of some of the non-EAs reached
Making it easier for EAs to use non-EA options for research training, credentials, etc. (see my next post)
I don’t think Allan’s really an example of this.

https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/

I think I’ve always been interested in computers and artificial intelligence. I followed Kasparov and Deep Blue, and it was actually Ray Kurzweil’s Age of Spiritual Machines, which is an old book, 2001 … It had this really compelling graph. It’s sort of cheesy, and it involves a lot of simplifications, but in short, it shows basically Moore’s Law at work and extrapolated ruthlessly into the future. Then, on the second y-axis, it shows the biological equivalent of computing capacity of the machine. It shows a dragonfly and then, I don’t know, a primate, and then a human, and then all humans.
Now, that correspondence is hugely problematic. There’s lots we could say about why that’s not a sensible thing to do, but what I think it did communicate was that the likely extrapolation of trends is such that you are going to have very powerful computers within a hundred years. Who knows exactly what that means and whether, or in what sense, it’s human level or whatnot, but the fact that this trend is coming on the timescale it was, was very compelling to me. But at the time, I thought Kurzweil’s projection of the social dynamics of how extremely advanced AI would play out was unlikely. It’s very optimistic and utopian. I actually looked for a way to study this all through my undergrad. I took courses. I taught courses on technology and society, and I thought about going into science writing.
And I started a PhD program in science and technology studies at Cornell University, which sounded vague and general enough that I could study AI and humanity, but it turns out science and technology studies, especially at Cornell, means more a social constructivist approach to science and technology.
. . .
Okay. Anyhow, I went into political science because … Actually, I initially wanted to study AI in something, and I was going to look at the labor implications of AI. Then, I became distracted, as it were, by great power politics and great power peace and war. It touched on the existential risk dimensions that I didn’t have the words for yet, but which were sort of a driving interest of mine. It’s strategic, which is interesting. Anyhow, that’s what I did my PhD on, and topics related to that, and then my early career at Yale.
I should say during all this time, I was still fascinated by AI. At social events or having a chat with a friend, I would often turn to AI and the future of humanity and often conclude a conversation by saying, “But don’t worry, we still have time because machines are still worse than humans at Go.” Right? Here is a game that’s well defined. It’s perfect information, two players, zero-sum. The fact that a machine can’t beat us at Go means we have some time before they’re writing better poems than us, before they’re making better investments than us, before they’re leading countries.
Well, in 2016, DeepMind revealed AlphaGo, and it was almost as if this canary in the coal mine that Go was to me, sort of deep in my subconscious, keeled over and died. That sort of activated me. I realized that, for a long time, I’d said that post-tenure I would start working on AI. Then, with that, I realized that we couldn’t wait. I actually reached out to Nick Bostrom at the Future of Humanity Institute and began conversations and collaboration with them. It’s been exciting, and there’s been lots of work to do that we’ve been busy with ever since.
I think that quote makes it sound like Allan already had a similar worldview and cause prioritisation to EA, but wasn’t aware of or engaged with the EA community (though he doesn’t explicitly say that), and so he still seems like sort-of an example.
It also sounds like he wasn’t actively and individually reached out to by a person from the EA community, but rather just found relevant resources himself and then reached out (to Bostrom). But that still seems like it fits the sort of thing Linch is talking about—in this case, maybe the “intervention (for improving the EA-aligned research pipeline)” was something like Bostrom’s public writing and talks, which gave Allan a window into this community, which he then joined. And that seems like a good example of a field building intervention?
(But that’s just going from that quote and my vague knowledge of Allan.)
Fair enough. I guess it just depends on exactly how broad/narrow a category Linch was gesturing at.

I think the crux for me is to what extent Allan’s involvement in EAish AI governance was overdetermined. If, in a world with 75% less public writing on transformative AI of Bostrom’s calibre, Allan would still be involved in EAish AI governance, then this would point against the usefulness of this step in the pipeline (at least with the Allan anecdote).

I roughly agree, though I would also note that the step could be useful by merely speeding up an overdetermined career move, e.g. if Allan would’ve ended up doing similar stuff anyway but only 5 years later.

Yes, I agree that speeding up career moves is useful.