I’m a researcher at Forethought; before that, I ran the non-engineering side of the EA Forum (this platform), ran the EA Newsletter, and worked on some other content-related tasks at CEA. [More about the Forum/CEA Online job.]
Background
I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I later switched to the Online Team. In the past, I’ve also done some (math) research and worked at Canada/USA Mathcamp.
Yeah, I guess I don’t want to say that it’d be better if the team had people who are (already) strongly attached to various specific perspectives (like the “AI as a normal technology” worldview—maybe especially that one?[1]). And I agree that having shared foundations is useful, and that constantly relitigating foundational issues would be frustrating. I also really do think the points I listed under “who I think would be a good fit”—willingness to try on and ditch conceptual models, high openness without losing track of taste, and flexibility—matter, and probably clash somewhat with central examples of “person attached to a specific perspective.”
*(Rambly comment, written quickly, sorry!)*
But in my opinion we should not all (always) be going off of some central AI-safety-style worldview. And I think that some of the divergence I would like to see more of could go pretty deep—e.g. possibly somewhere in the grey area between what you listed as “basic prerequisites” and “particular topics like AI timelines...”. (As one example, I think accepting the terminology, or the way people in this space normally talk about things like “alignment” or “an AI,” might bake in a bunch of assumptions that I would like Forethought’s work to not always rely on.)
One way to get closer to that might be to defer less, or to defer more carefully. Another is to have a team that includes people who better understand rarer-in-this-space perspectives that diverge earlier on (or people who are by default inclined to think about this stuff in ways that differ from others’ defaults), as this could help us start noticing assumptions we didn’t even realize we were making, translate between frames, etc.
So maybe my view is that I’d like (1) more ~independent worldview formation/exploration to be going on, and (2) the (soft) deferral that does happen (since some deferral feels basically inevitable) to be less overlapping.
(I expect we don’t really disagree, but still hope this helps to clarify things. And also, people at Forethought might still disagree with me.)
In particular:
If this perspective involves a strong belief that AI will not change the world much, then IMO that’s just one of the (few?) things that are ~fully out of scope for Forethought. I.e. my guess is that projects with that as a foundational assumption wouldn’t really make much sense to do here. (Although IMO even if, say, I believed that this conclusion was likely right, I might nevertheless be a good fit for Forethought if I were willing to view my work as a bet on the worlds in which AI is transformative.)
But I don’t really remember what the “AI as a normal technology” position is, and could imagine that it’s somewhat different—e.g. more in the direction of “automation is the wrong frame for understanding the most likely scenarios,” or something like that. In that case my take would be that someone exploring this at Forethought could make sense (I haven’t thought about this one much), and being willing to consider this perspective at least seems good, but I’d still be less excited about people who’d come with the explicit goal of pursuing that worldview and no intention of updating.
--
(Obviously if the “AI will not be a big deal” view is correct, I’d want us to be able to come to that conclusion—and change Forethought’s mission or something. So I wouldn’t e.g. avoid interacting with this view or its proponents, and I agree that e.g. inviting people with this POV as visitors could be great.)