Thanks for pointing to these! I had forgotten about them or hadn’t seen them in the first place — all are very relevant.
[Question] Are too many young, highly-engaged longtermist EAs doing movement-building?
What We Owe the Future is an NYT bestseller
Selfish Reasons to Move to DC
[Question] Why is Operations no longer an 80K Priority Path?
Much of the value of intro calls comes from making further introductions
Really glad to hear that! Stereotypes often have a grain of truth and I don’t want to sugarcoat things with my post (the comments on this post are definitely worth reading to get a more complete picture). But if you’d be open to a move I really encourage planning a visit: even just a well-planned weekend with lots of one-on-ones could give you a lot of information.
Feel free to DM me if you decide to come for a visit and I might be able to help with making connections.
Huh, I’m surprised you’re planning to do further degrees after the Schwarzman: that undercuts my point above. If the Schwarzman isn’t viewed by employers as a terminal degree, then I’d view that as a major downside of the program. The opportunity cost of a year of full-time work is high.
Thanks for this post! Schwarzman seems especially promising for folks interested in policy, where a grad degree is often needed and where China expertise is valued.
I think it’s worth emphasizing that these degrees only take one year. This is a BIG advantage relative to e.g. law school, an MBA, and even many/most MPP programs. If you think education (particularly non-STEM grad school) is mostly about signaling rather than learning, then the opportunity cost of an extra one or two years of schooling is really significant. Schwarzman looks like a great way to get a shiny grad credential in a very reasonable amount of time.
Hi, Vilhelm, thanks for these thoughts! Some quick responses to just a few points:
Fwiw, in Sweden, my 50% confidence interval for the share of highly-engaged longtermists under 25 doing movement-building is 20-35%. However, I don’t think I am as concerned as you seem to be about that number.
20-35% isn’t all that concerning to me. I’d be more concerned if it were in the ballpark of 40% or more. That said, even 20-35% does feel a bit high to me if we’re talking about college graduates working full-time on community-building (a higher percentage might make sense if we’re counting college students who are just spending a fraction of their time on community-building).
My experience as a community builder in Sweden trying to help young longtermists is that there aren’t that many better opportunities out there right now. (Note that this might be very different in other contexts.)
Agreed that the counterfactual may be significantly worse for those based in Sweden (or most other countries besides the US and UK) who are unwilling to move to EA hubs. I should have flagged that I’m writing this as someone based in the US where I see lots of alternatives to community building. With that said, it’s not totally clear to me which direction this points in: maybe a lack of opportunities to do object-level work in Sweden suggests the need for more people to go out and create such opportunities, rather than doing further community-building.
Data suggest people leave their community building roles rather quickly, indicating that people do pivot when finding a better fit
Yeah, this matches my experience—I see a lot of young EAs doing community building for a year or two post-grad and then moving on to object-level work. This seems great when it’s a case of someone thinking community-building is their highest-upside option, testing their fit, and then moving on (presumably because it hasn’t gone super well). I worry, though, that in some cases folks don’t view community-building as a career path they’re committed to at all, and instead fall into it because it’s the “path of least resistance.”
To be clear, I’m incredibly grateful to community builders like you, and don’t intend to devalue the work you do—I genuinely think community-building is one of the most impactful career paths, and a significant fraction of EAs should pursue it (particularly those who—like you, it sounds like—have great personal fit for the work and see it as their highest-upside long-term career path).
Great post!
From Scenario 1, in which alignment is easy:
Here you seem to be imagining that technical AI alignment turns out to be easy, but you don’t discuss the political/governance problem of ensuring that the AI (or AIs) is aligned with the right goals.
E.g. what if the first aligned transformative AI systems are built by bad actors? What if they’re built by well-intentioned actors who nevertheless have no idea what to do with the aligned TAI(s) they’ve developed? (My impression is that we don’t currently have much idea of what a lab should be looking to do in the case where they succeed in technical alignment. Maybe the aligned system could help them decide what to do, but I’m pretty nervous about counting on that.)
From my perspective a full success story should include answers to these questions.