I think well-roundedness may be unfavourable for some EAs (mainly those in academia), but not for the majority of EAs.
My experience from observing some of my most successful friends in non-EA orgs (policy roles, consulting, PE, etc.) is that well-roundedness is a good predictor of success. Of course, this is no scientific proof, but you can imagine that abilities such as quickly understanding social norms, showing grit, and manoeuvring in complex social (not purely intellectual) environments help you in those careers. These are things that you (partially) practice and learn in sports, board roles, and some types of work you can do in college.
If you believe that you need a group of “insiders” in government and other influential organisations to drive change, you should prioritise well-roundedness for a majority as well. I also think the skills above are incredibly useful for successfully lobbying the organisations we are interested in, something a lot of EA orgs eventually focus on as well.
Amazing update, thanks. Very much interested in your fiscal sponsorship model; is it possible to indicate interest already?
Curious to hear what your current intuitions and latest updates in beliefs are on:
What the most promising / impactful path is for most private sector professionals from those described above
Where HIP could add most value for professionals looking at this map and if you eventually want to focus on a subset of activities
The most pressing bottlenecks that prevent professionals from having / maximising impact
Thanks for this clear write-up; like many others, I definitely share some of your worries. I liked that you wrote that the extra influx of money could make the CB position accessible to people from different socioeconomic backgrounds, since this point seems to be a bit neglected in EA discussions.
I think it is true for many other impactful career paths that decent wages and/or some financial security (e.g. smoothing career transitions with stipends) could help to widen the pool of potential applicants, e.g. to more people from less fortunate socioeconomic backgrounds. Don’t forget that many people in the lower and lower-middle income classes are raised with the idea that it is important to take care of your own financial security. I have plenty of anecdotes from people in that group who didn’t pursue an EA career in the past because the wage gap and the worries about financial insecurity were just too large. I see multiple advantages coming from widening the pool to people from lower / lower-middle socioeconomic classes:
Given that there is also a lot of talent in lower / lower-middle socioeconomic classes, you will finally be able to attract more of them. This will increase the overall talent level in the community.
It could make the EA community less “elitist”, which has many instrumental advantages as well, e.g. on the public perception. In my collaborations with third parties outside of the EA movement, I often receive questions on TFG’s / EA’s stance on Diversity, Equity, and Inclusion. Having a less elitist movement would make it easier to collaborate with parties outside of the movement.
Diversity in terms of backgrounds could lead to a larger diversity of thought and this could potentially help us find new cause areas or improve our understanding of causes like poverty.
Hi Tessa, although biorisks can be included in risks coming from high-priority emerging technologies, we decided for this round to focus on AI / cybersecurity risks for placements and therefore also for our training content.
After the program we will re-evaluate and possibly re-run it, including expansion to other areas (such as biorisk). We will announce this on the Forum; feel free to subscribe to our newsletter to receive updates.
Hi Aryeh, really interested in this as well. Can you link me to any literature, experts, videos, software, etc. that you deem valuable from DA?
It would be really useful for future training programs from Training For Good!
Wow! Spot on, Adam. I wanted to respond to this question, but there’s no need anymore after reading this.
Does membership of a political party increase the odds of landing a traineeship / internship with an MEP, or is it even a requirement?
Thanks for this clear write-up. I will include this post in the content of Training For Good’s Impactful Policy Careers workshop. Are you open to 1-on-1s with EAs interested in this career path? Feel free to respond in a pm.
Great idea. At TFG we have similar thoughts and are currently researching whether we should run a program like this and, if so, the best way to run it. Would love to get input from people on this.
Awesome topic! Curious to read book reviews from people that read it.
Great idea. At TFG we have similar thoughts and are currently researching the best way to run a program like this. Feel free to PM me to provide input.
Hi Chris! We run this on a recurring basis at Training For Good! We have already had a few dozen people on the program and are currently measuring the impact.
Interesting thoughts. Apart from the sections finm mentioned, this one stood out to me as well:
status engineering—redirecting social status towards productive ends (for instance on Elon Musk making engineers high status)
I think this is something the EA community is doing already and maybe could/should do even more. Many of my smartest (non-EA) friends from college work in rent-seeking sectors or sectors with neutral production value (not E2G). This seems like an incredible waste of resources, since they could also work on the most pressing problems.
One interesting question could be: are there tractable ways to do status engineering with the large group of talented non-EAs? I think this could be worthwhile, because obviously not all incredibly smart people are part of, or want to be part of, the EA community.
Thanks a lot, this looks like a great resource. I think this would add a lot of value: properly evaluating existing policy ideas to identify and promote the set of high-quality ones.
It would be really interesting to see how different experts within the EA community rank the ideas within the same category (e.g. AI) on certain criteria (e.g. impact, tractability, neglectedness, though there are probably better criteria). Or to enrich this data with the group that is probably best placed to push for certain reforms (e.g. American civil servants, people who engage in party politics, etc.).
This would make the database actionable to the EA community and thereby even more valuable.
If you believe, however, that the EU will become irrelevant (argument 5 against), all policy careers for EAs in mainland Europe suddenly become quite unappealing. This makes me think: if you believe the EU market and political environment favour AGI safety (argument 4 in favour), shouldn’t it be a priority for European EAs to keep the EU a relevant political force?
There is one argument I really want to back, while also providing a different angle: “Growing the political capital of AGI-concerned people”.
I think that even if you believe there are substantial odds that the EU won’t play an important role in regulating AGI, having political capital could still be useful for other (tech-related) topics. Quite often there is a “halo effect” attached to being perceived as a tech expert in government: if you are perceived as a tech expert because you know a lot about AI, people will also perceive you as an expert on other technologies (where the EU might be more relevant).
This is also one of the reasons I advise people to work in AI/tech regulation, even when it’s not (solely) on the long-term consequences of AI we care about, but e.g. on short-term risks or even on the side of economic stimulus of AI development. It will often provide EAs with the political capital and credibility to work on long-term / x-risk-relevant topics later on, when there is an opportunity to switch roles.
I applied for an OPP strategy role during Summer 2021 and received no feedback in the first and second test task rounds. I wasn’t disappointed, because it was well compensated.
On a different note, however: this is one of the largest advantages I see coming from an EA recruitment agency that would be able to give feedback to EA candidates. It feels like quite a miss that I didn’t get any, since I have to do very similar work for my own organisation, Training For Good. Maybe there is something really obvious I can improve on, but due to the lack of feedback I don’t know what.