[I work for 80,000 Hours]
Thanks for your thoughts. I’m afraid I won’t be able to address everything, but I wanted to share a few considerations.
There were a few points here I particularly liked:
People should be thinking about the impact they can have in their career over a period of decades, rather than just the next year or so. This seems really useful to highlight, because it’s pretty difficult to keep in mind, particularly early on in your career.
We need to avoid a sense in the community that ‘direct work’ means ‘work in EA organisations’: the vast majority of the most impactful roles in the world are outside EA organisations—whether in government, academia, non-profits or companies.
The paths to these roles are very often going to be long, and involve building up skills, credibility/credentials and a network.
I agree that the phrase ‘skill bottleneck’ might fail to adequately capture resources like credentials and networks, and we do think these forms of career capital are as important as specific skills. However, we think they are most useful when they are reasonably relevant to a priority path. For example, we think Jason Matheny’s career capital is so valuable largely because his network and credentials were in national security, intelligence, U.S. policy, and emerging technology—areas we think are some of the most relevant to our priority problems. If he had worked at a management consulting firm or in corporate law, he would still have acquired generally impressive networks and prestige, but he couldn’t have founded CSET.
There are a few things I disagree with:
You seem to be fairly positive about pretty broad capital building (eg working at McKinsey). While we used to recommend working in consulting early in people’s careers, we’ve updated pretty substantially away from that in favour of taking a more directed approach to your career. The idea is to try to find the specific area you think is most suited to you and where you’ll have the most impact, and then to try out roles directly relevant to that. That’s not to say, of course, that it will be clear what type of role you should pursue, but rather that it seems worth thinking about which types of role seem best suited to you, and then trying out things of that type. Often, people who are able to acquire prestigious generalist jobs (like McKinsey) are also able to acquire more useful targeted jobs that would be nearly as good a credential. For example, if you think you might be interested in going into policy, it is probably better to take a job at a top think tank (especially if you can do work on a topic that’s relevant to one of our priority problems, such as national security or emerging technology policy) than to do something like management consulting. The former has nearly as much general prestige, but has much more information value to help you decide whether to pursue policy, and will allow you to build up a network, knowledge (including tacit knowledge), and skills which are more relevant to roles in priority areas that you might aim for later in your career. One heuristic we sometimes use to compare the career capital of two opportunities is to ask in which option you’d expect your career to be more advanced in a priority path 5-10 years down the line. It’s sometimes the case that spending years getting broad career capital and then shifting into a relevant area will progress you faster than acquiring more targeted career capital, but in our experience, narrow career capital wins out more often.
I agree that it’s really important for people to find jobs that truly interest them and which they can excel at. Having said that, I’m not that keen on the advice to start your career decision with what most fascinates you. Personally, I haven’t found it obvious what I’ll find interesting until I try it, which makes the advice not that action-guiding. More importantly, in order to help others as much as we can, we really need to both work on the world’s most pressing problems and find what inputs are most needed in order to make progress on them. While this will describe a huge range of roles in a wide variety of areas, it will still be the minority of jobs. That makes me think it’s better to approach career decisions by first thinking through what problems in the world you think most need solving and what the biggest bottlenecks to them being solved are, followed by which of those tasks seem interesting and appealing to you, rather than starting with the question of which jobs seem most interesting and appealing.
I’m a little worried that people will take away the message from your piece that they shouldn’t apply to EA organisations early in their careers, or should turn down a job there if offered one. Like I said—the vast majority of the highest impact roles will be outside EA organisations, and of course there’ll be many people who are better suited to work elsewhere. But it still seems to be the case that organisations like the Open Philanthropy Project and GiveWell are occasionally interested in hiring people 0-2 years out of university. And while there seem to be some people to whom working at EA organisations seems more appealing than it should, there are also many people for whom it seems less appealing or cognitively available than it should. For example, while the people on this forum are likely to be very inclined to apply for jobs at EA organisations, many of the people I talk to in coaching don’t know that much about various EA organisations and why they might be good places to work.
I think the thing to bear in mind is that it’s important not to apply only for jobs at EA organisations. The total number of jobs advertised at EA organisations at any one time is small, and new graduates should expect to apply to tens of jobs before getting one. Typically, the cost of applying to a valuable direct work job is fairly small relative to the benefit if it turns out that you’re already in a position to start making large contributions to a priority area, as long as you’re at the same time applying to jobs that would help you generate career capital.
Unfortunately, as you say, it seems very difficult to convey accurate impressions—whether about how hard it is to get into various areas, or what kind of skill bottlenecks we currently think there are. I think this is in part due to people having such different starting points. I come across both people who had the impression that it was easy to get into AI safety or EA organisations and then struggled to do so, and people who thought it was so competitive there was no point in them even trying but who (when strongly encouraged to do so) ended up excelling. We’re hoping that focusing more on long-form material like the podcast will help get a more nuanced picture across to people coming from different starting points.
One other thing that I just noticed: looking at the list of 80k’s 10 priority paths found here, the first 6 (and arguably also #8: China specialist) are all roles for which the majority of existing jobs are within an EA bubble. On one hand, this shows how well the EA community has done in creating important jobs, but it also highlights my concern about us steering people away from conventionally successful careers and engagement with non-EAs.
I actually don’t agree that the majority of roles for our first 6 priority paths are ‘within the EA bubble’: my view is that this is only true of ‘working in EA organisations’ and ‘operations management in EA organisations’. As a couple of examples: ‘AI policy research and implementation’ is, as you indicate, something that could be done at places like FHI or CSET. But it might also mean joining a think tank like the Center for a New American Security, the Belfer Center or RAND; or it could mean joining a government department. EA orgs are pretty clearly the minority in both our older and newer articles on AI policy. ‘Global priorities researcher’ in academia could be done at GPI (where I used to work), but could also be done as an independent academic, whether that simply means writing papers on relevant topics, or joining/building a research group like the Institute for Futures Studies (https://www.iffs.se/en/) in Stockholm.
One thing that could be going on here is that the roles people in the EA community hear about within a priority path are skewed towards those at EA orgs. The job board is probably better than what people hear about by word of mouth in the community, but it still suffers from the same skew—which we’d like to work towards reducing.
Thank you, this concrete analysis seems really useful for understanding where the perception of skew toward EA organizations might be coming from.
Last year I talked to maybe 10 people over email, Skype, and at EA Global, both about what priority path to focus on, and then about what to do within AI strategy. Based on my own experience last year, your “word of mouth is more skewed toward jobs at EA orgs than advice in 80K articles” conjecture feels true, though not overwhelmingly so. I also got advice from several people specifically on standard PhD programs, and 80K was helpful in connecting me with some of these people, for which I’m grateful. However, my impression (which might be wrong/distorted) was that especially people who themselves were ‘in the core of the EA community’ (e.g. working at an EA org themselves vs. a PhD student who’s very into EA but living outside of an EA hub) favored me working at EA organizations. It’s interesting that I recall few people saying this explicitly but have a pretty strong sense that this was their view implicitly, which may mean that this impression is driven more by my guess about what is generally approved of within EA than by people’s actual views. It could even be a case of pluralistic ignorance (in which case public discussions/posts like this would be particularly useful).
Anyway, here are a few other hypotheses of what might contribute to a skew toward ‘EA jobs’ that’s stronger than what 80K literally recommends:
Number of people who meet the minimal bar for applying: Often, jobs recommended by 80K require specialized knowledge/skills, e.g. programming ability or speaking Chinese. By contrast, EA orgs seem to open a relatively large number of roles for which roughly any smart undergraduate can apply.
Convenience: If you’re the kind of person who naturally hears about, say, the Open Phil RA job posting, it’s quite convenient to actually apply there. It costs time, but for many people ‘just time’ as opposed to creativity or learning how to navigate an unfamiliar field or community. For example, I’m a mathematician who was educated in Germany and considered doing a PhD in political science in the US. It felt like I had to find out a large number of small pieces of information that someone familiar with the US education system or political science would know naturally. Also, the option just generally seemed more scary and unattractive because it was in ‘unfamiliar terrain’. Relatedly, it was much easier for me to talk to senior staff at EA organizations than it was to talk to, say, a political science professor at a top US university. None of these felt like an impossible bar to overcome, but it definitely seemed to me that they skewed my overall strategy somewhat in favor of the ‘familiar’ EA space. I generally felt that, given how much attention there is on career choice in EA, I had surprisingly little support and readily available knowledge after I had decided to broadly “go into AI strategy” (which I feel like my general familiarity with EA would have enabled me to figure out anyway, and was indeed my own best guess before I found out that many others agreed with this). NB as I said, 80,000 Hours was definitely somewhat helpful even in this later stage, and it’s not clear to me if you could feasibly have done more (e.g. clearly 80K cannot individually help everyone with my level of commitment and potential to figure out the details of how to execute their career plan). [I also suspect that I find things like figuring out the practicalities of how to get into a PhD program unusually hard/annoying, but more like 90th than 99th percentile.] But maybe there’s something we can collectively do to help correct this bias, e.g. the suggestion of nurturing strong profession-specific EA networks seems like it would help with enabling EAs to enter that profession as well (as can research by 80K, e.g. your recent page on US AI policy). To the extent that telling most people to work on AI prevents the start of such networks, this seems like a cost to be aware of.
Advice for ‘EA jobs’ is more unequivocal; see this comment.
Hi Michelle, thanks for the thoughtful reply; I’ve responded below. Please don’t feel obliged to respond in detail to my specific points if that’s not a good use of your time; writing up a more general explanation of 80k’s position might be more useful?
You’re right that I’m positive about pretty broad capital building, but I’m not sure we disagree that much here. On a scale of breadth to narrowness of career capital, consulting is at one extreme because it’s so generalist, and the other extreme is working at EA organisations or directly on EA causes straight out of university. I’m arguing against the current skew towards the latter extreme, but I’m not arguing that the former extreme is ideal. I think something like working at a top think tank (your example above) is a great first career step. (As a side note, I mention consulting twice in my post, but both times just as an illustrative example. Since this seems to have been misleading, I’ll change one of those mentions to think tanks.)
However, I do think that there are only a small number of jobs which are as good on so many axes as top think tanks, and it’s usually quite difficult to get them as a new grad. Most new grads therefore face harsher tradeoffs between generality and narrowness.
More importantly, in order to help others as much as we can, we really need to both work on the world’s most pressing problems and find what inputs are most needed in order to make progress on them. While this will describe a huge range of roles in a wide variety of areas, it will still be the minority of jobs.
I guess my core argument is that in the past, EA has overfit to the jobs we thought were important at the time, both because of explicit career advice and because of implicit social pressure. So how do we avoid doing so going forward? I argue that given the social pressure which pushes people towards wanting to have a few very specific careers, it’s better to have a community default which encourages people towards a broader range of jobs, for three reasons: to ameliorate the existing social bias, to allow a wider range of people to feel like they belong in EA, and to add a little bit of “epistemic modesty”-based deference towards existing non-EA career advice. I claim that if EA as a movement had been more epistemically modest about careers 5 years ago, we’d have a) more people with useful general career capital, b) more people in things which didn’t use to be priorities, but now are, like politics, c) fewer current grads who (mistakenly/unsuccessfully) prioritised their career search specifically towards EA orgs, and maybe d) more information about a broader range of careers from people pursuing those paths. There would also have been costs to adding this epistemic modesty, of course, and I don’t have a strong opinion on whether the costs outweigh the benefits, but I do think it’s worth making a case for those benefits.
We’ve updated pretty substantially away from that in favour of taking a more directed approach to your career
Looking at this post on how you’ve changed your mind, I’m not strongly convinced by the reasons you cited. Summarised:
1. If you’re focused on our top problem areas, narrow career capital in those areas is usually more useful than flexible career capital.
Unless it turns out that there’s a better form of narrow career which it would be useful to be able to shift towards (e.g. shifts in EA ideas, or unexpected doors opening as you get more senior).
2. You can get good career capital in positions with high immediate impact
I’ve argued that immediate impact is usually a fairly unimportant metric which is outweighed by the impact later on in your career.
3. Discount rates on aligned-talent are quite high in some of the priority paths, and seem to have increased, making career capital less valuable.
I am personally not very convinced by this, but I appreciate that there’s a broad range of opinions and so it’s a reasonable concern.
It still seems to be the case that organisations like the Open Philanthropy Project and GiveWell are occasionally interested in hiring people 0-2 years out of university. And while there seem to be some people to whom working at EA organisations seems more appealing than it should, there are also many people for whom it seems less appealing or cognitively available than it should. For example, while the people on this forum are likely to be very inclined to apply for jobs at EA organisations, many of the people I talk to in coaching don’t know that much about various EA organisations and why they might be good places to work.
Re OpenPhil and GiveWell wanting to hire new grads: in general I don’t place much weight on evidence of the form “organisation x thinks their own work is unusually impactful and worth the counterfactual tradeoffs”.
I agree that you have a very difficult job in trying to convey key ideas to people who are coming from totally different positions in terms of background knowledge and experience with EA. My advice is primarily aimed at people who are already committed EAs, and who are subject to the social dynamics I discuss above—hence why this is a “community” post. I think you do amazing work in introducing a wider audience to EA ideas, especially with nuance via the podcast as you mentioned.
Could you add a tl;dr?
(I couldn’t deal with the wall of text, but it seems like there are probably a lot of good points here.)