Also, the (normative, rather than instrumental) arguments for democratisation in political theory are very often based on the idea that states coerce or subjugate their members, and so the only way to justify (or eliminate) this coercion is through something like consent or agreement. Here we find ourselves in quite a radically different situation.
tylermjohn
This is false. Jacy was accused of sexual harassment at Brown, never sexual assault. Some members of this community have conflated Jacy’s case with the case of another student, which for some reason shows up in google searches for Jacy’s name. This is an understandable confusion, but it is a very bad confusion to continue spreading.
Much as I am sympathetic to many of the points in this post, I don’t understand the purpose of the section, “Can you demand ten billion dollars?”. As I understand the proposal to democratise EA it’s just that: a proposal about what, morally, EA ought to do. It certainly doesn’t follow that any particular person or group should try to enforce that norm. So pointing out that it would be a bad idea to try to use force to establish this is not a meaningful criticism of the proposal.
Strong agree. All of the evidence cited in this post is about philosopher-bioethicists, and my experience working in bioethics (including at the NIH Department of Bioethics) says that philosopher-bioethicists are much more progressive than bioethicists with a health background. And unfortunately, bioethicists with a health background have much stronger ties to the medical community and health care policy. One major piece of evidence for this is that none of the “bioethicists” mentioned in this post (other than Art Caplan) are members of the American Society for Bioethics and Humanities, the main professional organisation in bioethics, which “represents nearly 1,800 physicians, nurses, social workers, members of the clergy, educators, researchers, and other healthcare professionals interested in the specialty of bioethics and the health humanities.” (I know most of the philosophers mentioned personally, and I have attended the ASBH conference three times, so I have a strong sense of who is there and what the conversations are like.) That experience suggests that most ASBH members see the philosophers mentioned as excessively radical, and that they are routinely ignored by the core bioethics community.
I think it could make sense in various instances to form a trade agreement between people earning and people doing direct work, where the latter group has additional control over how resources are spent.
It could also make sense to act like that trade agreement which was not in fact made was in fact made, if that incentivises people to do useful direct work.
But if this trade has never in fact transpired, explicitly or tacitly, I see no sense in which these resources “are meaningfully owned by the people who have forsaken direct control over that money in order to pursue our object-level priorities.”
Hi Spencer and Amber,
There’s a pretty chunky literature on some of these issues in metaethics, e.g.:
Moral fictionalism, or why it could make sense to talk in terms of moral truths even if there aren’t any
Moral antirealism/constructivism, or why there can be moral “shoulds” and “oughts” even if these are just mental attitudes
Why, even if you’re a pluralist, utilitarian considerations may dominate your reasoning across a range of psychologically typical value systems, given how much welfare matters to people compared to other things and how much we can affect it through effective altruism
How there can be different ways of valuing things, some that you endorse and some that you don’t (especially among constructivists like Street, Korsgaard, and Velleman), and why it could make sense to only act on values you endorse acting on
Relatedly, how your moral theory might be different from your spontaneous sentiments because you can think through these sentiments and bring them into harmony (e.g. the discussion of reflective equilibrium)
Obviously it would be a high bar to require PhD-level training on these topics and a read through the whole literature before posting on the EA Forum, so I’m not suggesting that! But I think it would be useful to talk some of these ideas through with antirealist metaethicists, because they have responses to a bunch of these criticisms. I know Spencer and I chatted about this once, and probably we should chat again! I could also refer you to some other EA folks who would be good to chat to about this, probably over DM.
All of that said, I do think there are useful things about what you’re doing here, especially e.g. part 2, and I do think that some antirealist utilitarianism is mistaken for broadly the reasons you say! And the philosophers definitely haven’t gotten everything right, I actually think most metaethicists are confused. But especially claims like those made in part 3 have a lot of good responses in existing discussion.
ETA: Actually if you’re still in NYC one person I’ll nominate to chat with about this topic is Jeff Sebo.
That argument would be seen as too weak in the political theory context. Then powerful states would have to enfranchise everyone in the world and form a global democracy. It also is too strong in this context, since it implies global democratic control of EA funds, not community control.
Thanks for posting this here as well as Jess’s excellent questions! This seems like a nice place to continue the conversation around the paper, so I’ll respond to what I take to be the most pertinent issues in the blog post here. As Jess notes, this is a relatively early attempt to formulate these ideas and the literature on longtermist institutional reform is extremely young, so the more conversation the better.
How will (short-term) vested interests try to capture these in-government research groups, and how will that be prevented? Why is this better done within the government rather than done in academia using grants from the government or philanthropists?
Most governments are swamped with expertise. It’s not that they have too little of it, but that they are overwhelmed with it, can’t absorb it, and don’t know who to turn to as a reliable source of information. Governments need one or a small body of epistemically reliable and nonpartisan research groups that they can turn to which fill the function of synthesizing extant research into consumable reports for government. These research groups in turn need to have strong working relationships and good lines of communication with government. If an academic or privately-funded research institute could play that role, that would be fine, but it’s harder to see how this would be possible, and in-government research groups and advisory boards have a good track record of playing this sort of role. (We use the OTA as one prominent example, but there are many others on smaller scale.) One additional benefit of research institutes that are set up by government is that when the government is perceived as legitimate, these institutes will also be seen as legitimate and reliable sources of information. It would be valuable for the described research institutes to have public legitimacy, so that if their publicly disseminated research were ignored by government this fact could precipitate public censure.
If public censure isn’t enough to command the attention of government to the research, then a research institute with government authority could also have the “put-it-in-their-face-power” we suggest in the paper, forcing reading and a response by government.
Short-term interest capture is an important worry, and we see this already in privately-funded research groups as well as in academia. One mechanism we propose in the paper for preventing capture by interest groups and industry is to have researchers selected by professional associations or by lot. If the research body is large enough and its key members and leadership are shuffled frequently enough, this should prevent a great deal of corruption. But of course, we are open to other ideas depending on the additional concerns that arise.
What will incentivize the citizen assembly to actually benefit future citizens? Merely because they are “explicitly tasked with the sole mandate”, with no enforcement or feedback?
The citizens’ assembly proposed doesn’t have a strong mechanism for amplifying the concern of assembly members for future people. It is assumed that they already have some interest in doing this, as roughly all people do. The role of the citizens’ assembly isn’t to amplify personal motivation, but rather to i) reduce election and funding incentives that disincentivize the electorate from focusing on the long-term, ii) reduce the deleterious effects of polarization on long-term deliberation, and iii) create designated agenda time for long-term issues. All of these sources of short-termism hamper governmental motivation to focus on the long-term, so we should expect the citizens’ assembly to be much more motivated to benefit future generations than existing government organs. The motivation comes from the citizens themselves, but it has far fewer obstacles to overcome than the motivation of the electorate.
That said, the literature on assemblies does suggest that participation in assemblies decreases citizen political apathy and increases empathy between deliberation participants, so there could be some salutary motivational effects of citizens’ assemblies that we haven’t considered here. Moreover, political decisions tend to operate with 2-5 year timelines, and the assembly members will in general live for much longer than this. Given that the citizens’ assembly will be deliberative and better-informed than the general public, it is possible that it will function more rationally, seeking to promote the diverse interests of the diverse group of people within the assembly across their lifespans, rather than over the next 2-5 years, and this would significantly decrease short-termism. But this is rather speculative, and the central purpose of the assembly is not to increase this kind of motivation.
Does thinking that the citizen assembly would be effective imply that most government assemblies should be selected by sortition (which, right or wrong, has been deployed pretty rarely worldwide)? Or is there something about the future and/or soft power that makes sortition particularly well suited for this body? (Personally, I like sortition as a governing mechanism in general, but if we can hardly get anyone to use it generally, why might they here?)
Sortition has perhaps been deployed less rarely than you think! There have been at least 120 citizens’ assemblies and citizen juries deployed worldwide, and sortition is regularly used for the selection of court juries. But it’s true that they’ve rarely been used for the selection of long-lasting government positions.
The role of the citizens’ assembly I mentioned above, I think, shows why sortition should be especially helpful here: it removes perverse election incentives to attend to the short-term, and it also reduces the effect of partisan forces, decreasing polarization. These seem especially important when considering long-term issues where our situation is epistemically precarious, but you’re right to point out that they are generally very important. I am personally quite open to the idea that a very large proportion of political leaders should be selected randomly. My own dissertation supervisor, Alex Guerrero, is writing an excellent book defending this idea at this very moment.
On why we might be able to get government to use it here: citizens’ assemblies have a relatively strong tradition of use for gathering information on the informed views of citizens, and have in the last decade become increasingly popular. As above, I would advocate for greater experimentation with sortition, but it has most commonly been used in citizens’ assemblies similar to the one we describe, and we expect it to continue to be popular in these institutions.
Will prosperity impact statements obviously improve the long-term future more than they will be used to block/delay projects for near-term reasons? Certainly, environmental impact statements suffer from this problem, and EISs at least have the advantage that there is often some way to objectively check, in a reasonable amount of time, whether they were right or wrong.
This is the issue raised in the blog post that I find trickiest. It’s certainly true that EIAs have frequently been used to block and delay projects on spurious grounds, and the point here that PIAs are less epistemically tractable is spot-on and important. One advantage of PIAs in the legislature is that many more resources can be put to ensuring that they are objective and accurate than can be put into, say, local jurisdictions, given the much greater resources of the federal government and the fewer number of items requiring assessment. An idea we considered but didn’t include here is that an independent, non-partisan body such as the in-government research institutions we defend could perform the impact assessments, taking them out of the hands of politicians who might use them for more obstructionist ends. But I remain quite uncertain on the best mechanism for ensuring that PIAs fulfill their information-gathering and soft censure functions rather than becoming used primarily to fuel partisan obstructionism, and I’d certainly be interested in other ideas.
Thanks for doing this! Though it seems like you kinda buried the lede. Why isn’t this in the top level summary?
In expectation, THL is >100x better than AMF
In the median scenario, THL is about 2-4x more cost-effective than AMF
A 71% chance that THL is more cost-effective than AMF
Hi Tobias,
I’m glad to see CRS take something of an interest in this topic and I’m particularly happy to see some meta-level discussion of representing the interests of future generations which has been sorely missing from the longtermism space.
We are in full agreement that most extant proposals to represent future generations involve very weak institutions and often rely on tenuous political commitments. In fact, it’s because political commitments are so tenuous that political institutions to represent future generations must at first be weak. Strong institutions for future generations have historically been repealed very rapidly, as Jones, O’Brien, and Ryan (2018) have argued from a couple case studies.
We are also in full agreement that there are problems with predicting the interests of future generations, and that getting more objective information about their interests is a key problem. This problem grows worse over longer timescales. This is why many of the solutions I am personally most favorable to are information interventions, such as creating research bodies like the now-defunct Office of Technology Assessment, which can distill and package extant expertise for legislative bodies, as well as posterity impact assessments, which can create strong incentives to gather more information about the future.
I find much less compelling the idea that “if there is the political will to seriously consider future generations, it’s unnecessary to set up additional institutions to do so,” and “if people do not care about the long-term future,” they would not agree to such measures. The main reason I find this uncompelling is just that it overgenerates in very implausible ways. Why should women have the vote? Why should discrimination be illegal?
The main long-term function that I see longtermist institutional reform, or any other kind of institutional reform, playing is an institutional signalling role. There is compelling evidence that legal and political reform significantly shifts the norms and attitudes that people come to see as acceptable (Berkowitz and Walker 1967, Bilz and Nadler 2009, Flores and Barclay 2015, Tankard and Paluck 2016, 2017, Walker and Argyle 1964). Shifting laws and institutional norms credibly signals information about group attitudes to anyone who has access to information about those laws and norms. In this case, it signals that good, sensible, right-thinking people think that future generations are of great importance and that our political systems must be responsive to their interests. For this reason, there is a chicken-and-egg problem for institutional reform, but this chicken-and-egg problem is very friendly to supporters of institutional reform. Reforming institutions changes attitudes, which in turn creates the political will necessary to reform institutions further. Reformed institutions in turn create stable Schelling points that prevent value drift away from core values.
For this reason, longtermist institutional reform is quite beneficial for information-gathering purposes. Representing future generations creates greater political and cultural will to gather objective information about the interests of future generations. It’s an exercise in movement-building.
I don’t know if you meant to narrow in on only those reforms I mention which attempt to create literal representation of future generations or if you meant to bring into focus all attempts to ameliorate political short-termism. In the latter case, it’s worth noting that there are a large variety of likely causes of short-termism. Some of them are epistemic (we don’t know what to do) and motivational (we lack the political will), but others are merely institutional. In these latter cases, the problem is not that we don’t have enough information or will, but rather that the right information is not getting to the right people or that institutional mechanisms are preventing appropriately-motivated and informed actors from acting for the long term. These sorts of problems sometimes require different fixes, and they can sometimes be fixed simply by creating designated stakeholders who create relevant coordination points in government and have time allocated explicitly to considering the long-term. Political problems are often a problem of institutional incentives rather than of political will, and there are currently very strong incentives to focus on the short-term. I canvass many of the various causes of political short-termism in my (now rather lengthy) review on longtermist institutional design and policy.
As a classical utilitarian, I’m also not particularly bothered by the philosophical problems you set out above, but some of these problems are the subject of my dissertation and I hope that I have some solutions for you soon.
In short, I think there is reason for more optimism about longtermist institutional reform than you express here, but I am happy to have some further discussion of the problem and to see a call to consider more seriously the epistemic problems that plague such reform along with some possible solutions.
Thanks! I appreciate your wariness of overemphasizing precise numbers and I agree that it is important to hedge your estimates in this way.
However, none of the claims in the bullet you cite give us any indication of the expected value of each intervention. For two interventions A and B, all of the following is consistent with the expected value of A being astronomically higher than the expected value of B:
B is better than A in most of the most plausible scenarios
On most models the difference in cost-effectiveness is small (within 1 or 2 orders of magnitude)
One could reasonably believe that A is better than B or that B is better than A
Extremely little information is communicated about the relative expected value of A and B by the above points, and what information is communicated misleadingly suggests that both interventions are quite close in expected value. Because EAs are concerned with the expected value of interventions, I think you ought to communicate more about the relative expected value of the interventions and frame your summary of the interventions in a way that is less likely to mislead people about the relative expected value of each intervention.
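To make this concrete, here’s a toy numerical sketch (the numbers are entirely made up and not tied to the THL/AMF models or any real cost-effectiveness estimate) showing how all three bullets above can hold while the expected values diverge wildly:

```python
# Toy illustration with hypothetical numbers: two interventions whose
# scenario-by-scenario comparison looks close, yet whose expected values
# differ by orders of magnitude.

# Intervention A: usually a small payoff, occasionally an enormous one.
a_outcomes = [(0.9, 1.0), (0.1, 1_000_000.0)]  # (probability, value) pairs

# Intervention B: a modest payoff for certain.
b_outcomes = [(1.0, 5.0)]

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

ev_a = expected_value(a_outcomes)  # ~100,000.9
ev_b = expected_value(b_outcomes)  # 5.0

# B beats A in 90% of scenarios, and in those scenarios the gap is less
# than one order of magnitude -- yet A's expected value is roughly
# 20,000x higher than B's.
print(ev_a, ev_b, ev_a / ev_b)
```

Summary statistics like “B wins in most scenarios” throw away exactly the tail that drives the expectation, which is why they can mislead an expected-value maximizer.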
I think the ideally informative way to both communicate the relative expected value of the interventions and hedge on your model uncertainty in the summary is to (1) provide your expected value estimate, (2) explain that you have high model uncertainty and one could arrive at a different expected value estimate with different assumptions, and (3) invite participants to adjust the Guesstimate and generate their own predictions.
Noting that I think that making substantive public comments on this draft (including positive comments about what it gets right) is one of the very best volunteer opportunities for EAs right now! I plan to send a comment on the draft before the deadline of 6 June.
I’d love to hear what you think we’d be doing differently. With JackM, I think if we thought that hinginess was pretty evenly distributed across centuries ex ante we’d be doing a lot of movement-building and saving, and then distributing some of our resources at the hingiest opportunities we come across at each time interval. And in fact that looks like what we’re doing. Would you just expect a bigger focus on investment? I’m not sure I would, given how much EA is poised to grow and how comparably little we’ve spent so far. (Cf. Phil Trammell’s disbursement tool https://www.philiptrammell.com/dpptool/)
Excellent. This is a much better idea than the “allow the 2119 people to decide whether to sentence the grandchildren of the 2019 political leaders to the tribunal of death” feedback mechanism that, disturbingly, came to me more readily.
It would be interesting to think about whether there are other feasible ways to see to it that the decisions of future people provide an incentive for the actions for present people.
Two concerns I have with this general kind of scheme are that it requires citizens to have lots of faith that the relevant institutions and the policy will persevere 100 years into the future (the 100-year bond stuff is relevant to this), and that it might not play well with high rates of immigration (since fluidity in polity membership could undermine the efficacy of long-term feedback mechanisms for members of that polity). But these might just be details to be ironed out rather than insolvable problems with the design.
This is great and under-emphasized. I think it was @weeatquince who told me that the primary determinant of what gets implemented by governments is what has successfully been tried before, and while I haven’t seen much empirical data on this it strikes me as plausible.
One counter-point comes from Michael Rose’s book Zukünftige Generationen in der heutigen Demokratie (Future Generations in Today’s Democracy), which finds that low institutional path-dependence (approximated by the rate of recent constitutional changes) had no effect on the institutionalization of powerful proxies for future generations in a (pretty small) fuzzy-set analysis.
On the other hand, former Welsh minister Jane Davidson says that Wales was able to implement their Well-being of Future Generations Act due to the innovativeness of the Welsh government in her new book #FutureGen.
In addition to seeing more EAs get into innovative governments to run policy experiments, it would be great to see further research on policy diffusion and on the importance and proper characterization of governmental innovativeness in the sense you outline here.
Surprising (and confusing!) as it may be, there is some evidence that voters would vote differently with their Demeny vote than with their first vote.
I’ve asked Ben Grodeck (who clued me into Demeny voting) to weigh in with more data, but for now see this study from Japanese economist Reiko Aoki, who found (Table 8 and Figure 7) that surveyed participants permitted to cast one vote on behalf of themselves and one vote for their child sometimes vote differently on their second vote. The effect isn’t drastic, but it is certainly non-trivial.
http://hermes-ir.lib.hit-u.ac.jp/rs/bitstream/10086/22250/1/cis_dp539.pdf
The study authors further find that policy preferences on behalf of oneself and on behalf of one’s children diverge to a greater degree, and the authors hypothesize that we would see more divergence between the multiple votes of Demeny voters if they had different political options that better reflected the divergence between these sets of preferences. Thus, they think that instituting Demeny voting would cause party platforms to change to try to cater to the policy preferences of parents voting on behalf of their children.
Thanks Sam! I don’t have much more to say about this right now since on a couple things we just have different impressions, but I did talk to someone at 80k last night about this. They basically said: some people need the advice Tyler gave, some people need the advice Sam gave. The best general advice is probably “apply broadly”: apply to some EA jobs, to some high-impact jobs outside of EA, to some upskilling jobs, etc. And then pick the highest EV job you were accepted to (where EV is comprehensive and includes things like improvements to your future career from credentialing and upskilling).
More on the question of what best explains these trends:
http://eprints.lse.ac.uk/88702/1/dp1552.pdf
Ahlfeldt et al. analyze 305 Swiss referenda and argue that aging effects swing free from cohort effects and status quo habituation effects. “The evidence, instead, suggests that voters make deliberate choices that maximize their expected utility conditional on their stage in the lifecycle.”
Hi KelseyPiper, thanks so much for a thoughtful reply. I really agree with most of this—I was talking in terms of these benefits as “pure” benefits because I assumed the many costs you rightly point out up front. That is, assuming that we read Kelly’s piece and we come away with a sense of the costs and benefits that promoting diversity and inclusion in the Effective Altruism movement will have, these benefits I’ve pointed out above are “pure” because they come along for free with that labor involved in making the EA community more inclusive, and don’t require additional effort. But I understand how that could be misleading, and so I take all of your criticism on board. I also agree that this will involve priority-setting—even if we think that all of these suggestions are important and some people should be doing all of them to some extent (and especially if not), there are some that we ought to spend more time on than others as a community.
I also agree that the EA community should focus on identifying and working on the very most important things. Although I might disagree slightly with how you’ve characterized that. I don’t think that we should be a community doing work that fosters “fast progress on the most important things,” because I think that we should be doing whatever does the most good in the long run, all-things-considered, and fostering “fast progress” on the most important things does not necessarily correlate with doing the most good in the long run, all-things-considered—unless we define “fosters fast progress” in a way that makes this trivial. But if, for example, we could perform one of two different interventions, one which added an additional +5 well-being to all of the global poor, on average, over twenty years, for one generation, and one which added an additional +5 well-being to all of the global poor, on average, over one hundred years, for all generations, we should choose the latter intervention, even though the former intervention is in a sense fostering faster progress. I make this point not to be pedantic, but because I think some EAs sometimes forget that what we (or many of us) are trying to do is to produce the most benefits and avert the most harm all-things-considered, and not simply make a lot of progress on some very important projects very quickly, and I think that this is quite relevant to this conversation.
To your question as to why “the magnitude of the current EA movement’s contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans,” I unfortunately haven’t written something on this and perhaps I should. But I can say a few things. I should first say that I certainly don’t think it’s obvious that the EA movement’s contributions to such harmful structures clearly will outweigh the magnitude of the effects we have on nonhumans and on the poorest humans. I only claimed that it was non-obvious that the effect size was “very small” compared to the positive effects we have. It’s something more EAs should treat as non-negligible more often than they do.
Still, here are some of the basic reasons why I think that the EA movement’s contributions to harmful social structures could well be of sufficient magnitude that we should keep constant accounting of them in our efforts to do good in the world, apart from reputation costs and instrumental epistemic benefits of inclusion and diversity work. First, the fundamental structure of society and its social, legal, and political norms profoundly shape the kinds and quality of life of all beings, as well as profoundly shaping cultural and moral mores, and so ensuring that the fundamental structure of society and these norms are good ones is crucial to ensuring that the long-run future is good, and shaping these structures for the better may make the trajectory of the future far better than the counterfactual where we shape these structures for the worse (for reasons of legal precedent, memetics, psychological and value anchoring, and more). Second, norms against harming others are very sticky—much stickier than norms favoring helping others except in certain particular cases (e.g. within one’s own family). They are psychologically sticky, whether for innate biological reasons which fix this, or for entirely cultural reasons. Which of these is true makes a difference to how much staying power this stickiness has. But whichever is true, ensuring that we set good norms in place around not causing harm to others and ensuring that these norms are stringently upheld and not violated so that we internalize them as commonsense norms seems like a good way to shape how the future goes. They are also easier to enforce through sanction, blame, and punishment, whereas norms of aid (especially effective aid) are more difficult to enforce. And our human legal and political history suggests that they are much easier to codify into law. 
So for all these reasons, ensuring that we have good norms in these areas and not violating them looks like a very important intervention for shaping the social and legal institutions of future societies. Third, there are reasons to think that our moral and political attitudes towards others are psychologically intertwined in complex ways. How we treat and think about some groups, and the norms we have around harming and helping them, seems to have an impact on how we treat and think about other groups. This seems especially important if we are interested in expanding our human moral circle to include nonhuman animals and silicon-based sentient life. If our negative attitudes, norms, laws, and practices around other humans have negative downstream effects on our attitudes, norms, laws, and practices around other animals and other, inorganic sentient beings, then the benefits of prioritizing moral development and averting harmful social structures which favor some sentient beings over others may be very important. If AI value alignment is decided as a result of a political arms race, then it seems that having a broader moral circle may significantly shape the impact of intelligent and superintelligent AI for better or worse. (Here I’m out of my depth, and my impression is that this is a matter of significant disagreement, so I certainly won’t come down hard on this.) The main point is that the downstream effects of our norms, attitudes, laws, and practices around humans, and who our society decides is worthy of full moral consideration, may have significant downstream effects in complicated and to some extent unpredictable ways. The more skeptical we are about how much we know about the future, the greater our uncertainty should be about these effects. 
I think it’s reasonable to be concerned that this may be too speculative or too optimistic about the downstream consequences of our norm-shaping on the far future, but we should be careful to remember that there are also skeptical considerations cutting in the opposite direction—measurability bias may lead us to exclude less measurable, long-term effects in favor of more measurable, short-term effects of our actions irrationally.
I am not arguing that actively averting oppressive social structures and hierarchies of dominance should be a main cause area for EAs (although that could be an upshot of this conversation, too, depending on the probabilities we assign to the hypotheses delineated above). But given the psychological, social, and legal stickiness of norms against harming, failing to make EA a more diverse and inclusive community will raise the probability of EAs harming marginalized communities and failing to create and uphold norms around not harming them. And the more influential the EA community is as a community, the more this holds true. So it seems to me that there’s a plausible case to be made that entrenching strong norms against treating marginalized communities inequitably within the EA community is an effective cause area that we should spend some of our time on, even if we should spend the majority of our time advocating for farmed and wild animals and the global poor.
Hi readers! I work as a Programme Officer at a longtermist organisation. (These views are my own and don’t represent my employer!) I think there’s some valuable advice in this post, especially about not being constrained too much by what you majored in. But after running several hiring rounds, I would frame my advice a bit differently. Working at a grantmaking organisation did change my views on the value of my time. But I also learned a bunch of other things, like:
The majority of people who apply for EA jobs are not qualified for them.
Junior EA talent is oversupplied, because of management constraints, top of funnel growth, and because EAs really want to work at EA organisations.
The value that you bring to your organisation/to the world is directly proportional to your skills and your fit for the role.
Because of this, typically when I talk to junior EAs my advice is not to apply to lots more EA jobs but rather to find ways of skilling up — especially by working at a non-EA organisation that has excellent managers and invests in training its staff — so that one can build key skills that make one indispensable to EA organisations.
Here’s a probably overly strong way of stating my view that might bring the point home: try to never apply to EA jobs, and instead get so good at something that EA orgs will headhunt you and fight over you.
I know that there are lots of nice things about working at EA organisations (culture, community, tangible feelings of impact) but if you really value work at EA organisations, then you should value highly skilled work at EA organisations even more (I think a lot more!). Having more junior EAs find ways to train up their skills and spend less time looking for EA work is the only way I can see to convert top of funnel community growth into healthy middle of funnel community growth.