I downvoted this forum post because I think the quoted part of the text, while obviously informal, is an annoying strawman of criticisms EA has faced and represents an attitude towards critique that I think is quite counterproductive. I think the rest of the linked post is significantly better though, and I agree with its general point.
Thanks a lot for posting this! I really enjoyed reading it and the linked Google document—would anyone in the EA Philippines team be interested in a short meeting with me about this? I currently run EA Oxford and have some specific questions.
Thanks for the thoughtful comment Amber! I appreciate the honesty in saying both that you think people should think more about prioritisation and that you haven't always done so yourself. I have definitely been like this at times, and I think it is good/important to be able to say both things together. I would be happy/interested to talk through your thinking about prioritisation if you wanted—others have told me they found me helpful to talk to about this kind of thing, as it comes up frequently in my community building work.
Re. (1), I agree that not everyone can be in the heavy tail of the community distribution, but I don't think there are strong reasons to think that people can't reach the "personal heavy tail" of their career options as per the graph. Ie. they might not all be able to have exceptional impact relative to the world/EA population, but they can have exceptional impact relative to other counterfactual versions of themselves, and I think that is still something worth striving for.
For (1) and (2), I guess my model of the job market/impact opportunities is less static than your phrasing suggests. I don't conceive of impact opportunities as a fixed number of "impactful" jobs at EA orgs that we need to fill, and I think you often don't need to be super "entrepreneurial", per your words, to look beyond this. Perhaps ironically, I think your work is a great example of this (from what I understand). You use your particular writing skills to help other EAs in a way that could plausibly be very impactful, and this isn't necessarily a niche that would have been filled if you hadn't taken it. There are also lots of other career paths (eg. journalism, politics, earning to give etc) which for many people probably have higher impact potential than typical EA org roles, but which aren't captured by the framing I perceived you to be using. Of course, there are also different "levels" of being entrepreneurial which mean you aren't really directly substituting for someone else even if you aren't founding your own organisation (such as deciding on a new research agenda, taking a team in a new direction etc).
I think you might have already captured a lot of this with your "failure of imagination..." sentence, but I do think that what I am saying implies people are capable of finding a path that reaches their impact potential. Perhaps some people will be the very best fit for particular "EA org" jobs, but that doesn't mean others can't carve out very impactful career paths for themselves. I agree that in some cases this might look like contributing to the EA ecosystem and using particular skills to be a multiplier on others doing work you think is really important, but I don't think it is a binary between this and working in a key role at an "EA org".
Taking prioritisation within ‘EA’ seriously
Perhaps another consideration against is that it seems potentially bad to me for any one person to be the primary mediator for the EA community. There are some worlds where this position is subtly very influential, and I don't think I would want a single person/worldview to have that, in order to avoid systematic mistakes/biases. To be clear, this is not intended as a personal comment—I have no context on you besides this post.
I am excited about having better community mediation though. Perhaps you coordinating a group/arrangement with external people could be a great idea.
Also I think this kind of post about personal career plans with detailed considerations is great so thanks for writing it.
Thanks David, that all makes sense. Perhaps my comment was poorly phrased, but I didn't mean to argue for caring about infohazards per se; I was curious for opinions on it as a consideration (mainly poking to build my/others' understanding of the space). I agree that imposing ignorance on affected groups is bad by default.
Do you think the point I made below in this thread regarding pressure from third party states is important? Your point "it doesn't matter to them whether it also devastates agriculture in Africa or Australia" doesn't seem obviously true, at least once indirect effects are considered. Presumably it would matter a lot to Australia/African countries/most third party states, and they might apply relevant political pressure. It doesn't seem obvious that this would be strategically irrelevant in most nuclear scenarios.
Even if there is some increased risk, I find it a confusing question how this trades off against being honest/having academic integrity. Perhaps the outside view (in almost all other contexts I can think of, researchers being honest with governments seems good—perhaps the more relevant class is military-related research, which feels less obvious) dominates here enough to follow the general principles.
Thanks for the reply and link to the study—I feel quite surprised by how minor the effect of impact awareness is, but I suppose nuclear war feels quite salient for most people. I wonder if this could be some kind of metric for evaluating the baseline awareness of a danger (ie. I would be very interested to see the same study applied to pandemics, AI, animals etc).
Re. the effects on government decision making, I think I agree intuitively that governments are sufficiently scope insensitive (and self-interested in nuclear war circumstances?) that it would not necessarily make a big difference to their own view.
However, it seems plausible to me that a global meme of "any large-scale nuclear war might kill billions globally" might mean there is far greater pressure from third party states to avoid a full nuclear exchange. I might try thinking more about this and write something up, but it does seem like having that situation could make a country far less likely to use nuclear weapons.
Obviously nuclear exchanges are not ideal for third parties even with no climate effect, and I feel unsure how much of a difference this might make. It also doesn’t seem like the meme is currently sufficiently strong as to affect government stances on nuclear war, although that is a reasonably uninformed perspective.
Thanks for writing this—it seems very relevant for thinking about prioritization and more complex X-risk scenarios.
I haven't engaged enough to have a particular object-level take, but was wondering if you/others had a take on whether we should consider this kind of conclusion somewhat infohazardous? Ie. should we be making this research public if it at all increases the chance that nuclear war happens?
This feels like a messy thing to engage with, and I suppose it depends on beliefs around honesty and trust in governments to make the right call with fuller information (of course there might be some situations where initiating a nuclear war is good).
Thanks for writing this post Victor, I think your context section represents a really good, truth-seeking attitude to come into this with. From my perspective, it is also always good to have good critiques of key EA ideas. To respond to your points:
1 and 2. I agree that the messaging about maximisation carries the danger of people taking it too far, but I think it is quite defensible as an anchor point. Maybe this should be more present in the handbook, but I think it is worth saying up front that >95% of EAs' lives don't look like those of the extreme naive optimiser in your framing.
I think I see EA more as asking "how can you do the most good with X resources?", where it is up to you to determine X in terms of your time, money, career etc. When phrases begin with "EAs should", I generally interpret that as "if you want to have more impact, then you should". I think the moral demandingness aspect is actually not very present in most EA discourse, and this is likely best for ensuring a healthy community.
EAs are of course human too, and the community, from what I have seen of it, is generally very supportive of people making decisions that are right for themselves when necessary (eg. career breaks, quitting a very impactful job, changing jobs to have kids etc—an example (read the comments)). Even if you are a "hard-core utilitarian", placing some value on your own happiness, motivation etc is still good for helping you achieve the best you can. Most EAs live on quite healthy salaries, in nice work environments, with a supportive community—while I don't deny that there are also mental health issues within the group, I think EA as a movement thus far hasn't caused many people to be self-sacrificial to the point of harming their wellbeing.
On whether maximisation is a good goal in the first place: the current societal default in most altruistic work is to not consider optimisation or effectiveness at all. This has led to huge amounts of wasted time and money, which by extension has allowed massive amounts of suffering to continue. While your subpoint 5 about uncertainty is true, I think EA successes have demonstrated the ability to increase the expected impact you have through careful thought and evidence, hence the value EA has placed on rationality. Of course people make mistakes and some projects aren't successful or might even be net negative, but I think it is reasonable to say that the expected value of your actions is what is important. If you buy that the effectiveness of interventions is roughly heavy-tailed, then you should also expect that the best options are much better than the "good" ones, and so it is worth taking a maximisation mindset to get the most value.
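To make the heavy-tail point concrete, here is a minimal sketch (the lognormal shape and its parameters are illustrative assumptions I've made up for the example, not estimates of real intervention effectiveness):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "effectiveness" scores for 10,000 interventions, drawn from a
# lognormal purely to illustrate a heavy-tailed distribution.
effectiveness = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)

median = np.median(effectiveness)
top_1_percent = np.percentile(effectiveness, 99)
print(f"median: {median:.2f}, top 1%: {top_1_percent:.2f}, "
      f"ratio: {top_1_percent / median:.0f}x")
# Under these made-up parameters the 99th-percentile option comes out roughly
# 30x better than the median one, which is why a maximising mindset matters.
```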
I don't think saying "the world is a bad place" is a very useful or meaningful claim to make, but I think it is true that there is just so much low-hanging fruit still on the table for making it so much better, and that this is worth drawing attention to. People say things like "the world is bad" (which could be phrased in a better way) because honestly a lot of the world just doesn't care about massive issues like poverty, factory farming, or threats from eg. pandemics or AI, and I think it is somewhat important to draw attention to the status quo being a bit messed up.
3. Ah, your initial point is a classic argument that I think targets something no EA actually endorses. Moral uncertainty and ideas of worldview diversification are highly regarded in EA, and I think everyone would immediately reject acts that cause huge suffering today in the hope of increasing future potential, for both moral and epistemic uncertainty reasons.
I think your points regarding the insignificance of today's events for humanity's long term seem to rely heavily on the future not being path dependent—my guess is that how the next couple of centuries go on key issues like AI, international coordination norms, factory farming, and space governance could all significantly affect the long-term expected value of the future. I think ideas of hinginess are good to think about here, see: Hinge of history—EA Forum (effectivealtruism.org).
4. I agree it is generally a confusing topic and don’t have anything particularly useful to say besides wanting to highlight that people in the community are also very unsure. Fwiw I think most S-risk scenarios people are worried about are more to do with digital suffering/astronomical scale factory farming. I think human-slavery type situations are also quite unlikely.
Thanks for writing this, I found it helpful for understanding the biosecurity space better!
I wanted to ask whether you have advice, as a community builder, for handling the difficulty biosecurity poses for cause prioritisation.
I think it is easy to build an intuitive case that biohazards are not very important or an existential risk, and my group members often do this (even good fits for biosecurity like biologists and engineers), then dismiss the area in favour of other things. They (and I) do not have access to the threat models which people in biosecurity are actually worried about, making the area extremely difficult to evaluate. An example of this kind of thinking is David Thorstad's post on overestimating risks from biohazards, which I thought was somewhat disappointing epistemically: https://ineffectivealtruismblog.com/2023/07/08/exaggerating-the-risks-part-9-biorisk-grounds-for-doubt/.
I suppose the options for managing this situation are:
1. Encourage deference to the field that biosecurity is worth working on relative to other EA areas.
2. Create some kind of resource which isn't an infohazard in itself, but which makes a good case for biosecurity's importance, perhaps by gesturing at some credible threat models.
3. Permit the status quo, which seems likely to lead to an underprioritisation of biosecurity.
(2) seems best if it is at all feasible, but I am unsure what to do between (1) and (3).
What kind of information do you mean by semi-objective? Something comparable to this, for instance? Nuclear Threat Initiative's Global Biological Policy and Programs (founderspledge.com) (particularly the "why we recommend them" section)
I think it could be bad if it relies too much on a particular worldview for its conclusions, causing people to unnecessarily anchor on it. It also seems like it could be bad from a certain perspective if you think it could lead to preferential treatment for longtermist causes which are easier to evaluate (eg. climate change relative to AI safety).
Nice post—I think I agree that Ben’s argument isn’t particularly sound.
Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation? If not, I imagine you could look at this through a different lens and consider a historical perspective which says something like: "One large driver of humanity's moral circle expansion/moral improvement has been technological progress, which has reduced resource competition and allowed groups to expand concern for others' suffering without undermining themselves." This seems fairly plausible to me, and would suggest that you might expect technological progress to correlate with methods involving less suffering.
I wonder if this theory might highlight points of resource contention where one might expect there to be less concern for digital suffering. Examples of this off the top of my head seem like AI arms races, early stage space colonisation, and perhaps some form of partial civilisation collapse.
From a brief glance, it does appear that Founders Pledge's work is far more analogous to typical longtermist EA grantmaking than to GiveWell's. Ie. it relies primarily on heuristics like organiser track record and higher-level reasoning about plans.
Thanks for the comment Jeff! I admit that I didn't have biosecurity consciously in mind, and I think you perhaps have an unusually clear paradigm compared to other longtermist work (eg. AI alignment/governance, space governance etc); my statement was likely too strong in any case.
However, I think there is a clear difference between what you describe and the types of feedback in eg. global health. In your case, you are acting with multiple layers of proxies for what you care about, which is very different to measuring the number of lives saved by AMF for example. I am not denying that this gives you some indication of the progress you are making, but it does become very difficult to precisely evaluate the impact of the work and make comparisons.
To establish a relationship between “How well can we identify existing pathogens in sequencing data?”, identifying future pandemics earlier, and reducing catastrophic/existential risk from pandemics, you have to make a significant number of assumptions/guesses which are far more difficult to get feedback on. To give a few examples:
- How likely is the next catastrophic pandemic to be from an existing pathogen?
- How likely is it that marginal improvements to the identification process are going to counterfactually identify a catastrophic threat?
- For the set of pathogens that could cause an existential/catastrophic threat, how much does early identification reduce the risk by?
- How much is this risk reduction in absolute terms? (Or a different angle, assuming you have an answer to the previous question: What are the chances of an existential/catastrophic pandemic this century?)
These are the types of question that you need to address to actually draw a line to anything that cashes out to a number, and my uninformed guess is that there is substantial disagreement about the answers. So while you may get clear feedback on a particular sub-question, it is very difficult to get feedback on how much this is actually pushing on the thing you care about. Perhaps you can compare projects within a narrow subfield (eg. improving identification of existing pathogens), but it is easy to then lose track of the bigger picture, which is what really matters.
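As a toy sketch of how these guesses compound, something like the BOTEC below (every number is a placeholder I've invented for illustration, not an estimate I would defend):

```python
# Toy BOTEC: the absolute risk reduction from a marginal detection improvement
# is a product of several hard-to-verify guesses. All numbers are placeholders.
p_catastrophic_pandemic = 0.01    # chance of a catastrophic pandemic this century
p_existing_pathogen = 0.3         # ...caused by an already-known pathogen
p_counterfactual_detection = 0.1  # marginal improvement counterfactually catches it
p_averted_given_detection = 0.2   # early identification actually averts catastrophe

absolute_risk_reduction = (
    p_catastrophic_pandemic
    * p_existing_pathogen
    * p_counterfactual_detection
    * p_averted_given_detection
)
print(f"Absolute risk reduction: {absolute_risk_reduction:.2e}")  # ~6.00e-05
# An order-of-magnitude disagreement on any single factor moves the final
# figure by an order of magnitude, and each factor is hard to get feedback on.
```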
To be clear, I am not at all saying that this makes the work not worth doing; it just makes me pessimistic about the utility of attempting precise quantifications.
Thanks for the detailed response! Your examples were helpful to illustrate your general thinking, and I did update slightly towards thinking some version of this could work, but I am still getting stuck on a few points:
Re. the GHD comparison: firstly, to clarify, I meant "quality of reasoning" primarily in terms of the stated theory of change rather than a much harder-to-assess general judgement. I would expect the quality of reasoning around a ToC to correlate quite strongly with expected impact. Of course this might not always cash out in actual impact, but that doesn't necessarily feel relevant for funding longtermist projects, given the inability to get feedback on actual impact. I think most longtermist work focuses on wicked problems, which makes even the progress of existing projects not necessarily a good proxy for overall success.
For your two suggested methodologies, it seems like (2) would be very useful to donors but would be very costly in expert time, and not obviously worth it to me for the marginal gains over a grantmaker's decision (although I'd be keen to try a small test run and see).
For method (1), I think that quantification is most useful for clarifying your own intuitions and allowing for some comparison within your own models. So I am certainly pro grantmakers doing their own quick evaluations, but I am not sure how useful it would be as a charity evaluator. I think you still have such irreducibly huge uncertainty bars on some of the key statements you need to get there (especially when you consider counterfactuals), that a final quantification of impact for a longtermist charity is just quite misleading for less well-informed donors.
For example, I'm not sure what a statement like "alignment being solved is 50% of what is necessary for an existential win" means exactly, but I think it does illustrate how messy this is. Does it mean solving alignment reduces AI X-risk by half this century? Increases the chance of existential security by 50% (any effect on this seems to change an evaluation by orders of magnitude)? I am guessing it means it is 50% of the total work needed to reduce AI risk to ~0, but it seems awfully unclear how to quantify this: there must be some complex distribution of overall risk reduction depending on the amount of other progress made, rather than a binary, and that feels very hard to quantify. Thus I agree with claim (a), but am skeptical of our ability to make progress on (b) in a reasonable space of time.
One thing I would be excited about is more explicit statements by longtermist charities themselves, detailing their own BOTECs along the lines of what you are talking about and justifying from their perspective why their project is worth funding. This lets you clearly understand their worldview, the assumptions they are making, and what a "win" would look like for them, so you can make your own evaluation. I think it would be great to make reasoning more explicit and allow for more comparison, probably within the AI safety community, but it feels unlikely to be useful for any but extremely well-informed donors.
What's the version/route to value of this that you are excited about? I feel quite skeptical that anything like this could work (see my answer on this post), but would be eager for people to change my mind.
I am surprised no one has directly made the obvious point that there are no concrete feedback loops in longtermist work, which means it would be very messy to try to compare. While some people have tried to get at the cost-effectiveness of X-risk reduction, it is essentially impossible to objectively evaluate how much a given charity has actually reduced X-risk. Perhaps there is something about creating clear proxies which allows for better comparison, but I am guessing there would still be major, unresolvable disagreements over which proxies would be best.
Any evaluation would have to be somewhat subjective and would smuggle in a lot of worldview assumptions. I think you really can't do better than trying to evaluate people's track records and the quality of their higher-level reasoning, which is essentially the meaning of grantmakers' statements like "just trust us".
Perhaps there could be something like a site which aggregates the opinions of relevant experts on something like the above and explains their reasoning publicly, but I doubt this is what you mean, and I am not sure it is a project worth doing.
I think this post is interesting, though I am quite unsure what my actual take is on the correctness of this updated version. I am worried about community epistemics in a world where we encourage people to defer on what the most important thing is.
It seems like there are a bunch of other plausible candidates for the best marginal value-add even if you buy AI X-risk arguments, eg. S-risks, animal welfare, digital sentience, space governance etc. I am excited about most young EAs thinking about these issues for themselves.
How much weight do you give the outside-view consideration here: you suggested a large shift in the EA community's resource allocation and then changed your mind a year later, which indicates exactly the kind of uncertainty that motivates more diverse portfolios?
I think your point that people underrate problem importance relative to personal fit on the current margin seems true though. Tangentially, my guess is that the overall EA cause portfolio (both for financial and human capital allocation) is too large.
Hmm I’d have thought that most EA orgs pay significantly better than the rest of the charity sector, and are competitive with mid-high paying private sector roles?
I’m pretty confident this is true at a junior level, but is perhaps less so for more senior roles.