Benjamin was a research analyst at 80,000 Hours. Before joining 80,000 Hours, he worked for the UK Government and did some economics and physics research.
Benjamin Hilton
Thanks Vasco! I’m working on a longer article on exactly this question (how pressing is nuclear risk). I’m not quite sure what I’ll end up concluding yet, but your work is a really helpful input.
Totally agree! Indeed, there’s a classic 80k article about this.
When working out your next steps, we tend to recommend working forwards from what you know, and working backwards from where you might want to end up (see our article on finding your next career steps). We also think people should explore more with their careers (see our article on career exploration).

If there are areas where we’re giving the opposite message, I’d love to know – shoot me an email or DM?
Hi Remmelt,
Thanks for sharing your concerns, both with us privately and here on the forum. These are tricky issues and we expect people to disagree about how to weigh all the considerations — so it’s really good to have open conversations about them.
Ultimately, we disagree with you that it’s net harmful to do technical safety research at AGI labs. In fact, we think it can be the best career step for some of our readers to work in labs, even in non-safety roles. That’s the core reason why we list these roles on our job board.
We argue for this position extensively in my article on the topic (and we only list roles consistent with the considerations in that article).
Some other things we’ve published on this topic in the last year or so:
We recently released a podcast episode with Nathan Labenz on some of the controversy around OpenAI, including his concerns about some of their past safety practices, whether ChatGPT’s release was good or bad, and why its mission of developing AGI may be too risky.
Benjamin
Most of our advice on actually having an impact — rather than building career capital — is highly relevant to mid-career professionals. That’s because they’re entering their third career stage (https://80000hours.org/career-guide/career-planning/#three-career-stages), i.e. actually trying to have an impact. When you’re mid-career, it’s much more important to appropriately:
Pick a problem
Find a cost-effective way of solving that problem that fits your skills
Avoid doing harm
So we hope mid-career people can get a lot out of reading our articles. I’d probably in particular suggest reading our advanced series (https://80000hours.org/advanced-series/).
By “engagement time” I mean exactly “time spent on the website”.
Thanks for this comment Tyler!
To clarify what I mean by unknown unknowns, here’s a climate-related example: We’re uncertain about the strength of various feedback loops, like how much warming could be produced by cloud feedbacks. We’d then classify “cloud feedbacks” as a known unknown. But we’re also uncertain about whether there are feedback loops we haven’t identified. Since we don’t know what these might be, these loops are unknown unknowns. As you say, the known feedback loops don’t seem likely to warm the Earth enough to cause the complete destruction of civilisation, which means that if climate change were to lead to civilisational collapse, that would probably be because of something we failed to consider.
But here’s the thing: generally we do know something about unknown unknowns.[1] In the case of these unknown feedback loops, we can place some constraints on them. For example:
They couldn’t cool the Earth below absolute zero, because that’s pretty much impossible.[2]
They almost certainly couldn’t make the Earth hotter than the Sun (at some point the Earth would have to become a fusing ball of plasma, and it isn’t massive enough to be hotter than the Sun even if it turned into a star).
In fact, we can gather a broad variety of evidence about these unknown unknowns, drawing on several different lines of evidence. These include:
The physics constraining possible feedback processes
The historical climate record (since 1800)
The paleoclimate record (millions of years into the past)
Accounting for these multiple lines of evidence is exactly what the 6th Assessment Report attempts to do when calculating climate sensitivity (how much Earth’s surface will cool or warm after a specified factor causes a change in its climate system):[3]
In AR6 [the 6th Assessment report], the assessments of ECS [equilibrium climate sensitivity] and TCR [transient climate response] are made based on multiple lines of evidence, with ESMs [earth system models] representing only one of several sources of information. The constraints on these climate metrics are based on radiative forcing and climate feedbacks assessed from process understanding (Section 7.5.1), climate change and variability seen within the instrumental record (Section 7.5.2), paleoclimate evidence (Section 7.5.3), emergent constraints (Section 7.5.4), and a synthesis of all lines of evidence (Section 7.5.5). In AR5 [the 5th assessment report], these lines of evidence were not explicitly combined in the assessment of climate sensitivity, but as demonstrated by Sherwood et al. (2020) their combination narrows the uncertainty ranges of ECS compared to that assessed in AR5.
That is, as I mentioned in the main post “the IPCC’s Sixth Assessment Report… attempts to account for structural uncertainty and unknown unknowns. Roughly, they find it’s unlikely that all the various lines of evidence are biased in just one direction — for every consideration that could increase warming, there are also considerations that could decrease it.”
As a result, even when accounting for unknown unknowns, it looks extremely unlikely that anthropogenic warming could heat the earth enough to cause complete civilisational collapse (for a discussion of how hot that would need to be, see the first section of the main post!).
If you’re interested in diving into this further, I’d suggest taking a look at the original paper “An Assessment of Earth’s Climate Sensitivity Using Multiple Lines of Evidence” by Sherwood et al., or Why low-end ‘climate sensitivity’ can now be ruled out, a popular summary by the paper’s authors.
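And if it helps to see why combining independent lines of evidence narrows the range, here’s a toy Bayesian sketch (my own illustration with made-up numbers, not the IPCC’s or Sherwood et al.’s actual method):

```python
import numpy as np

# Toy illustration: combining independent lines of evidence narrows the
# range on equilibrium climate sensitivity (ECS). All numbers are made up;
# see Sherwood et al. (2020) for the real analysis.
ecs = np.linspace(0.0, 10.0, 1001)  # candidate ECS values (deg C per doubling of CO2)

def likelihood(x, mean, sd):
    """Unnormalised Gaussian likelihood for one line of evidence."""
    return np.exp(-0.5 * ((x - mean) / sd) ** 2)

# Three hypothetical, individually broad lines of evidence:
process_understanding = likelihood(ecs, mean=3.0, sd=1.5)
instrumental_record = likelihood(ecs, mean=2.5, sd=1.5)
paleoclimate = likelihood(ecs, mean=3.5, sd=1.5)

# Flat prior; combine by multiplying the likelihoods (treating them as independent).
posterior = process_understanding * instrumental_record * paleoclimate
posterior = posterior / posterior.sum()  # normalise on the grid

def central_90(weights, x):
    """Rough central 90% interval of a distribution given as grid weights."""
    cdf = np.cumsum(weights) / np.sum(weights)
    return x[np.searchsorted(cdf, 0.05)], x[np.searchsorted(cdf, 0.95)]

print("90% range, one line of evidence:", central_90(process_understanding, ecs))
print("90% range, all lines combined:  ", central_90(posterior, ecs))
# The combined range is much narrower: for it to be badly wrong, every
# independent line of evidence would have to be biased in the same direction.
```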
[1] It’s of course true that there are some kinds of unknown unknowns that are impossible to account for — that is, things about which we have no information. But these are rarely particularly important unknown unknowns, in part because of that lack of information: in order to have no information about something, we necessarily can’t have any evidence for its existence, so from the perspective of Occam’s razor, they’re inherently unlikely.
[2] At least, in macroscopic systems. You can have negative absolute temperatures in systems with a population inversion (like a laser while it’s lasing), although these systems are generally considered thermodynamically hotter than positive-temperature systems (because heat flows from the negative-temperature system to the positive-temperature system).
[3] From the introduction to section 7.5 of the Working Group I contribution to the Sixth Assessment Report (p. 993).
I don’t currently have a confident view on this beyond “We’re really not sure. It seems like OpenAI, Google DeepMind, and Anthropic are currently taking existential risk more seriously than other labs.”
But I agree that if we could reach a confident position here (or even just a confident list of considerations), that would be useful for people — so thanks, this is a helpful suggestion!
Thanks, this is an interesting heuristic, but I don’t think I find it as valuable as you do.
First, while I do think it’d probably be harmful in expectation to work at leading oil companies / at the Manhattan project, I’m not confident in that view — I just haven’t thought about this very much.
Second, I think that AI labs are in a pretty different reference class from oil companies and the development of nuclear weapons.
Why? Roughly:
Whether, in a broad sense, capabilities advances are good or bad is pretty unclear. (Note that some capabilities advances in particular areas are very clearly harmful.) In comparison, I do think that, in a broad sense, the development of nuclear weapons and the release of greenhouse gases are harmful.
Unlike with oil companies and the Manhattan Project, I think that there’s a good chance that a leading, careful AI project could be a huge force for good, substantially reducing existential risk — and so it seems weird not to consider working at what could be one of the world’s most (positively) impactful organisations. Of course, you should also consider the chance that the organisation could be one of the world’s most negatively impactful organisations.
Because these issues are difficult and we don’t think we have all the answers, I also published a range of opinions about a related question in our anonymous advice series. Some of the respondents took a very sceptical view of any work that advances capabilities, but others disagreed.
Hi Yonatan,
I think that for many people (but not everyone) and for many roles they might work in (but not all roles), this is a reasonable plan.
Most importantly, I think it’s true that working at a top AI lab as an engineer is one of the best ways to build technical skills (see the section above on “it’s often excellent career capital”).
I’m more sceptical about the ability to push towards safe decisions (see the section above on “you may be able to help labs reduce risks”).
The right answer here depends a lot on the specific role. I think it’s important to remember that not all AI capabilities work is necessarily harmful (see the section above on “you might advance AI capabilities, which could be (really) harmful”), and that top AI labs could be some of the most positive-impact organisations in the world (see the section above on “labs could be a huge force for good—or harm”). On the other hand, there are roles that seem harmful to me (see “how can you mitigate the downsides of this option”).
I’m not sure of the relevance of “having a good understanding of how to do alignment” to your question. I’d guess that lots of knowing “how to do alignment” is being very good at ML engineering or ML research in general, and that working at a top AI lab is one of the best ways to learn those skills.
The Portuguese version at 80000horas.com.br is a project of Altruísmo Eficaz Brasil. We often give people permission to translate our content when they ask—but as to when, that would be up to Altruísmo Eficaz Brasil! Sorry I can’t give you a more concrete answer.
(Personal views, not representing 80k)
My basic answer is “yes”.
Longer version:
I think this depends what you mean.
By “longtermism”, I mean the idea that improving the long-run future is a key moral priority. By “longtermist” I mean someone who personally identifies with belief in longtermism.
I think x-risks are the most pressing problems from a cause-neutral perspective (although I’m not confident about this; there are a number of plausible alternatives, including factory farming).
I think longtermism is also (approximately) true from a cause-neutral perspective (I’m also not confident about this).
The implication between these two beliefs could go either way, depending on how you structure the argument. You could first argue that x-risks are pressing, which in turn implies that protecting the long-run future is a priority. Or you could argue the other way, that improving the long-run future is important and reducing x-risks are a tractable way of doing so.
Most importantly though, I think you can believe that x-risks are the most pressing issue, and indeed believe that improving the long-run future is a key moral priority of our time, without identifying as a “longtermist”.
Indeed, I think that there’s sufficient objectivity in the normative claims underlying the pressing-ness of x-risks that, according to my current meta-ethical and empirical beliefs, I just believe it’s true that x-risks are the most pressing problems (again, I’m not hugely confident in this claim). The truth of this statement is independent of the identity of the actor, hence my answer “yes”.
Caveat:
If, by your question, you mean “Do you think working on x-risks is the best thing to do for non-longtermists?”, the answer is “sometimes, but often no”. This is because a problem being pressing on average doesn’t imply that all work on that problem is equally valuable: personal fit and the choice of intervention both play an important role. I’d guess that it would be best for someone with lots of experience working on a particularly cost-effective animal welfare intervention to keep working on that intervention rather than move into x-risks.
Thank you so much for this feedback! I’m sorry to hear our messaging has been discouraging. I want to be very clear that I think it’s harmful to discourage people from working on such important issues, and would like to minimise the extent to which we do that.
I wrote the newsletter you’re referencing, so I particularly wanted to reply to this. I also wrote the 80,000 Hours article on climate change, explaining our view that it’s less pressing than our highest priority areas.
I don’t consider myself fundamentally a longtermist. Instead, I try my best to be impartial and cause-neutral. I try to find the ways in which I can best help others – including others in future generations, and animals, as I think they are moral patients.
Here are some specifically relevant things that I currently believe:
Existential risks are the most pressing problems we currently face (where by “pressing” I mean some combination of importance (determined in part by the expected number of individuals that could be affected), tractability, and neglectedness).
Climate change is less pressing than some other existential risks.
Cost-effectiveness is heavy-tailed. By trying to find the very best things to work on, you can substantially increase your impact.
It’s tractable to convince people to work on very important issues. It’s similarly tractable to convince people to work on existential risks as on other very important issues.
Therefore, it’s good to convince people to work on very important problems, but even better to convince people to work on existential risks.
I wrote that existential risks are the biggest problems we face, and that climate change is less pressing than other existential risks, because I believe both of these things are true and that communicating them is a highly cost-effective way to do good.
I don’t think everyone should work on existential risk reduction – personal fit is really important, and if so many people worked on it that it became far less neglected, I’d think it was less useful for additional people to work on it at the margin. Partly for these reasons, 80,000 Hours has generally promoted a range of areas – and has some positive evidence of people being convinced to work on poverty reduction and animal welfare as a result of 80,000 Hours.
On the newsletter audience
The 80,000 Hours newsletter is sent to a large audience who are largely unfamiliar with effective altruism. So that’s why the newsletter spoke about the importance of poverty reduction and animal welfare. For example, I wrote that “my best guess is that the negative effects of factory farming alone make the world worse than it’s ever been.” It would be brilliant if that newsletter convinced people to work on poverty reduction and animal welfare.
The newsletter also explained that, as far as we can tell, there are even bigger problems than these two.
I think it’s unlikely that the 80,000 Hours newsletter discouraged work on poverty reduction or animal welfare on net, primarily because the vast majority (>99%) of newsletter subscribers aren’t working on any of poverty reduction, animal welfare or existential risk reduction.
If it did convince someone with equally good personal fit to work on existential risk reduction when they would otherwise have worked on poverty reduction or animal welfare, that would be worse than convincing someone who wouldn’t otherwise have done anything very useful. However, since I think existential risks are the most pressing issues, I don’t think it’d be doing net expected harm.
On whether I / 80,000 Hours value(s) work on non-existential threats
We value people working on animal welfare and poverty reduction (as well as other causes that aren’t our top priorities) a lot. We just don’t think those issues are the very most pressing problems in the world.
For example, where we list factory farming and global health on the problem profiles page you cite, we say:
We’d also love to see more people working on the following issues, even though given our worldview and our understanding of the individual issues, we’d guess many of our readers could do even more good by focusing on the problems listed above.
Factory farming and global health are common focuses in the effective altruism community. These are important issues on which we could make a lot more progress.
It’s genuinely really difficult to send the message that something seems more pressing than other things, without implying that the other things are not important or that we wouldn’t want to see more people working on them. My colleague Arden, who wrote those two paragraphs above, also feels this way, and had this in mind when she wrote them.
On whether I / 80,000 Hours should defer more
One thing to consider is whether, given that many people disagree with 80,000 Hours on the relative importance of existential risks, we should lower our ranking.
I agree with this idea. Our ranking is post-deferral – we still think that existential risks seem more pressing than other issues, even after deferral. We have had conversations within the last year about whether, for example, factory farming should be included in our list of the top problems, and decided against making that change (for now), based on our assessment of its neglectedness and relative importance.
I also think that saying what we believe (all things considered) to be true is a good heuristic for deciding what to say. This is what the newsletter and problem profiles page try to do.
My personal current guess is that existential risk reduction is something like 100x more important than factory farming, and is also more neglected (although less tractable).
Because of our fundamental cause-neutrality, this is something that could (and hopefully will) change, for example if existential risks become less neglected or if the magnitude of these risks decreases.
Finally, on climate change
As I mentioned above, I think climate change is likely less important than other existential risks. Saying climate change is less pressing than the world’s literal biggest problem is a far cry from “unimportant” – I think that climate change is a hugely important problem. It just seems less likely to cause an existential catastrophe, and is far less neglected, than other possible risks (like nuclear-, bio-, or AI-related risks). My article on climate change defends this at length, and I’ve also responded to critiques of that article on the forum, e.g. here.
There are important reasons to think that the shift in the EA community’s timelines is within the measurement error of these surveys, which makes it less noteworthy.
(Like, say you put +/- 10 years and +/- 10 percentage points on all these answers. There are loads of reasons why you wouldn’t actually assess the uncertainty like this (e.g. probabilities can’t go below 0 or above 1), but it helps to get a feel for the error involved. You’d then get something like the following (rough sketch below):
10%-30% chance of TAI by 2026-2046
40%-60% by 2050-2070
and 75%-95% by 2100
Then many EA timelines, and shifts in EA timelines, fall within those errors.)
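Here’s the kind of back-of-the-envelope widening I mean (purely illustrative; the central values are just the midpoints of the ranges above):

```python
# Quick sketch of the +/- 10 years, +/- 10 percentage point error bars above.
# Purely illustrative -- real uncertainty wouldn't look like symmetric boxes.
point_estimates = [  # (probability of TAI, by year): midpoints of the ranges above
    (0.20, 2036),
    (0.50, 2060),
    (0.85, 2100),
]

prob_margin, year_margin = 0.10, 10

for prob, year in point_estimates:
    lo_p = max(0.0, prob - prob_margin)  # probabilities can't go below 0...
    hi_p = min(1.0, prob + prob_margin)  # ...or above 1
    print(f"{lo_p:.0%}-{hi_p:.0%} chance of TAI by {year - year_margin}-{year + year_margin}")
```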
Reasons why these surveys have huge error
1. Low response rates.
The response rates were really quite low.
2. Low response rates + selection biases + not knowing the direction of those biases
The surveys plausibly had a bunch of selection biases in various directions.
Combined with the low response rates, this means the survey samples probably aren’t representative of the wider population of researchers – and we’re much less certain in which direction they’re biased.
Quoting me:
For example, you might think researchers who go to the top AI conferences are more likely to be optimistic about AI, because they have been selected to think that AI research is doing good. Alternatively, you might think that researchers who are already concerned about AI are more likely to respond to a survey asking about these concerns
3. Other problems, like inconsistent answers in the survey itself
AI Impacts wrote some interesting caveats here, including:
Asking people about specific jobs massively changes HLMI forecasts. When we asked some people when AI would be able to do several specific human occupations, and then all human occupations (presumably a subset of all tasks), they gave very much later timelines than when we just asked about HLMI straight out. For people asked to give probabilities for certain years, the difference was a factor of a thousand twenty years out! (10% vs. 0.01%) For people asked to give years for certain probabilities, the normal way of asking put 50% chance 40 years out, while the ‘occupations framing’ put it 90 years out. (These are all based on straightforward medians, not the complicated stuff in the paper.)
People consistently give later forecasts if you ask them for the probability in N years instead of the year that the probability is M. We saw this in the straightforward HLMI question, and most of the tasks and occupations, and also in most of these things when we tested them on mturk people earlier. For HLMI for instance, if you ask when there will be a 50% chance of HLMI you get a median answer of 40 years, yet if you ask what the probability of HLMI is in 40 years, you get a median answer of 30%.
The 80k podcast on the 2016 survey goes into this too.
Thanks for this! Looks like we actually roughly agree overall :)
Thanks for this thoughtful post! I think I stand by my 1 in 10,000 estimate despite this.
A few short reasons:
Broad things: First, these scenarios and scenarios like them are highly conjunctive (many rare things need to happen), which makes any one scenario unlikely (although of course there may be many such scenarios). Second, I think these and similar scenarios are reason to think there may be a large catastrophe, but large and existential are a long way apart. (I discuss this a bit here but don’t come to a strong overall conclusion. More work on this would be great.)
On inducing nuclear war: My estimates of the existential risk from nuclear war are 1 in 10,000 for the direct risk and 1 in 1,000 for the indirect risk. The chance that climate change causes a nuclear war (weighted by how much more likely climate change made the war, as opposed to, say, geopolitical tensions unrelated to climate change) is subjective and difficult to judge, but seems to me probably much less than 10%. If it’s, say, 1%, that gives something like 1 in 100,000 of indirect existential risk from climate change via nuclear war. This seems a bit small, but it’s consistent with my overall 1 in 10,000 estimate. Note this includes climate change inducing nuclear war through channels other than crop failure.
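To spell out that arithmetic with the rough numbers above:

\[
\underbrace{\tfrac{1}{1{,}000}}_{\text{indirect x-risk from nuclear war}} \times \underbrace{1\%}_{\text{fraction attributable to climate change}} = \tfrac{1}{100{,}000} < \tfrac{1}{10{,}000}
\]

That is, the climate-via-nuclear-war pathway fits comfortably within the overall 1 in 10,000 estimate.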
On runaway warming: My understanding is that the main limit here is how much fossil fuel it’s possible to recover from the ground—see more here. Even taking into account uncertainty and huge model error, it seems highly unlikely that we’ll end up with runaway warming that itself leads to extinction. I’d also add that lots of the reduction in risk occurs because climate change is a gradual catastrophe (unlike a pandemic or nuclear war), which means that, for example, we may find other emissions-free technologies (e.g. nuclear fusion) or get over our fear of nuclear fission, etc., reducing the risk of resource depletion. Relatedly, unless there is extremely fast runaway warming over only a few years, the gradual nature of climate change increases the chances of successful adaptation to a warmer environment. (Again, I mean adaptation sufficient to prevent an existential catastrophe—a large catastrophe that isn’t quite existential seems far, far more likely.)
On coastal cities: I’d guess the existential risk from war breaking out between great powers is also around 1 in 10,000 (within an order of magnitude or so), although I’ve thought about this less. So again, while cyanobacteria blooms sound like a not-impossible way in which climate change could lead to war (personally I’d be more worried about flooding and migration crises in South Asia), I think this is all consistent with my 1 in 10,000 estimate.
If it helps at all, my subjective estimate of the risk from AI is probably around 1%, and approximately none of that comes from worrying about killer nanobots. I wrote about what an AI-caused existential catastrophe might actually look like here.
Hi! Wanted to follow up as the author of the 80k software engineering career review, as I don’t think this gives an accurate impression. A few things to say:
I try to have unusually high standards for explaining why I believe the things I write, so I really appreciate people pushing on issues like this.
At the time, when you responded to <the Anthropic person>, you said “I think <the Anthropic person> is probably right” (although you added “I don’t think it’s a good idea to take this sort of claim on trust for important career prioritisation research”).
When I leave claims like this unsourced, it’s usually because I (and my editors) think they’re fairly weak claims, and/or they lack a clear source to reference. That is, the claim is effectively a piece of research based on general knowledge (e.g. I wouldn’t source the claim “Biden is the President of the USA”) and/or interviews with a range of experts, and the claim is weak or unimportant enough not to investigate further. (FWIW I think it’s likely I should have prioritised writing a longer footnote on why I believe this claim.)
The closest data comes from the three surveys of NeurIPS researchers, but these are imperfect. They ask how long it will take until there is “human-level machine intelligence”. The median expert asked thought there was around a 1 in 4 chance of this by 2036. Of course, it’s not clear that HLMI and transformative AI are the same thing, or that expecting HLMI to be developed soon necessarily means expecting it to be made by scaling and adapting existing ML methods. In addition, no survey data pre-dates 2016, so it’s hard to say that these views have changed based solely on survey data. (I’ve written more about these surveys and their limitations here, with lots of detail in footnotes; and I discuss the timelines parts of those surveys in the second paragraph here.)
As a result, when I made this claim I was relying on three things. First, that there are likely correlations that make the survey data relevant (i.e., that many people answering the survey think that HLMI will be relatively similar to or cause transformative AI, and that many people answering the survey think that if HLMI is developed soon that suggests it will be ML-based). Second, that people did not think that ML could produce HLMI in the past (e.g. because other approaches like symbolic AI were still being worked on, because texts like Superintelligence do not focus on ML and this was not widely remarked upon at the time despite that book’s popularity, etc.). Third, that people in the AI and ML fields who I spoke to had a reasonable idea of what other experts used to think and how that has changed (note I spoke to many more people than the one person who responded to you in the comments on my piece)!
It’s true that there may be selection bias on this third point. I’m definitely concerned about selection bias for shorter timelines in general in the community, and plan to publish something about this at some point. But in general I think that the best way, as an outsider, to understand what prevailing opinions are in a field, is to talk to people in that field – rather than relying on your own ability to figure out trends across many papers, many of which are difficult to evaluate, many of which may not replicate. I also think that asking about what others in the field think, rather than what the people you’re talking to think, is a decent (if imperfect) way of dealing with that bias.
Overall, I thought the claim I made was weak enough (e.g. “many experts” not “most experts” or “all experts”) that I didn’t feel the need to evaluate this further.
It’s likely, given you’ve raised this, that I should have put this all in a footnote. The only reason I didn’t is that I try to prioritise, and I thought this claim was weak enough to not need much substantiation. I may go back and change that now (depending on how I prioritise this against other work).
This looks really cool, thanks Tom!
I haven’t read the report in full (just the short summary) - but I have some initial scepticism, and I’d love answers to some of the following questions, so I can figure out how much evidence this report provides on takeoff speeds. I’ve put the questions roughly in order of subjective importance to my ability to update:

Did you consider Baumol effects, the possibility of technological deflation, and the possibility of technological unemployment, and how they affect the profit incentive as tasks are increasingly automated? [My guess is that the effect of all of these is to slow takeoff down, so I’d guess a report that uses simpler models will be noticeably overestimating takeoff speeds.]
How much does this rely on the accuracy of semi-endogenous growth models? Does this model rely on exponential population growth? [I’m asking because as far as I can tell, work relying on semi-endogenous growth models should be pretty weak evidence. First, the “semi” in semi-endogenous growth usually refers to exogenous exponential population growth, which seems unlikely to be a valid assumption. Second, endogenous growth theory has very limited empirical evidence in favour of it (e.g. 1, 2) and I have the impression that this is true for semi-endogenous growth models too. This wouldn’t necessarily be a problem in other fields, but in general I think that economic models with little empirical evidence behind them provide only very weak evidence overall.]
In section 8, the only uncertainty pointing in favour of fast takeoff is “there might be a discontinuous jump in AI capabilities”. Does this mean that, if you don’t think a discontinuous jump in AI capabilities is likely, you should expect slower take-off than your model suggests? How substantial is this effect?
How did you model the AI production function? Relatedly, how did you model constraints like energy costs, data costs, semiconductor costs, silicon costs etc.? [My thoughts: looks like you roughly used a task-based CES model, which seems like a decent choice to me, knowing not much about this! But I’d be curious about the extent to which using this changed your results from Cobb-Douglas; I’ve sketched the kind of functional form I mean below, after these questions.]
I’m vaguely worried that the report proves too much, in that I’d guess the basic automation of the industrial revolution also automated maybe 70%+ of tasks (weighted by pre-industrial-revolution GDP). (Of course, back then automation itself generally wasn’t automated—so I’d be curious about your thoughts on the extent to which this criticism applies, at least to the human investment parts of the report.)
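On the production function question above: the kind of task-based CES aggregator I have in mind is just the standard textbook form (not necessarily the exact specification the report uses), something like

\[
Y = \left( \int_0^1 y(i)^{\frac{\sigma - 1}{\sigma}} \, di \right)^{\frac{\sigma}{\sigma - 1}},
\]

where \(y(i)\) is output on task \(i\) and \(\sigma\) is the elasticity of substitution between tasks. Cobb–Douglas is the limiting case \(\sigma \to 1\); with \(\sigma < 1\), aggregate output is bottlenecked by the hardest-to-automate tasks, which is one reason I’d expect this modelling choice to matter for takeoff speeds.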
That’s all the thoughts that jumped into my head when I read the summary and skimmed the report—sorry if they’d all be obvious had I read it more thoroughly! Again, super excited to see models with this level of detail, thanks so much!
I agree with (a). I disagree that (b) is true! And as a result I disagree that existing CEAs give you an accurate signpost.
Why is (b) untrue? Well, we do have some information about the future, so it seems extremely unlikely that you won’t be able to have any indication as to the sign of your actions, if you do (a) reasonably well.
Again, I don’t purely mean this from an extreme longtermist perspective (although I would certainly be interested in longtermist analyses given my personal ethics). For example, simply thinking about population changes in the above report would be one way to move in this direction. Other possibilities include thinking about the effects of GHW interventions on long-term trajectories, like growth in developing countries (and that these effects may dominate short-term effects like DALYs averted for the very best interventions). I haven’t thought much about what other things you’d want to measure to make these estimates, but I would love to see someone try, and it seems pretty crucial if you’re going to be doing accurate CEAs.
Sure, happy to chat about this!
Roughly, I think that you’re currently not really calculating cost-effectiveness. That is, whether you’re giving out malaria nets or preventing nuclear war, almost all of the effects of your actions will be on people in the future.
To clarify, by “future” I don’t necessarily mean “long run future”. Where you put that bar is a fascinating question. But focusing on current lives lost seems to approximately ignore most of the (positive or negative) value, so I expect your estimates to not be capturing much about what matters.
(You’ve probably seen this talk by Greaves, but flagging it in case you haven’t! Sam isn’t a huge fan, I think in part because Greaves reinvents a bunch of stuff that non-philosophers have already thought a bunch about, but I think it’s a good intro to the problem overall anyway.)
Thank you so much for spotting this! It seems like both your points are correct.
To explain where these mistakes came from:
I think visceral gout can be caused by infectious disease, but it can also be caused by other factors, such as poor nutrition (see, e.g., this post from a hen breeding company), so it’s not correct to classify it as an infectious disease. The referenced article in footnote 13 investigated the frequency of various diseases in chicken farms in Bangladesh, and found that visceral gout was the most common identified disease (but they correctly do not say it was the most common identified infectious disease).
Vomiting is a symptom of coccidiosis in other animals (e.g. dogs) but, as you say, not in chickens, since chickens cannot vomit. I must have looked up the symptoms independently from the frequency data.
I couldn’t find any previous collation of evidence about how animals are treated that I trusted, so this took a lot of research. (Most claims on this topic on the internet are unreferenced claims on websites with a clear agenda: either animal advocacy or the farming industry.) As a result, I don’t doubt there are further mistakes in that section, but hopefully none that detract from the underlying point.
(I no longer work at 80,000 Hours but I’ll ask them to fix this on the website.)