I would think protein substitutes would be a lot cheaper than farmed mice; is that correct? If so, it seems they could substitute for much of the market outside the “spectacle of eating live animals” niche.
Thanks very much to CEA for doing the recording, editing, and transcription! Also thanks to CEA for the EA grant that has supported some of this work. Mitigating the impact of catastrophes that would disrupt electricity/industry, such as a solar storm, a high-altitude electromagnetic pulse, or a narrow-AI computer virus, is a parallel effort within The Alliance to Feed the Earth in Disasters (ALLFED) that I did not get a chance to talk about in my 80,000 Hours interview. The Guesstimate model I referred to in the workshop can be found here (blank to avoid anchoring). The three papers on losing electricity/industry are feeding everyone with the loss of industry, providing nonfood needs with the loss of industry, and feeding everyone losing industry and half of sun. We are still working on the paper on the cost-effectiveness, from the long-term-future perspective, of preparing for these catastrophes, so input is welcome.
There is also the EA MOOC. There does not appear to be a counter; does anyone know how many completions of this course there have been?
You can always give more than the Giving What We Can minimum of 10%, but it is true that the pledge is aimed at pre-retirement income. Bolder Giving encourages committing 50% of lump sums or income, so that might be more appropriate for you. It does not require effectiveness, though there are a number of EAs on the site.
I should have said develop safe AI or colonize the galaxy, because I think either one would dramatically reduce the base rate of existential risk. The way I think about the value of nuclear war mitigation being affected by AI timelines is that if AI comes soon, there are fewer years during which we are actually threatened by nuclear war. This is one reason I only looked out about 20 years in my cost-effectiveness analysis of alternate foods versus AI. I think these risks could be correlated, because one mechanism of far-future impact of nuclear war is worse values ending up in AI (if nuclear war does not collapse civilization).
I think the argument was written up formally on the forum, but I’m not finding it. I think it goes like this: if the chance of existential risk is constant at 0.1% per year, the expected duration of humanity is 1,000 years (the reciprocal of the annual risk). If you decrease the risk to 0.05% per year, the expected duration is 2,000 years, so you have only added a millennium. However, if you get safe AI and colonize the galaxy, you might get billions of years. But I would argue that if you reduce the chance that nuclear war destroys civilization (from which we might not recover), then you increase the chances of getting safe AI and colonization, and therefore you can attribute overwhelming value to mitigating nuclear war.
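The arithmetic behind those figures: with a constant annual risk p, the expected time to catastrophe is 1/p years. A minimal sketch in Python:

```python
# Expected duration of humanity under a constant annual existential risk.
# With a constant hazard rate p per year, expected survival time is 1/p years.

def expected_duration(annual_risk):
    """Expected years until an existential catastrophe at constant annual risk."""
    return 1.0 / annual_risk

baseline = expected_duration(0.001)   # 0.1%/year  -> 1,000 years
halved = expected_duration(0.0005)    # 0.05%/year -> 2,000 years
print(f"Baseline: {baseline:,.0f} years; halved risk: {halved:,.0f} years")
print(f"Years gained by halving the risk: {halved - baseline:,.0f}")
```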
The issue is that there are many sources of uncertainty in nuclear winter. When I developed a probabilistic model taking all these sources into account, I did get a median impact of a 2–3 °C temperature reduction (though I was also giving significant probability weight to industrial and counterforce strikes). However, I still got a ~20% probability of the collapse of agriculture.
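To illustrate the shape of such a model (a toy sketch, not my actual model: the distributions, the scaling constant, and the 6 °C “collapse” threshold below are all hypothetical), multiplying several uncertain factors produces a distribution whose median can be modest even while the tail probability of agricultural collapse is substantial:

```python
import random

# Toy Monte Carlo: multiply uncertain factors (soot lofted, climate response,
# etc.) to get a distribution of global temperature reduction. The lognormal
# parameters are made up, chosen only to show how a modest median can coexist
# with a fat tail of severe outcomes.
random.seed(0)
N = 100_000
samples = []
for _ in range(N):
    soot = random.lognormvariate(0, 0.8)         # relative soot injection
    sensitivity = random.lognormvariate(0, 0.5)  # relative climate response
    cooling = 2.5 * soot * sensitivity           # deg C reduction (toy scaling)
    samples.append(cooling)

samples.sort()
median = samples[N // 2]
p_collapse = sum(c > 6 for c in samples) / N     # hypothetical collapse threshold
print(f"Median cooling: {median:.1f} C")
print(f"P(cooling > 6 C, a toy 'agricultural collapse' threshold): {p_collapse:.0%}")
```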
Becoming an accountant typically requires a four-year degree; vocational training is generally a two-year degree or less.
Can it help enable Giving Tuesday matching despite many small donations throughout the year?
I generally agree. The question is whether we should classify something as an X-risk by its impact alone (if it happens) or by impact × probability. If the latter, and if comets count as an X-risk, then we should also call extreme climate change (and definitely nuclear war) an X-risk.
I think it is useful to discuss what qualifies as an X-risk. Asteroid/comet impact is widely regarded as an X-risk, but a big one that could cause human extinction might only have a one-in-a-million probability in the next 100 years. That is a 0.0001% expected reduction in humanity’s long-term value. However, if you believe 80,000 Hours that nuclear war might have a ~3% chance in the next 100 years and could reduce the long-term future potential of humanity by ~30%, that is a ~1% expected reduction in the future of humanity this century. So practically speaking, it is much more of an X-risk than asteroids are. Similarly, if you believe 80,000 Hours that extreme climate change has a ~3% chance in the next 100 years and would reduce the long-run potential by ~20%, that is a 0.6% expected reduction in the long-term future of humanity. This again is much larger than for asteroids. I personally think the nuclear risk is higher and the climate risk is lower than these numbers. It is true that some of the long-term impact could be classified as trajectory changes rather than traditional X-risk, but I think most people are interested in trajectory changes as well.
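Spelling out that arithmetic (just the multiplications implied by the numbers above):

```python
# Expected long-term loss = P(catastrophe this century) x fraction of
# humanity's long-term potential lost if the catastrophe happens.
risks = {
    "asteroid/comet": (1e-6, 1.0),           # ~1 in a million, full extinction
    "nuclear war": (0.03, 0.30),             # 80,000 Hours-style figures
    "extreme climate change": (0.03, 0.20),  # 80,000 Hours-style figures
}
for name, (prob, loss) in risks.items():
    print(f"{name}: {prob * loss:.4%} expected reduction in long-term value")
# asteroid/comet: 0.0001%; nuclear war: 0.9%; extreme climate change: 0.6%
```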
I’m not sure if this is answering the intent of the question, but one could refer undergraduate/graduate biologists to the biology section of Effective Thesis.
But remember, X-risk is not just extinction: there are many routes to long-term future impacts from nuclear war, some of which are mentioned here.
I think EAs should look more into reducing trade barriers, both because of the global poverty benefits and because I think countries are less likely to go to (nuclear) war if they are economically dependent on each other.
The climate impact of cattle feed could be reduced if cattle ate agricultural residues (as they used to, and still often do in less-developed countries). I don’t think grass-fed beef is really better, because conventional cattle are grass-fed for part of their lives, so having some cattle completely grass-fed means the remainder would be a smaller percentage grass-fed. It also looks like a small amount of seaweed in feed reduces the methane from cattle.
Thanks! However, neurons in smaller organisms tend to be smaller. So I think the aggregate brain mass of humans would be similar to that of land arthropods and nematodes. Fish are larger organisms, so it does look like the aggregate brain mass of fish would be significantly larger than that of humans. There is the question of whether a larger neuron could provide more value or disvalue than a smaller neuron. If the value per neuron is the same, then neuron count would be the relevant number.
Another way of guarding against being demoralized is comparing one’s absolute impact to that of people outside of EA. For instance, you could take your metric of impact, be it saving lives, improving human welfare, reducing animal suffering, or improving the long-term future, and compare the effectiveness of your donation to that of the average donation. With the median EA donation of $740, if you thought it were 100 times more effective than the average donation, it would correspond roughly to the typical donation of someone at the 99.9th percentile of US income. And if you thought it were 10,000 times more effective, you could compete with billionaires!
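To make that comparison concrete (the effectiveness multipliers are the hypothetical ones above):

```python
# Compare an EA donation's effectiveness-adjusted value to typical donations.
median_ea_donation = 740          # USD, median EA donation cited above
for multiplier in (100, 10_000):  # hypothetical effectiveness multipliers
    equivalent = median_ea_donation * multiplier
    print(f"{multiplier:>6}x as effective ~= a typical ${equivalent:,} donation")
# 100x ~= $74,000 (roughly a top-income-percentile donor's typical donation);
# 10,000x ~= $7,400,000 (billionaire-philanthropy territory).
```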
Perhaps for some, but I think most people working on X-risk are primarily altruistically motivated. And for them, it is more important to stay alive in a catastrophe so they can help more. A less extreme version of this is living outside of metro areas to reduce the chance of being killed in a nuclear war.
What about an EA hotel in Australia/New Zealand? Safer from nuclear war and pandemics...