I agree that synbio is an underinvested area across the gcr community. Ditto for other bio risks. GCRI is working to correct that, as is CSER.
Also, with regard to the research project on altruism, my shoot-from-the-hip intuition is that you’ll find somewhat different paths into effective altruism than into other altruistic activities. Many folks I know who are now involved in EA were convinced by philosophical arguments from people like Peter Singer. I believe Tom Ash (tog.ash@gmail.com) embedded questions about EA genesis stories in the census he and a few others conducted.
Thanks! Very helpful.
As for more general altruistic involvement, one promising body of work is on the role social groups play. Based on some of the research I did for Reducetarian message-framing, it seems like the best predictor of whether someone becomes a vegetarian is whether their friends are also vegetarian (this accounts for more of the variance than self-reported interest in animal welfare or health benefits). The same was true of the civil rights movement: the best predictor of whether students went down South to register African Americans to vote was whether they were part of a group that participated in this very activity.
Thanks again! I recall seeing data indicating that health was the #1 reason for becoming vegetarian, but I haven’t looked into this closely so I wouldn’t dispute your findings.
Buzzwords here to aid in the search: social proof, peer pressure, normative social influence, conformity, social contagion.
Here’s one question: which risks are you most concerned about?
I shy away from ranking risks, for several reasons:
The risks are often interrelated in important ways. For example, we analyzed a scenario in which a geoengineering catastrophe is caused by some other catastrophe: http://sethbaum.com/ac/2013_DoubleCatastrophe.html. This weekend Max Tegmark was discussing how AI can affect nuclear war risk if AI is used for nuclear weapons command & control. So they’re not really distinct risks.
Ultimately, what’s important to rank is not the risks themselves but the actions we can take to reduce them. We may sometimes have better opportunities to reduce smaller risks. For example, maybe some astronomers should work on asteroid risk even though it’s a relatively low-probability risk.
Also, the answer to this question varies by time period. For, say, the next 12 months, nuclear war and pandemics are probably the biggest risks. For the next 50-100 years, we need to worry about these plus a mix of environmental and technological risks.
And who do you think has the power to reduce those risks?
There’s the classic Margaret Mead quote: “Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.” There’s a lot of truth to this, and I think the EA community is well on its way to being another case in point. That is, as long as you don’t slack off! :)
That said, I keep an eye on a mix of politicians, other government officials, researchers, activists, celebrities, journalists, philanthropists, entrepreneurs, and probably a few others. They all play significant roles and it’s good to be able to work with all of them.
For what it’s worth, I became a (bad) vegan/vegetarian because at its worst, industrial animal husbandry seems to do some truly terrible things. And sorting out the provenance of animal products is just a major PITA, fraught with all sorts of uncertainty and awkward social moments, such as being the doof at the restaurant who needs to ask five different questions about where/how/when the cow got turned into the steak. It’s just easier for me to order the salad.
I mainly eat veg foods too. It reduces environmental problems, which helps on gcr/xrisk. And it’s good for livestock welfare, which is still a good thing to help on. And it lowers global food prices, which is good for global poverty. And apparently it’s also healthy.
My interest in x-risk comes from wanting to work on big/serious problems. I can’t think of a bigger one than x-risk.
Yeah, same here. I think the most difficult ethical issue with gcr/xrisk is the idea that other, smaller issues don’t matter so much, as if we don’t care about the poor or something like that. What I say here is that no, it’s precisely because we do care about the poor, and everyone else, that it’s so important to reduce these risks. Unless we avoid catastrophe, nothing else really matters; all that work on all those other issues would be for nothing.
We have an active synbio project modeling the risk and characterizing risk reduction opportunities, sponsored by the US Dept of Homeland Security: http://gcrinstitute.org/dhs-emerging-technologies-project.
Thanks!