We were pretty close to carrying out an Asteroid Redirect Mission (ARM) too; it was only cancelled in the last few years. It targeted a small asteroid (~a few metres across), but something like it could certainly happen sooner than most people suspect.
Neat, I’ll have to get in touch, thanks.
I guess that would indeed make them long-term problems, but my reading suggests they are catastrophic risks rather than existential risks, in the sense that they don't seem to have much likelihood (relative to other X-risks) of eliminating all of humanity.
My impression is that people do over-estimate the cost of ‘not-eating-meat’ or veganism by quite a bit (at least for most people in most situations). I’ve tried to come up with a way to quantify this. I might need to flesh it out a bit more but here it is.
So suppose you are trying to quantify what you think the sacrifice of being vegan is, either relative to being vegetarian or to an average diet. If I were asked the minimum amount of money I would have to have received to be vegan rather than non-vegan for the last 5 years, assuming ZERO ethical impact of any kind, it would probably be $500 (with hindsight; cue the standard list of possible biases). This doesn't seem very high to me. My experience has been that most people who have become vegan say they vastly overestimated the sacrifice involved.
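To make the quantification concrete, here is a minimal sketch of the implied per-day and per-meal "sacrifice" using the $500-over-5-years figure from above (the 3-meals-per-day assumption is mine, added for illustration):

```python
# Willingness-to-accept calculation for the sacrifice of being vegan,
# using the $500 / 5-year figure from the comment.
# Assumption (not in the original): 3 meals per day.
YEARS = 5
MEALS_PER_DAY = 3
TOTAL_COMPENSATION = 500  # minimum $ to go vegan for 5 years, zero ethical impact

total_days = YEARS * 365
total_meals = total_days * MEALS_PER_DAY

cost_per_day = TOTAL_COMPENSATION / total_days    # ~$0.27/day
cost_per_meal = TOTAL_COMPENSATION / total_meals  # ~$0.09/meal

print(f"Implied sacrifice: ${cost_per_day:.2f} per day, ${cost_per_meal:.3f} per meal")
```

On these numbers the self-reported sacrifice works out to well under a dollar a day, which is the sense in which the figure "doesn't seem very high".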
If one thought there were diminishing returns on the sacrifice of being vegan over vegetarian, perhaps the calculus is better for being vegetarian over an average diet, or for being vegan 99% of the time, making an exception only when eating at your grandparents' house. I see too many people say, 'well, I can't be vegan because I don't want to upset my grandpa when he makes his traditional X dish'. Well, OK, so be vegan in every other respect then. And as a personal anecdote, when my nonna found out she couldn't make her traditional Italian dishes for me anymore, she got over it very quickly and found vegan versions of all of them [off-topic, apologies!].
I also suspect that the reason people are comfortable thinking about longtermism and sacrifice like this for non-humans but not for humans is that they may think humans are still significantly more important. I think this is the case when you count flow-on effects, but not intrinsically (e.g. 1 unit of suffering for a human vs a non-human).
I think the intrinsic worth ratio for most non-human animals is close to 1:1. The evidence suggests their capacity for suffering is fairly close to ours, and some animals arguably have an even higher capacity for suffering than we do (I should say I'm a strictly wellbeing/suffering-based utilitarian here).
I think the burden of proof should be on someone to show why humans deserve significantly greater intrinsic moral worth. We all evolved from a common ancestor, and while there might be a sliding scale of moral worth from us down to insects, it seems strange for there to be such a sharp drop-off just below humans, even within mammals. Given our constantly expanding circle of moral consideration throughout history, I would strongly err on the side of caution when applying this to my ethics.
Self-plug: I've written about animal suffering and longtermism in this essay:
To summarise some key points, a lot of why I think promoting veganism in the short term will be worthwhile in the long term is values spreading. Given the possibility of digital sentience, promoting the social norm of caring about non-human sentience today could have major long term implications.
People are already talking about introducing plants, insects and animals to Mars as a means of terraforming it. This would enormously increase the amount of wild-animal suffering. Even if we never leave our solar system, terraforming just one body, let alone several, could nearly double the amount of wild-animal suffering. There's also the possibility of bringing factory farms to Mars. I'm doing a PhD in space science and still get shut down when I try to say, 'hey, let's maybe think about not bringing insects to Mars'. This is far from being a practical concern (maybe 100-1000 years away), but it's never too early to start shifting social norms.
I'd call this medium-term rather than long-term, but the impacts of animal agriculture on climate change, zoonotic disease spread and antibiotic resistance are significant.
I'd like to echo Peter's point as well. We don't ask these questions about a lot of other actions that would be unethical in the short term. There seems to be a bias in EA circles towards asking this kind of question specifically about non-human animal exploitation. I'm arguing for consistency rather than denying that a short-term good with a long-term bad can result in a net bad.
Thanks for sharing, I’ve saved the dates! I look forward to seeing how this model plays out. Do you have any thoughts on whether the UK/Europe community might feel ‘left out’? Are there plans for other EAGx conferences in Europe?
For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of ‘ethical’ reactions and not just ‘technical’ reactions?
Thanks for this John. I agree that even under some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co. don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming those lives are all (or mostly) positive.
Just to add to this, in my anecdotal experience, it seems like the most common argument amongst EAs for not focusing on X-risk or the far future is risk aversion.
I have one concern about this which might reduce estimates of its impact. Perhaps I’m not really understanding it, and perhaps you can allay my concerns.
First, the claim that this is a good thing to do assumes you have reasonable certainty about which candidate/party is going to make the world a better place, which is pretty hard to achieve.
But even if we grant that we did pick the best candidate, there doesn't seem to be anything stopping the other side from doing the same thing. I wonder if reinforcing the norm of vote swapping just leads to a zero-sum game where supporters of candidate A are vote swapping as much as supporters of candidate B. So on the margin, engaging in vote swapping seems obviously good, but at a system level, promoting vote swapping seems less obviously good.
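The symmetry worry can be shown with a toy calculation (all numbers hypothetical; each swap is assumed to move one extra major-party vote into the swing state):

```python
# Toy model of symmetric vote swapping (all numbers hypothetical).
# Safe-state supporters of a major party swap with swing-state third-party
# voters; each swap adds one vote for that party in the swing state.

def swing_margin(base_a, base_b, swaps_for_a, swaps_for_b):
    """Swing-state margin for candidate A after both sides' swaps are counted."""
    return (base_a + swaps_for_a) - (base_b + swaps_for_b)

# Only A's supporters swap: A flips the swing state (margin goes from -10 to +10).
only_a = swing_margin(1000, 1010, swaps_for_a=20, swaps_for_b=0)

# Both sides swap equally: the margin is exactly what it was with no swaps at all.
both = swing_margin(1000, 1010, swaps_for_a=20, swaps_for_b=20)
neither = swing_margin(1000, 1010, swaps_for_a=0, swaps_for_b=0)

print(only_a, both, neither)
```

When both sides adopt the norm at the same rate, the swaps cancel and the system-level outcome is unchanged, which is the sense in which promoting the norm (as opposed to quietly doing it yourself) may be zero-sum.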
Does this make any sense?
Thanks for writing this. One point you missed is that, once we develop the technology to easily move the orbits of asteroids, the asteroids themselves may be used as weapons. Put another way, if we can move an asteroid out of an Earth-intersecting orbit, we can move one into such an orbit, perhaps even in a way that targets a specific country or city. Arguably, this would be more likely to occur than a natural asteroid impact.
I read a good paper on this but unfortunately I don’t have access to my drive currently and can’t recall the name.
I'd like to steelman a slightly more nuanced criticism of Effective Altruism. It's one that, as Effective Altruists, we might tend to dismiss (as I do), but non-EAs see it as a valid criticism, and that matters.
Despite our efforts, many still see Effective Altruism as missing the underlying causes of major problems, like poverty. Because EA has tended to focus on what many call 'working within the system', a lot of people assume that is all EA explicitly promotes. If I thought there was a movement which said something like, 'you can solve all the world's problems by donating enough', I might have reservations too. These critics worry that EA does not give enough credence to the value of building community and social ties.
Of course, articles like this (https://80000hours.org/2015/07/effective-altruists-love-systemic-change/) have been written, but the message still seems to be overlooked. I'm not arguing we should necessarily spend more time trying to convince people that EAs love systemic change, but it's important to recognise that many people have what sound, to them, like totally rational criticisms.
Take this criticism (https://probonoaustralia.com.au/news/2015/07/why-peter-singer-is-wrong-about-effective-altruism/ - which I responded to here: https://probonoaustralia.com.au/news/2016/09/effective-altruism-changing-think-charity/). Even after I addressed the author's concerns about EA focusing entirely on donating, he still contacted me with concerns that EA will miss the unintended consequences of reducing community ties. I disagree with the claim, but it makes sense given his understanding of EA.
Thanks for this Peter, you’ve increased my confidence that supporting SHIC was a good thing to do.
A note regarding other social movements targeting high schools (more a point for Tee, who I will tell I've mentioned): I'm unsure how prevalent the United Nations Youth Association is in other countries, but in Australia it has a strong following. It has two types of members: facilitators (post-high-school) and delegates (high-school students). The facilitators run workshops on social justice and UN-related issues, along with model UN debates.
The model is largely self-sustaining, and students always look forward to the next weekend conference, which is full of fun activities.
At this point I don’t have an idea for how such a model might be applied to SHIC, but it could be worth keeping in mind for the future.
An alternative might be to approach UNYA to get a SHIC workshop into their curriculum. I don’t know how open they would be to this, but I’m willing to try through my contacts with UNYA in Adelaide.
This is a good point Dony; perhaps avoiding the worst possible outcomes is better than seeking the best possible outcomes. I think the Foundational Research Institute has written something to this effect from a far-future suffering/wellbeing perspective, but the same might hold for promoting/discouraging ethical theories.
Any thoughts on the worst possible ethical theory?
Thanks for this Kerry. I'm surprised that cold email didn't work, as I've had a lot of success cold contacting various organisations in Australia to encourage people outside EA to attend EA events. Would you mind expanding a little on what exactly you did here, e.g. what kinds of organisations you contacted?
Depending on the event, I’ve had a lot of success with university clubs (e.g. philosophy clubs, groups for specific charities like Red Cross or Oxfam, general anti-poverty clubs, animal rights/welfare clubs) and the non-profit sector generally. EA Sydney also had a lot of success promoting an 80K event partly by cold contacting university faculty heads asking them to share the workshop with their students (though I note Peter Slattery would be much better to chat to about the relative success of different promotional methods for this last one).
Could you please expand on what you mean by “Identify one “superhero” EA”? What is the purpose of this?
People have made some good points and they have shifted my views slightly. The focus shouldn’t be so much on seeking convergence at any cost, but simply on achieving the best outcome. Converging on a bad ethical theory would be bad (although I’m strawmanning myself here slightly).
However, I still think that something should be done about the fact that we have so many ethical theories and have been unable to agree on one since the dawn of ethics. I can’t imagine that this is a good thing, for some of the reasons I’ve described above.
How can we get everyone to agree on the best ethical theory?
Thanks for sharing the moral parliament set-up Rick. It looks good, though strikingly similar to MacAskill's Expected Moral Value methodology!
I disagree a little with you though. I think that some moral frameworks are actually quite good at adapting to new and strange situations. Take, for example, a classical hedonistic utilitarian framework, which accounts for consciousness in any form (human, non-human, digital etc.). If you come up with a new situation, you should still be able to work out which action is most ethical (in this case, which action maximises pleasure and minimises pain). The answer may not be immediately clear, especially in tricky scenarios, and perhaps we can't be 100% certain about which action is best, but that doesn't mean there isn't an answer.
Regarding your last point about the downsides of taking utilitarianism to its conclusion, I think that (in theory at least) utilitarianism should take these into account. If applying utilitarianism harms your personal relationships and mental growth and ends up in a bad outcome, you’re just not applying utilitarianism correctly.
Sometimes the best way to be a utilitarian is to pretend not to be a utilitarian, and there are heaps of examples of this in everyday life (e.g. not donating 100% of your income because you may burn out, or because you may set an example that no one feels they can follow... etc.).
Thanks Michael, some good points. I had forgotten about EMV, which is certainly applicable here. The trick would be convincing people to think in that way!
Your third point is well taken—I would hope that we converge on the best moral theory. Converging on the worst would be pretty bad.