Thanks!
An audiobook is a good idea and I’ll look into it, though I don’t expect it to be done any time soon (i.e. it would take at least several months, I think).
The easiest way to download it as an epub is here.
I agree with this answer.
I don’t need to be persuaded to care about animal/insect/machine suffering in the first place.
That’s great, because that is also the starting point of my book. From the introduction:
Before I dive deeper, I should clarify the values that underlie this book. A key principle is impartiality: suffering matters equally irrespective of who experiences it. In particular, I believe we should care about all sentient beings, including nonhuman animals. Similarly, I believe suffering matters equally regardless of when it is experienced. A future individual is no less (and no more) deserving of moral consideration than someone alive now. So the fact that a moral catastrophe takes place in the distant future does not reduce the urgency of preventing it, if we have the means to do so. I will assume that you broadly agree with these fundamental values, which form the starting point of the book.
That is, I’m not dwelling on an argument for these fundamental values, as that can be found elsewhere.
Thanks for writing this up! It’s great to see more people think about the relationship between animal advocacy and longtermism.
It seems important to distinguish between a) the abolition of factory farming and b) a long-term change in human attitudes towards animals (i.e. establishing antispeciesism). b) is arguably more important from a long-term perspective, and it is a legitimate concern that cultivated meat (and similar technologies) would only achieve a).
However, proponents of the “technology-based strategy” usually argue that a) also indirectly helps achieve b), as it allows people to endorse animal rights without cognitive dissonance. I am not entirely sure about this, but it’s at least a plausible counterconsideration.
Even without this effect, I don’t really understand why you seem to think that abolishing factory farming through non-moral means would cause lock-in. Why can’t attitude change / moral progress still happen later?
Thanks! I’ve started an email thread with you, me, and David.
Thanks for the comment; it raises a very important point.
I am indeed fairly optimistic that thoughtful forms of MCE are positive regarding s-risks, although this qualifier of “in the right way” should be taken very seriously—I’m much less sure whether, say, funding PETA is positive. I also prefer to think about how MCE could be made robustly positive, distinguishing between different possible forms of it, rather than making a generalised statement for or against MCE.
This is, however, not a very strongly held view (despite my having thought a lot about it), in light of great uncertainty and some degree of peer disagreement (other researchers are less sanguine about MCE).
‘Longtermism’ just says that improving the long-term future matters most, but it does not specify a moral view beyond that. So you can be longtermist and focus on averting extinction, or you can be longtermist and focus on preventing suffering (cf. suffering-focused ethics); or you can have some other notion of “improving”. Most people who are both longtermist and suffering-focused work on preventing s-risks.
That said, despite endorsing suffering-focused ethics myself, I think it’s not helpful to frame this as “not caring” about existential risks; there are many good reasons for cooperation with other value systems.
I’m somewhat less optimistic; even if most would say that they endorse this view, I think many “dedicated EAs” are in practice still biased against nonhumans, if only subconsciously. I think we should expect speciesist biases to be pervasive, and they won’t go away entirely just by endorsing an abstract philosophical argument. (And I’m not sure if “most” endorse that argument to begin with.)
Fair point—the “we” was something like “people in general”.
This makes IRV a really bad choice. IRV results in a two-party system just like plurality voting does.
I agree that having a multi-party system might be most important, but I don’t think IRV necessarily leads to a two-party system. For instance, French presidential elections feature far more than two parties (though they’re using a two-round system rather than IRV).
Everything is subject to tactical voting (except maybe SODA? but I don’t understand that argument). So I don’t see this as a point against approval voting in particular.
I think that approval voting has significantly more serious tactical voting problems than IRV. Sure, they all violate some criteria, but the question is how serious the resulting issues are in practice. IRV seems to be fine based on e.g. Australia’s experience. (Of course, we don’t really know how good or bad approval voting would be, because it is rarely used in competitive elections.)
Great post—thanks a lot for writing this up!
It’s quite remarkable how we hold ideas to different standards in different contexts. Imagine, for instance, a politician who openly endorses CU. Her opponents would immediately attack its worst implications: “So you would torture a child in order to create ten new brains that experience extremely intense orgasms?” The politician, being honest, says yes, and that’s the end of her career.
By contrast, EA discourse and philosophical discourse are strikingly lenient when it comes to counterintuitive implications of such theories. (I’m not saying anything about which standards are better, and of course this does not only apply to CU.)
The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that if there are n people in a population, the probability that you are in the m most F people in that population is m/n.
The fact that I consider a certain property F should itself update me, though. It already demonstrates that F is something I am particularly interested in, or that F is salient to me, which presumably makes it more likely that I am an outlier on F.
Also, this principle can have pretty strange implications depending on how you apply it. For instance, if I look at the population of all beings on Earth, it is extremely surprising (10^-12 or so) that I am a human rather than an insect.
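To spell out the arithmetic behind that surprise, here is a minimal sketch of the quoted prior applied to the property “being human”. The population counts are left symbolic, since no specific figures are given in this thread; the point is only that the ratio is tiny.

```latex
% The quoted prior: with n individuals in a population, the probability of
% being among the m "most F" individuals is m/n. Taking F = "human-likeness"
% and the population to be all sentient beings on Earth:
\[
  P(\text{I am human}) \;=\; \frac{m_{\text{humans}}}{n_{\text{all beings}}}
\]
% Since n is dominated by insects and other invertebrates and exceeds m by
% many orders of magnitude, this prior probability is vanishingly small,
% which is the sense in which the observation is "extremely surprising".
```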
I’m at a period of unusually high economic growth and technological progress
I think it’s not clear whether higher economic growth or technological progress implies more influence. This claim seems plausible, but you could also argue that it might be easier to have an influence in a stable society (with little economic or technological change), e.g. simply because of higher predictability.
So, as I say in the original post and the comments, I update (dramatically) on my estimate of my influentialness, on the basis of these considerations. But by how much? Is it a big enough update to conclude that I should be spending my philanthropy this year rather than next, or this century rather than next century? I say: no.
I’m very sympathetic to patient philanthropy, but this seems to overstate the required amount of evidence. Taking into account that each time has donors (and other resources) of its own, and that there are diminishing returns to spending, you don’t need extreme beliefs about your elevated influentialness to think that spending now is better. However, the arguments you gave are not very specific to 2020; presumably they still hold in 2100, so it stands to reason that we should invest at least over that timeframe (until we expect the period of elevated influentialness to end).
One reason for thinking that the update, on the basis of earliness, is not enough, is related to the inductive argument: that it would suggest that hunter-gatherers, or Medieval agriculturalists, could do even more direct good than we can. But that seems wrong. Imagine you can give an altruistic person at one of these times a bag of oats, or sell that bag today at market prices. Where would you do more good?
A bag of oats presumably represents much greater relative wealth in those other times than it does now. The current price of oats is around GBP 120 per ton, so if the bag contains 50 kg, it’s worth just GBP 6 today.
People in earlier times also have less ‘competition’. Presumably the medieval person could have been the first to write up arguments for antispeciesism or animal welfare; or perhaps they could have had a significant impact on establishing science, increasing rationality, improving governance, etc.
(All things considered, I think it’s not clear if earlier times are more or less influential.)
I was just talking about 30 years because those are the farthest-out US bonds. I agree that the horizon of patient philanthropists can be much longer.
We’ve now put together a new and improved audio version, which can be found here.