Thanks!
An audiobook is a good idea and I’ll look into it, though I don’t expect it to be done any time soon (i.e. it would take at least several months, I think).
The easiest way to download it as an epub is here.
I agree with this answer.
I don’t need to be persuaded to care about animal/insect/machine suffering in the first place.
That’s great, because that is also the starting point of my book. From the introduction:
Before I dive deeper, I should clarify the values that underlie this book. A key principle is impartiality: suffering matters equally irrespective of who experiences it. In particular, I believe we should care about all sentient beings, including nonhuman animals. Similarly, I believe suffering matters equally regardless of when it is experienced. A future individual is no less (and no more) deserving of moral consideration than someone alive now. So the fact that a moral catastrophe takes place in the distant future does not reduce the urgency of preventing it, if we have the means to do so. I will assume that you broadly agree with these fundamental values, which form the starting point of the book.
That is, I’m not dwelling on an argument for these fundamental values, as that can be found elsewhere.
Thanks for writing this up! It’s great to see more people think about the relationship between animal advocacy and longtermism.
It seems important to distinguish between a) the abolition of factory farming and b) a long-term change in human attitudes towards animals (i.e. establishing antispeciesism). b) is arguably more important from a long-term perspective, and it is a legitimate concern that cultivated meat (and similar technologies) would only achieve a).
However, proponents of the “technology-based strategy” usually argue that a) also indirectly helps achieve b), as it allows people to endorse animal rights without cognitive dissonance. I am not entirely sure about this, but it’s at least a plausible counterconsideration.
Even without this effect, I don’t really understand why you seem to think that abolishing factory farming through non-moral means would cause lock-in. Why can’t attitude change / moral progress still happen later?
Thanks! I’ve started an email thread with you, me, and David.
Thanks for the comment, this is raising a very important point.
I am indeed fairly optimistic that thoughtful forms of MCE are positive regarding s-risks, although this qualifier of “in the right way” should be taken very seriously—I’m much less sure whether, say, funding PETA is positive. I also prefer to think in terms of how MCE could be made robustly positive, and distinguishing between different possible forms of it, rather than trying to make a generalised statement for or against MCE.
This is, however, not a very strongly held view (despite having thought a lot about it), in light of great uncertainty and also some degree of peer disagreement (other researchers being less sanguine about MCE).
‘Longtermism’ just says that improving the long-term future matters most, but it does not specify a moral view beyond that. So you can be longtermist and focus on averting extinction, or you can be longtermist and focus on preventing suffering (cf. suffering-focused ethics); or you can have some other notion of “improving”. Most people who are both longtermist and suffering-focused work on preventing s-risks.
That said, despite endorsing suffering-focused ethics myself, I think it’s not helpful to frame this as “not caring” about existential risks; there are many good reasons for cooperation with other value systems.
I’m somewhat less optimistic; even if most would say that they endorse this view, I think many “dedicated EAs” are in practice still biased against nonhumans, if only subconsciously. I think we should expect speciesist biases to be pervasive, and they won’t go away entirely just by endorsing an abstract philosophical argument. (And I’m not sure if “most” endorse that argument to begin with.)
Fair point—the “we” was something like “people in general”.
This makes IRV a really bad choice. IRV results in a two-party system just like plurality voting does.
I agree that having a multi-party system might be most important, but I don’t think IRV necessarily leads to a two-party system. For instance, French presidential elections feature far more than two parties (though they’re using a two-round system rather than IRV).
Everything is subject to tactical voting (except maybe SODA? but I don’t understand that argument). So I don’t see this as a point against approval voting in particular.
I think that approval voting has significantly more serious tactical voting problems than IRV. Sure, they all violate some criteria, but the question is how serious the resulting issues are in practice. IRV seems to be fine based on e.g. Australia’s experience. (Of course, we don’t really know how good or bad approval voting would be, because it is rarely used in competitive elections.)
Great post—thanks a lot for writing this up!
It’s quite remarkable how we hold ideas to different standards in different contexts. Imagine, for instance, a politician who openly endorses CU. Her opponents would immediately attack the worst implications: “So you would torture a child in order to create ten new brains that experience extremely intense orgasms?” The politician, being honest, says yes, and that’s the end of her career.
By contrast, EA discourse and philosophical discourse are strikingly lenient when it comes to counterintuitive implications of such theories. (I’m not saying anything about which standards are better, and of course this does not only apply to CU.)
The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that if there are n people in a population, the probability that you are in the m most F people in that population is m/n.
The fact that I consider a certain property F should update me, though. This already demonstrates that F is something that I am particularly interested in, or that F is salient to me, which presumably makes it more likely that I am an outlier on F.
Also, this principle can have pretty strange implications depending on how you apply it. For instance, if I look at the population of all beings on Earth, it is extremely surprising (10^-12 or so) that I am a human rather than an insect.
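The m/n rule quoted above amounts to a uniform prior over positions in the population. As a quick illustrative sketch (the function name is mine, not from the original post):

```python
import random

def prob_top_m(n, m, trials=100_000):
    """Monte Carlo estimate of the chance that a uniformly random
    member of a population of n ranks among the top m on some property F."""
    hits = sum(1 for _ in range(trials) if random.randrange(n) < m)
    return hits / trials

# Under a uniform prior the estimate converges to m/n,
# e.g. roughly 0.1 for m = 100 out of n = 1000.
estimate = prob_top_m(n=1000, m=100)
```

The objection in the comment is that this uniform prior breaks down once F is selected for salience: conditioning on the fact that I happen to be thinking about F at all plausibly shifts probability toward my being an outlier on F.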
I’m at a period of unusually high economic growth and technological progress
I think it’s not clear whether higher economic growth or technological progress implies more influence. This claim seems plausible, but you could also argue that it might be easier to have an influence in a stable society (with little economic or technological change), e.g. simply because of higher predictability.
So, as I say in the original post and the comments, I update (dramatically) on my estimate of my influentialness, on the basis of these considerations. But by how much? Is it a big enough update to conclude that I should be spending my philanthropy this year rather than next, or this century rather than next century? I say: no.
I’m very sympathetic to patient philanthropy, but this seems to overstate the required amount of evidence. Taking into account that each time has donors (and other resources) of its own, and that there are diminishing returns to spending, you don’t need extreme beliefs about your elevated influentialness to think that spending now is better. However, the arguments you gave are not very specific to 2020; presumably they still hold in 2100, so it stands to reason that we should invest at least over those timeframes (until we expect the period of elevated influentialness to end).
One reason for thinking that the update, on the basis of earliness, is not enough, is related to the inductive argument: that it would suggest that hunter-gatherers, or Medieval agriculturalists, could do even more direct good than we can. But that seems wrong. Imagine you can give an altruistic person at one of these times a bag of oats, or sell that bag today at market prices. Where would you do more good?
A bag of oats presumably represents much more relative wealth in those other times than now. The current price of oats is about GBP 120 per ton, so if the bag contains 50 kg, it’s worth just GBP 6.
People in earlier times also had less ‘competition’. Presumably the medieval person could have been the first to write up arguments for antispeciesism or animal welfare; or perhaps they could have had a significant impact on establishing science, increasing rationality, improving governance, etc.
(All things considered, I think it’s not clear if earlier times are more or less influential.)
I was just talking about 30 years because those are the farthest-out US bonds. I agree that the horizon of patient philanthropists can be much longer.
Yeah, but even 30-year interest rates are low (1-2% at the moment). There is an Austrian 100-year bond paying 0.88%. I think that is significant evidence that something about the “patient vs impatient actors” story does not add up.
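For scale, here is a rough sketch of how little those rates compound to, even over very long horizons (the rates are taken from the figures above):

```python
def growth_factor(rate, years):
    """Total multiple on principal from reinvesting at a fixed annual rate."""
    return (1 + rate) ** years

# The Austrian 100-year bond's 0.88% yield, compounded over the full
# century, multiplies the principal by only about 2.4x; a 30-year bond
# at 1.5% gives only about 1.56x.
century = growth_factor(0.0088, 100)
thirty = growth_factor(0.015, 30)
```

If patient actors really valued future resources nearly as much as present ones, one might expect them to bid such long-dated yields even lower, which is the tension gestured at here.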
It is fair to say that some suffering-focused views have highly counterintuitive implications, such as the one you mention. The misconception is just that this holds for all suffering-focused views. For instance, there are plenty of possible suffering-focused views that do not imply that happy humans would be better off committing suicide. In addition to preference-based views, one could value happiness but endorse the procreative asymmetry (so that lives above a certain threshold of welfare are considered OK even if there is some severe suffering), or one could be prioritarian or egalitarian in interpersonal contexts, which also avoids problematic conclusions about such tradeoffs. (Of course, those views may be considered unattractive for other reasons.)
I think views along these lines are actually fairly widespread among philosophers. It just so happens that suffering-focused EAs have often promoted other variants of SFE that do arguably have implications for intrapersonal tradeoffs that you consider counterintuitive (and I mostly agree that those implications are problematic, at least when taken to extremes), thus giving the impression that all or most suffering-focused views have said implications.
Re: 1., there would be many more (thoughtful) people who share our concern about reducing suffering and s-risks (not necessarily with strongly suffering-focused values, but at least giving considerable weight to it). That would result in an ongoing research project on s-risks that goes beyond a few EAs (e.g., one that is also established in academia or other social movements).
Re: 2., a possible scenario is that suffering-focused ideas just never gain much traction, and consequently efforts to reduce s-risks would just fizzle out. However, I think there is significant evidence that at least an extreme version of this is not happening.
Re: 3., I think the levels of engagement and feedback we have received so far are encouraging. However, we do not currently have any procedures in place to measure impact, which is (as you say) incredibly hard for what we do. But of course, we are constantly thinking about what kind of work is most impactful!
I would guess that actually experiencing certain possible conscious states, in particular severe suffering or very intense bliss, could significantly change my views, although I am not sure if I would endorse this as “reflection” or if it might lead to bias.
It seems plausible (but I am not aware of strong evidence) that experience of severe suffering generally causes people to focus more on it. However, I myself have fortunately never experienced severe suffering, so that would be a data point to the contrary.
I was exposed to arguments for suffering-focused ethics from the start, since I was involved with German-speaking EAs (the Effective Altruism Foundation / Foundational Research Institute back then). I don’t really know why exactly (there isn’t much research on what makes people suffering-focused or non-suffering-focused), but this intuitively resonated with me.
I can’t point to any specific arguments or intuition pumps, but my views are inspired by writing such as the Case for Suffering-Focused Ethics, Brian Tomasik’s essays, and writings by Simon Knutsson and Magnus Vinding.
We’ve now put together a new and improved audio version, which can be found here.