I looked into this a while ago and ended up with a similar conclusion. The main options (to my knowledge) are NPT-UK, Prism the Gift Fund, and CAF’s giving account.
Their fees all seemed too high for me to actually open a DAF (although fees are sometimes not published and you’re expected to get in touch). In particular, yearly fees eat up a significant fraction of the money if you leave it in for decades, so it seems unsuitable for such a plan. It’s probably so expensive because relatively few people are interested in such accounts, and the fund does a lot of administrative work (Gift Aid etc.).
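For a rough sense of the compounding, here’s a minimal sketch; the 1% annual fee is an assumed figure for illustration, not a quote from any of these providers:

```python
# Illustrative only: how an assumed 1% annual fee compounds over decades.
annual_fee = 0.01

for years in (10, 30, 50):
    remaining = (1 - annual_fee) ** years
    print(f"after {years} years: {1 - remaining:.0%} of the balance lost to fees")
# At 1%/year, roughly a quarter of the money is gone after 30 years.
```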
Great work, thanks for writing this up!
In this regard, Michael Greger (of Nutrition Facts) argues forcefully that anti-honey advocacy hurts the vegan movement. Many people apparently have trouble ascribing morally valuable states to cows and pigs. The idea that bees might suffer (and that we should care about their suffering) strikes these people as crazy. If an average person thinks that a small part of vegan ‘ideology’ is crazy, motivated reasoning will easily allow this thought to infect their perception of the rest of the vegan worldview. Hence, the knowledge that vegans care about bees may lead many people to show less compassion toward cows and pigs than they otherwise would.
Is there evidence that this is a significant effect? There are many lines of motivated reasoning, and if you avoid this one, perhaps people will just find another. My impression is that people who reject an idea or ideology because of some association with something ‘crazy’ are actually often just opposed to the idea/ideology in general, and would still be opposed if the ‘crazy’ thing wasn’t around.
There is also an effect in the opposite direction: such advocacy can move the Overton window, or make others look more moderate. (Cf. https://en.wikipedia.org/wiki/Radical_flank_effect)
In sum, even if invertebrate welfare is a worthwhile cause, several factors may prevent us from considering this issue properly. Additionally, there is the worry that rushing into a direct advocacy campaign may create hard-to-reverse lock-in effects. If the initial message is suboptimal, these lock-in effects can impose substantial costs. Hence, directly advocating for invertebrate welfare at this time might be actively counterproductive, both to the invertebrate welfare cause area and effective altruism more generally.
While I agree that we should be very careful about publicity at this point, I feel like there might still be opportunities for thoughtful advocacy. It seems not implausible that we could find angles that are mainstream-compatible and begin to normalise concern for invertebrates—e.g. extending welfare laws to lobsters.
Great work—thanks for writing this up!
Here’s another proposal:
We give every contemporary citizen shares in a newly created security. This security settles in, say, 100 years (in 2119), and its settlement value will be based on the degree to which 2119 people approve of the actions of people in the 2019-2119 timespan, as determined by a standardised survey—say, on a scale from 0 to 10.
This gives contemporary people a direct financial incentive to do what future people would approve of, and uses market mechanisms to generate accurate judgments.
(One might think that this doesn’t work because people will go “I’ll be dead before this settles”, but I don’t think that’s really a problem – Austria has issued a bond that matures in 100 years, and it finds buyers.)
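To make the mechanism concrete, here’s a minimal sketch of one possible settlement rule; the 0–10 survey scale is from the proposal above, but the linear payout and its size are my own illustrative assumptions:

```python
# Hypothetical settlement rule for the proposed security. The 0-10 survey
# scale is from the proposal; the linear payout is an illustrative choice.
def settlement_value(survey_scores, payout_per_point=100.0):
    mean_approval = sum(survey_scores) / len(survey_scores)
    return payout_per_point * mean_approval

# Example: a 2119 survey sample with mean approval 6.2 settles at 620 per share.
print(settlement_value([7, 6, 5, 8, 5]))  # 620.0
```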
A reason why it is not necessarily true that there is net suffering in nature is the hypothesis that small individuals – such as invertebrates – may have less intense sentient experiences. In that scenario, small animals would experience relatively less suffering and more enjoyment than larger ones.
I don’t understand how this follows. Wouldn’t less intense experiences affect both suffering and pleasure equally?
The question of invertebrate sentience is surely important, but I’m not sure if further research on this is a top priority. Some relevant uncertainties:
Would further research significantly reduce uncertainty about invertebrate sentience? It seems that most people who have thought about this have settled on something like “there is a significant chance that many invertebrate taxa are sentient, but we don’t know for sure”.
To what extent is society’s lack of moral concern for invertebrates due to the belief that invertebrates are not sentient, rather than to other factors (e.g. a disgust reaction towards many invertebrates, or the difficulty of avoiding harm to insects in everyday life)?
I’d like to suggest including an article on reducing s-risks (e.g. https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/ or http://s-risks.org/intro/) as another possible perspective on longtermism, in addition to AI alignment and x-risk reduction.
I don’t understand this. Your last comment suggests that there may be several key events (some of which may be in the past), but I read your top-level comment as assuming that there is only one, which precludes all future key events (i.e. something like lock-in or extinction). I would have interpreted your initial post as follows:
Suppose we observe 20 past centuries during which no key event happens. By Laplace’s Law of Succession, we now think that the odds are 1/22 in each century. So you could say that the odds that a key event “would have occurred” over the course of 20 centuries is 1 - (1-1/22)^20 = 60.6%. However, we just said that we observed no key event, and that’s what our “hazard rate” is based on, so it is moot to ask what could have been. The probability is 0.
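(For concreteness, a quick check of that arithmetic, just restating the numbers above:)

```python
# Laplace's Law of Succession: after n trials with s successes, the
# probability of success on the next trial is (s + 1) / (n + 2).
# Here: 20 observed centuries, no key event, so s = 0 and n = 20.
p_per_century = (0 + 1) / (20 + 2)  # 1/22

# Chance of at least one key event across 20 such centuries,
# treating each century as independent with the same rate.
p_at_least_one = 1 - (1 - p_per_century) ** 20
print(f"{p_at_least_one:.1%}")  # 60.6%
```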
That reconstruction seems off, and I think the problem is equating “no key event” with “not hingy”, which is too simple because one can potentially also influence key events in the distant future. (Or perhaps there aren’t any key events at all, or there are other ways to have a lasting impact.)
I don’t understand why this question has been downvoted by some people. It is a perfectly reasonable and interesting question. (The same holds for the comments by Simon Knutsson and Magnus Vinding, which seem informative and helpful to me but have been downvoted.)
The following is yet another perspective on which prior to use, which questions whether we should assume some kind of uniformity principle:
As has been discussed in other comments and the initial text, there are some reasons to expect later times to be hingier (e.g. better knowledge) and there are some reasons to expect earlier times to be hingier (e.g. because of smaller populations). It is plausible that these reasons skew one way or another, and this effect might outweigh other sources of variance in hinginess.
That means that the hingiest times are disproportionately likely to be either a) the earliest generation (e.g. humans in pre-historic population bottlenecks) or b) the last generation (i.e. the time just before some lock-in happens). On this view, our time is very unlikely to be the hingiest (unless you think that lock-in happens very soon). So this suggests a low prior for HoH; however, what matters is arguably comparing present hinginess to the future rather than to the past. And on this view it would not be very unlikely that our time is hingier than all future times.
In other words, rather than there being anything special about our time, it could just be the case that a) hinginess generally decreases over time and b) this effect is stronger than other sources of variance in hinginess. I’m fairly agnostic about both of these claims, and Will argued against a), but this is surely likelier than 1 in 100,000 (in the absence of further evidence), and arguably likelier even than 5%. (This isn’t exactly HoH because past times would be even hingier.)
inverse relationship between population size and hingeyness
Maybe this is a nitpick, but I don’t think this is always right. For instance, suppose that from now on, population size declines by 20% each century (indefinitely). I don’t think that would mean that later generations are more hingy. Or imagine a counterfactual where population levels are divided by 10 across all generations – one would control a larger fraction of resources but could also affect fewer beings, which prima facie cancels out (see the toy calculation below).
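Here is that toy calculation spelled out, under the (simplistic, assumed) model that a person’s hinginess is their share of resources times the number of beings they can affect:

```python
from fractions import Fraction

# Simplistic assumed model: hinginess per person is
# (share of resources controlled) * (number of beings one can affect).
def per_person_hinginess(population):
    resource_share = Fraction(1, population)  # each person controls 1/N of resources
    beings_affected = population              # and can affect all N beings
    return resource_share * beings_affected

print(per_person_hinginess(7_000_000_000))  # 1
print(per_person_hinginess(700_000_000))    # 1 -- a 10x smaller population changes nothing
```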
It seems to me that the relevant question is whether the present population size is small compared to the future, i.e. whether the present generation is a “population bottleneck”. (Cf. Max Daniel’s comment.) That’s arguably true for our time (especially if space colonisation becomes feasible at some point) and also in the rebuilding scenario you mentioned.
Do you think that this effect only happens in very small populations settling new territory, or is it generally the case that a smaller population means more hinginess? If the latter, then that suggests that, all else equal, the present is hingier than the future (though the past is even hingier), if we assume that future populations are bigger (possibly by a large factor). While the current population is not small in absolute terms, it could plausibly be considered a population bottleneck relative to a future cosmic civilisation (if space colonisation becomes feasible).
Great post! It’s great to see more thought going into these issues. Personally, I’m quite sceptical about claims that our time is especially influential, and I don’t have a strong view on whether our time is more or less hingy than other times. Some additional thoughts:

I got the impression that you assume that some time (or times) are particularly hingy (and then go on to ask whether it’s our time). But it is also perfectly possible that no time is hingy, so I feel that this assumption needs to be justified. Of course, there is some variation and therefore there is inevitably a most influential time, but the crux of the matter is whether there are differences by a large factor (not just 1.5x). And that is not obvious; for instance, if we look at how people in the past could have shaped 21st century societies, it is not clear to me whether any time was especially important.

I think a key question for longtermism is whether the evolution of values and power will eventually settle in some steady state (i.e. the end of history). It is plausible that hinginess increases as one gets closer to this point. (But it’s not obvious, e.g. there could just be a slow convergence to a world government without any pivotal events.) By contrast, if values and influence drift indefinitely, as they have so far in human history, then I don’t see strong reasons to expect certain times to be particularly hingy. So it is crucial to ask whether a (non-extinction) steady state will happen, and how far away we are from it. (See also this related post of mine.)

“I suggest that in the past, we have seen hinginess increase. I think that most longtermists I know would prefer that someone living in 1600 passed resources onto us, today, rather than attempting direct longtermist influence.”

Does this take into account that there were fewer people around in 1600, and that many ways to have an influence were far less competitive? I feel that a person in 1600 could have had a significant impact, e.g. via advocacy for the “right” moral views (e.g. publishing good arguments for consequentialism, antispeciesism, etc.) or by pushing for general improvements like reducing violence and increasing cooperation. So I don’t quite agree with your take on this, though I wouldn’t claim the opposite either – it is not obvious to me whether hinginess increased or decreased. (By your inductive argument, that suggests it’s also not clear whether the future will be more or less hingy than the present.)

“A related, but more general, argument is that the most pivotal point in time is when we develop techniques for engineering the motivations and values of the subsequent generation (such as through AI, but also perhaps through other technology, such as genetic engineering or advanced brainwashing technology), and that we’re close to that point.”

Similar to your recent point about how creating smarter-than-human intelligence has long been feasible, I’d guess that, given strong enough motivation, a lock-in would already be feasible via brainwashing, propaganda, and sufficiently ruthless oppression of opposition. (We’ve had these “technologies” for a long time.) The reason why this doesn’t quite work in totalitarian states is that a) what you want to lock in is usually the power of an individual dictator or some group of humans, but there’s no way to prevent death, and b) people are not fully aligned with the dictator even at the beginning, which limits what you can do (principal-agent problems etc.).
The reason we don’t do it in liberal democracies is that a) we strongly disapprove of the necessary methods, b) we value free speech and personal autonomy, and c) most people don’t really mind moderate forms of value drift. So it’s to a large extent a question of motivation and taboos, and it is quite possible that people will reject the use of future lock-in technologies for similar reasons.
There’s a lot of debate about the causes of the industrial revolution. Very few commentators point to some technological breakthrough as the cause, so it’s striking that people are inclined to point to a technological breakthrough in AI as the cause of the next growth mode transition. Instead, leading theories point to some resource overhang (‘colonies and coal’), or some innovation or change in institutions (more liberal laws and norms in England, or higher wages incentivising automation) or in culture. So perhaps there’s some novel governance system that could drive a higher growth mode, and that’ll be the decisive thing.
Strongly agree. I think it’s helpful to think about it in terms of the degree to which social and economic structures optimise for growth and innovation. Our modern systems (capitalism, liberal democracy) do reward innovation—and maybe that’s what caused the growth mode change—but we’re far away from strongly optimising for it. We care about lots of other things, and whenever there are constraints, we don’t sacrifice everything on the altar of productivity / growth / innovation. And, while you can make money by innovating, the incentive is more about innovations that are marketable in the near term, rather than maximising long-term technological progress. (Compare e.g. an app that lets you book taxis in a more convenient way vs. foundational neuroscience research.)
So, a growth mode change could be triggered by any social change (in culture, governance, or something else) that results in significantly stronger optimisation pressure for long-term innovation.
That said, I don’t really see concrete ways in which this could happen, and current trends do not seem to point in this direction. (I’m also not saying this would necessarily be a good thing.)
I disagree with your implicit claim that Will’s views (which I mostly agree with) constitute an extreme degree of confidence. I think it’s a mistake to approach these questions with a 50-50 prior. Instead, we should consider the base rate for “events that are at least as transformative as the industrial revolution”.
That base rate seems pretty low. And that’s not actually what we’re talking about – we’re talking about AGI, a specific future technology. In the absence of further evidence, a prior of <10% on “AGI takeoff this century” seems not unreasonable to me. (You could, of course, believe that there is concrete evidence on AGI that justifies different credences.)
On a different note, I sometimes find the terminology of “no x-risk”, “going well” etc. unhelpful. It seems more useful to me to talk about concrete outcomes and separate this from normative judgments. For instance, I believe that extinction through AI misalignment is very unlikely. However, I’m quite uncertain about whether people in 2019, if you handed them a crystal ball that shows what will happen (regarding AI), would generally think that things are “going well”, e.g. because people might disapprove of value drift or influence drift. (The future will plausibly be quite alien to us in many ways.) And finally, in terms of my personal values, the top priority is to avoid risks of astronomical suffering (s-risks), which is another matter altogether. But I wouldn’t equate this with things “going well”, as that’s a normative judgment and I think EA should be as inclusive as possible towards different moral perspectives.
Very interesting points! I largely agree with your (new) views. Some thoughts:
If you think that extinction risk this century is less than 1%, then in particular, you think that extinction risk from transformative AI is less than 1%. So, for this to be consistent, you have to believe either
a) that it’s unlikely that transformative AI will be developed at all this century,
or b) that transformative AI is unlikely to lead to extinction when it is developed, e.g. because it will very likely be aligned in at least a narrow sense. (I wrote up some arguments for this a while ago.)
How much credence do you place on each of these? For instance, if you put 10% on transformative AI this century – which is significantly more conservative than “median EA beliefs” – then you’d have to believe that the conditional probability of extinction is less than 10%. (I’m not saying I disagree – in fact, I believe something along these lines myself.)
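(Spelled out, under the simplifying assumption that this extinction route requires transformative AI to be developed; the 10% figure is just the hypothetical from above:)

```python
# P(extinction via TAI) = P(TAI this century) * P(extinction | TAI)
p_tai = 0.10      # hypothetical credence in transformative AI this century
p_ext_cap = 0.01  # stated view: total extinction risk below 1%

# For the two to be consistent, the conditional risk must satisfy:
max_p_ext_given_tai = p_ext_cap / p_tai
print(f"{max_p_ext_given_tai:.0%}")  # 10%
```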
What do you think about the possibility of a growth mode change (i.e. much faster pace of economic growth and probably also social change, comparable to the industrial revolution) for reasons other than AI? I feel that this is somewhat neglected in EA – would you agree with that?
I’d also be interested in more details on what these beliefs imply in terms of how we can improve the long-term future. I suppose you are now more sceptical about work on AI safety as the “default” long-termist intervention. But what is the alternative? Do you think we should focus on broad improvements to civilisation, such as better governance, working towards compromise and cooperation rather than conflict / war, or generally trying to make humanity more thoughtful and cautious about new technologies and the long-term future? These are uncontroversially good but not very neglected, and it seems hard to get a lot of leverage in this way. (Then again, maybe there is no way to get extraordinary leverage over the long-term future.)
Also, if we aren’t at a particularly influential point in time regarding AI, then I think that expanding the moral circle, or otherwise advocating for “better” values, may be among the best things we can do. What are your thoughts on that?
Thanks Jason – I’m excited to see more research on this!
What do you make of the possibility of flow-through effects on long-term attitudes towards insects / invertebrates? For instance, one could argue that entomophagy is particularly relevant because it involves a lot of people directly harming insects – which might, similar to meat consumption, bias people against giving moral weight to insects. (On the other hand, we already engage in many other everyday practices that harm insects or invertebrates – even just walking around outside will squash some bugs.)
Perhaps it would be interesting to study how the salience of causing direct harm to insects / invertebrates affects people’s attitudes.
Re: entomophagy, I think the problem isn’t just direct consumption, but also the use of insects as animal feed – see e.g. this article. Unlike directly eating insects, this doesn’t evoke a strong disgust reaction.