This is helpful indeed. Thanks for the reply!
1. Good point on clarifying the timescale for the sake of the report. I think the timescale you define for the UK is about right for narrowing the scope of the institutions considered by the report. Then the “effectiveness” evaluation criterion can do the work of identifying which institutions are best by longtermist lights, ranking institutions cardinally as a function of, among other things, their temporal reach.
2. You did previously share your list with me and I’m glad you’ve reshared it here. Ideas you mention here did not end up on the list I shared on the EA Forum for one of a few reasons: either a similar proposal already exists in the document, or the suggested change is in my list of smaller, incremental changes, or I excluded it because I wanted to prioritize concrete, particular proposals over abstract, general ideas. Some of them simply involve ideas that are still on my to-read list. All of your suggestions are included in a more complete list off-site.
3. Max Stauffer also recommended adding a criterion based on strength of evidence. I think this is a good idea. I also like your suggestion to broaden my “political feasibility” criterion to “overall implementability.” As you imply, there are considerations beyond political feasibility that are relevant to a design’s implementability. I’m not fully convinced that symbolism should be ignored entirely in the context of this report, but I have been convinced by your point that symbolic value depends on contextual interaction with many other factors, and that an otherwise uninspiring change can function as a symbol with the right packaging.
Thanks again for reaching out here and via email. I’ll be in touch about collaboration in just a moment.
Edit: Upon revisiting I realized that I had already read this paper. It’s one of the more useful things I’ve read in this area, so good recommendation.
Thanks! I’ve spoken to the APPG and seen some of their policy statements but I had not seen this particular paper. Super helpful.
It’s worth noting that one important assumption here is that experts are pretty good at determining the counterfactual value of past policy decisions. I think this is right, but if we gave it up then no system like this one would be effective, since the feedback from future generations would be near-random. On the other hand, if the assumption is correct then there should be some feasible system that provides useful intergenerational feedback of the kind described here, though it may need to include a mechanism for increasing the influence of experts in the decision process.
On (1), I’m not currently considering any existing institutions, other than existing variants of the proposals mentioned. You’re right that it would be useful to know which institutions we should preserve, and there also might be other things to learn from analyzing these institutions, such as what has worked well about them and what has kept them from working better. I’ll have to consider adding these sorts of institutions.
On (2), that’s definitely a concern of mine, given that so many recently adopted future-focused institutions have failed to survive even one election cycle. I’ve been including this (the permanence of the institution) under effectiveness, but maybe it’s worth carving the categories a bit more finely.
I agree it will probably not change voter epistemic behavior. The thought was that it would change the epistemic behavior of the parties catering to voters and the representatives acting on behalf of the voters, since the voting rule will select for parties and representatives which are less short-termist. This of course can’t be guaranteed—if parties are not motivationally longtermist but are merely trying to appease voters to hold power, for example, it won’t change their epistemic incentives very much unless competing actors (parties, media) can demonstrate to young people that their plans are bad. But even in this case, a shift in their epistemic incentives seems plausible.
Thanks, I’ve looked at some of the inclusive wealth and natural capital accounting stuff a little bit and will continue to do so. Do you currently have any sense how useful this sort of accounting will be for general future generations issues (incl. catastrophic risks, positive moral & economic trajectories) beyond concerns related to environmental degradation?
I am extremely interested in the question of how religions transmit ideas and values across many generations, but at the moment I have no idea how they do this so successfully. If anyone has ideas or empirical sources on this, I’d be quite keen to hear them.
Surprising (and confusing!) as it may be, there is some evidence that voters would vote differently with their Demeny vote than with their first vote.
I’ve asked Ben Grodeck (who clued me into Demeny voting) to weigh in with more data, but for now see this study from Japanese economist Reiko Aoki, who found (Table 8 and Figure 7) that surveyed participants who are permitted to cast one vote on behalf of themselves and one vote on behalf of their child sometimes vote differently on their second vote. The effect isn’t drastic, but it is certainly non-trivial.
The study authors further find that policy preferences on behalf of oneself and on behalf of one’s children diverge to a greater degree, and the authors hypothesize that we would see more divergence between the multiple votes of Demeny voters if they had different political options that better reflected the divergence between these sets of preferences. Thus, they think that instituting Demeny voting would cause party platforms to change to try to cater to the policy preferences of parents voting on behalf of their children.
Excellent. This is a much better idea than the “allow the 2119 people to decide whether to sentence the grandchildren of the 2019 political leaders to the tribunal of death” feedback mechanism that, disturbingly, came to me more readily.
It would be interesting to think about whether there are other feasible ways to see to it that the decisions of future people provide an incentive for the actions for present people.
Two concerns I have with this general kind of scheme are that it requires citizens to have a lot of faith that the relevant institutions and the policy will persevere 100 years into the future (the 100-year bond stuff is relevant to this), and that it might not play well with high rates of immigration (since fluidity in polity membership could undermine the efficacy of long-term feedback mechanisms for members of that polity). But these might just be details to be ironed out rather than insoluble problems with the design.
Thanks, I agree that pinpointing whether these institutions target the epistemic vs motivational (vs other) determinants of short-termism will be important. One more reason to do this is that the best solutions will combine a multiplicity of institutions and policies to address all of the different sources of short-termism without reduplicating effort.
Also note that most institutions will do at least a little bit of both. The government think tank will also address some motivational failings by providing more government officials focused on the long-term and by creating coordination points for government action, while generally we might expect that making a body more motivated to improve the future (such as with AWV) will make it more likely to seek good information about the future.
There is also some direct evidence on voting. I think the best evidence is the paper that Will cites in his age-weighted voting post: Ahlfeldt et al. found that across 82 studied referenda, the elderly voted largely in their generational self-interest.
There are some complications. For example, there is some evidence that referenda are easier to manipulate via advertising campaigns than other polls, which might lead people to vote more in self-interest here than elsewhere.
I think this remains an open question, but it’s one I’m looking into more carefully over the next month.
That’s true, thanks for your comment. I didn’t say this exactly, but some of the policies proposed above are suggested in what I think is the same spirit. E.g., adding the submajority delay rule or age quotas to these upper houses would plausibly make them more longtermist. If you have other specific ideas about ways of reforming legislative houses that make them more longtermist I would be quite interested to hear them.
This is false. Jacy was accused of sexual harassment at Brown, never sexual assault. Some members of this community have conflated Jacy’s case with the case of another student, which for some reason shows up in google searches for Jacy’s name. This is an understandable confusion, but it is a very bad confusion to continue spreading.
thanks for the clarification on (3), gregory. i exaggerated the strength of the valence on your post.
on (1), i think we should be skeptical about self-reports of well-being given the pollyanna principle (we may be evolutionarily hard-wired to overestimate the value of our own lives).
on (2), my point was that extinction risks are rarely confined to only human beings, and events that cause human extinction will often also cause nonhuman extinction. but you’re right that for risks of exclusively human extinction we must also consider the impact of human extinction on other animals, and that impact—whatever its valence—may also outweigh the impact of the event on human well-being.
thanks, gregory. it’s valuable to have numbers on this but i have some concerns about this argument and the spirit in which it is made:
1) most arguments for x-risk reduction make the controversial assumption that the future is very positive in expectation. this argument makes the (to my mind even more) controversial assumption that an arbitrary life-year added to a presently-existing person is very positive, on average. while it might be that many relatively wealthy euro-american EAs have life-years that are very positive, on average, it’s highly questionable whether the average human has life-years that are on average positive at all, let alone very positive.
2) many global catastrophic risks and extinction risks would affect not only humans but also many other sentient beings. insofar as these x-risks are risks of the extinction of not only humans but also nonhuman animals, to make a determination of the person-affecting value of deterring x-risks we must sum the value of preventing human death with the value of preventing nonhuman death. on the widely held assumption that farmed animals and wild animals have bad lives on average, and given the population of tens of billions of presently existing farmed animals and 10^13-10^22 presently existing wild animals, the value of the extinction of presently living nonhuman beings would likely swamp the (supposedly) negative value of the extinction of presently existing human beings. many of these animals would live a short period of time, sure, but their total life-years still vastly outnumber the remaining life-years of presently existing humans. moreover, most people who accept a largely person-affecting axiology also think that it is bad when we cause people with miserable lives to exist. so on most person-affecting axiologies, we would also need to sum the disvalue of the existence of future farmed and wild animals with the person-affecting value of human extinction. this may make the person-affecting value of preventing extinction extremely negative in expectation.
3) i’m concerned about this result being touted as a finding of a “highly effective” cause. $9,600/life-year is vanishingly small in comparison to many poverty interventions, let alone animal welfare interventions (where ACE estimates that this much money could save 100k+ animals from factory farming). why does $9,600/life-year suddenly make for a highly effective intervention when we’re talking about x-risk reduction, when it isn’t highly effective in other domains?
Hi KelseyPiper, thanks so much for a thoughtful reply. I really agree with most of this—I was talking in terms of these benefits as “pure” benefits because I assumed the many costs you rightly point out up front. That is, assuming that we read Kelly’s piece and we come away with a sense of the costs and benefits that promoting diversity and inclusion in the Effective Altruism movement will have, these benefits I’ve pointed out above are “pure” because they come along for free with that labor involved in making the EA community more inclusive, and don’t require additional effort. But I understand how that could be misleading, and so I take all of your criticism on board. I also agree that this will involve priority-setting—even if we think that all of these suggestions are important and some people should be doing all of them to some extent (and especially if not), there are some that we ought to spend more time on than others as a community.
I also agree that the EA community should focus on identifying and working on the very most important things, although I might disagree slightly with how you’ve characterized that. I don’t think that we should be a community doing work that fosters “fast progress on the most important things,” because I think that we should be doing whatever does the most good in the long run, all-things-considered, and fostering “fast progress” on the most important things does not necessarily correlate with doing the most good in the long run, all-things-considered—unless we define “fosters fast progress” in a way that makes this trivial. If, for example, we could perform one of two different interventions, one which added an additional +5 well-being to all of the global poor, on average, over twenty years, for one generation, and one which added an additional +5 well-being to all of the global poor, on average, over one hundred years, for all generations, we should choose the latter intervention, even though the former is in a sense fostering faster progress. I make this point not to be pedantic, but because I think some EAs sometimes forget that what we (or many of us) are trying to do is to produce the most benefits and avert the most harm all-things-considered, and not simply make a lot of progress on some very important projects very quickly, and I think that this is quite relevant to this conversation.
To your question as to why “the magnitude of the current EA movement’s contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans,” I unfortunately haven’t written something on this and perhaps I should. But I can say a few things. I should first say that I certainly don’t think it’s obvious that the EA movement’s contributions to such harmful structures clearly will outweigh the magnitude of the effects we have on nonhumans and on the poorest humans. I only claimed that it was non-obvious that the effect size was “very small” compared to the positive effects we have. It’s something more EAs should treat as non-negligible more often than they do.
Still, here are some of the basic reasons why I think that the EA movement’s contributions to harmful social structures could well be of sufficient magnitude that we should keep constant accounting of them in our efforts to do good in the world, apart from reputation costs and instrumental epistemic benefits of inclusion and diversity work. First, the fundamental structure of society and its social, legal, and political norms profoundly shape the kinds and quality of life of all beings, as well as profoundly shaping cultural and moral mores, and so ensuring that the fundamental structure of society and these norms are good ones is crucial to ensuring that the long-run future is good, and shaping these structures for the better may make the trajectory of the future far better than the counterfactual where we shape these structures for the worse (for reasons of legal precedent, memetics, psychological and value anchoring, and more). Second, norms against harming others are very sticky—much stickier than norms favoring helping others except in certain particular cases (e.g. within one’s own family). They are psychologically sticky, whether for innate biological reasons which fix this, or for entirely cultural reasons. Which of these is true makes a difference to how much staying power this stickiness has. But whichever is true, ensuring that we set good norms in place around not causing harm to others and ensuring that these norms are stringently upheld and not violated so that we internalize them as commonsense norms seems like a good way to shape how the future goes. They are also easier to enforce through sanction, blame, and punishment, whereas norms of aid (especially effective aid) are more difficult to enforce. And our human legal and political history suggests that they are much easier to codify into law. 
So for all these reasons, ensuring that we have good norms in these areas and not violating them looks like a very important intervention for shaping the social and legal institutions of future societies. Third, there are reasons to think that our moral and political attitudes towards others are psychologically intertwined in complex ways. How we treat and think about some groups, and the norms we have around harming and helping them, seems to have an impact on how we treat and think about other groups. This seems especially important if we are interested in expanding our human moral circle to include nonhuman animals and silicon-based sentient life. If our negative attitudes, norms, laws, and practices around other humans have negative downstream effects on our attitudes, norms, laws, and practices around other animals and other, inorganic sentient beings, then the benefits of prioritizing moral development and averting harmful social structures which favor some sentient beings over others may be very important. If AI value alignment is decided as a result of a political arms race, then it seems that having a broader moral circle may significantly shape the impact of intelligent and superintelligent AI for better or worse. (Here I’m out of my depth, and my impression is that this is a matter of significant disagreement, so I certainly won’t come down hard on this.) The main point is that the downstream effects of our norms, attitudes, laws, and practices around humans, and who our society decides is worthy of full moral consideration, may have significant downstream effects in complicated and to some extent unpredictable ways. The more skeptical we are about how much we know about the future, the greater our uncertainty should be about these effects. 
I think it’s reasonable to be concerned that this may be too speculative or too optimistic about the downstream consequences of our norm-shaping on the far future, but we should be careful to remember that there are also skeptical considerations cutting in the opposite direction—measurability bias may lead us to exclude less measurable, long-term effects in favor of more measurable, short-term effects of our actions irrationally.
I am not arguing that actively averting oppressive social structures and hierarchies of dominance should be a main cause area for EAs (although that could be an upshot of this conversation, too, depending on the probabilities we assign to the hypotheses delineated above). But given the psychological, social, and legal stickiness of norms against harming, failing to make EA a more diverse and inclusive community will raise the probability of EAs harming marginalized communities and failing to create and uphold norms around not harming them. And the more influential the EA community is as a community, the more this holds true. So it seems to me that there’s a plausible case to be made that entrenching strong norms against treating marginalized communities inequitably within the EA community is an effective cause area that we should spend some of our time on, even if we should spend the majority of our time advocating for farmed and wild animals and the global poor.
Thanks so much for this thoughtful and well-researched write-up, Kelly. The changes you recommend seem extremely promising and it’s very helpful to have all of these recommendations in one place.
I think that there are some additional reasons, beyond those stated in this post, that increase the value of making the EA movement a more diverse and inclusive community. First, if the EA movement genuinely aspires to cause-neutrality, then we should care about benefits that accrue to others regardless of who these other people are and independent of what the causal route to these benefits is. As such, we should also care about the benefits that becoming a diverse and inclusive movement would have for women, people of color, and disabled and trans people in and outside of the community. If, as you argue and as is antecedently quite plausible, the EA movement is essentially engaging in the very same discriminatory practices in our movement-building as people tend to engage in everywhere else, then as a result we are artificially boosting the prestige, visibility, and status perception of white, cis, straight, able-bodied men, we are creating a community that is less sensitive to stereotype threat and to micro- and macroaggressions than it otherwise could be, and we are giving legitimacy to stereotypes and to business and nonprofit models which arbitrarily exclude many people. All of this harms or reduces the status and power of women, people of color, and disabled and trans people and furthers discrimination against them—which is a real and significant cost to organizing in this way.
Second, even if one thinks that this effect size will be very small compared to the good that the EA movement is doing (which is less obvious than EAs sometimes assume without argument), 1) these are still pure benefits, which strengthens the case for and the reasons favoring improving the EA community in the respects you argue, and 2) if the EA community fails to become more diverse and inclusive we’ll suffer reputation costs in the media, in academia, among progressives, and in the nonprofit world for being a community that is exclusionary. This would come at a significant cost to our potential to build a large and sustainable movement and to create strong, elite networks and ties. And at this point, this worry is very far from a mere hypothetical:
I think we have our work cut out for us if we want to build a better reputation with the world outside of our (presently rather small) community, and that the courses of action you recommend will go quite a long way to getting us there.
Thanks for sharing! That’s good to know.
I have a good friend who is a thoroughgoing hedonistic act utilitarian and a moral anti-realist (I might come to accept this conjunction myself). He’s a Humean about the truth of utilitarianism. That is, he thinks that utilitarianism is what an infinite number of perfectly rational agents would converge upon given an infinite period of time. Basically, he thinks it’s the most rational way to act, because it’s essentially a universalization of what everyone wants.