Jamie is Managing Director at Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history.
Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
I’m excited to see this post! Thanks for the suggestions. A few I hadn’t considered. In general though, this is an area I’ve thought about in various ways, at various points, so here’s my list of an additional “9 history topics it might be very valuable to investigate” (with some overlap with your list)!
I’ll start with some examples of categories of historical projects we’ve worked on at Sentience Institute.
1. The history of past social movements
Some overlap with your categories 3 and 8. This is to inform social movement strategy. At Sentience Institute, we’ve been focusing on movements that are 1) relatively recent, and 2) driven by allies, rather than the intended beneficiaries of the movement. This is because we are focusing on strategic lessons for the farmed animal movement, although I’ve recently been thinking about how it is applicable to other forms of moral circle expansion work, e.g. for artificial sentience (I have a literature review of writings on this coming out soonish).
Conducted by SI:
Kelly Anthis, “Social Movement Lessons From the British Antislavery Movement: Focused on Applications to the Movement Against Animal Farming” (December 1, 2017)
Me, “Social Movement Lessons From the US Anti-Abortion Movement” (November 26, 2019)
Me, “Social Movement Lessons from the US Anti-Death Penalty Movement” (May 22, 2020)
Me, “Social Movement Lessons from the US Prisoners’ Rights Movement” (should be out before the end of the month)
Not conducted by SI, but highly relevant:
Włodzimierz Gogłoza, “Abolitionist outrage: what the vegan movement can learn from anti-slavery abolitionism in the 19th century” (January 20, 2020)
Animal Charity Evaluators, “Children’s Rights” (February 2018)
Animal Charity Evaluators, “Environmentalism” (February 2018)
I’ve written a fuller post about “What Can the Farmed Animal Movement Learn from History” which discusses some methodological considerations; some of the discussion could be relevant to almost any “What can we learn about X from history” questions of interest to the EA movement. (As a talk here)
2. The history of new technologies, the industries around them, and efforts to regulate them.
This overlaps with your category 4. Sentience Institute’s interest has been in learning strategic lessons for the field of cellular agriculture, cultured meat, and highly meat-like plant-based foods, to increase the likelihood that these technologies are successfully brought to market and to maximise the effects that these technologies have on displacing animal products.
Conducted by SI:
J. Mohorčich, “What can nuclear power teach us about the institutional adoption of clean meat?” (November 28, 2017)
J. Mohorčich, “What can the adoption of GM foods teach us about the adoption of other food technologies?” (June 20, 2018)
J. Mohorčich, “What can biofuel commercialization teach us about scale, failure, and success in biotechnology?” (August 21, 2019)
3. Assessing the tractability of changing the course of human history by looking at historical trajectory shifts (or attempts at them).
Covered briefly in this post I wrote on “How tractable is changing the course of history?” (March 12, 2019). I didn’t do it very systematically. I was trying to establish the extent to which the major historical trajectory shifts that I examined were influenced by 1) thoughtful actors, 2) hard-to-influence indirect or long-term factors, 3) contingency, i.e. luck plus hard-to-influence snap decisions by other actors.
One approach could be to create (crowdsource?) a large list of possible historical trajectory shifts to investigate, then pick from it based on: 1) a balance of types of shift, covering military, technological, and social trajectory shifts and aiming for representativeness, 2) a balance of magnitudes of the shifts, 3) time since the shift, and 4) availability of evidence.
Some useful feedback and suggestions I received when I presented this work at a workshop run by the Global Priorities Institute:
Gustav Arrhenius of the Institute for Futures Studies suggested to me that there was more rigorous discussion of grand historical theories than I was implying in that post. He recommended reading works by Pontus Strimling of the same institute, plus work by Jerry Cohen on Marxism and by Marvin Harris on cultural materialism.
Christian Tarsney (GPI) suggested that there is a greater case for tractability in shaping the aftermath of big historical events (e.g. world wars) than in causing those major events to occur.
William MacAskill (GPI) suggested that rather than seeking out any and all types of trajectory shifts, it might be more useful to look specifically for times where individuals knew what they wanted to change, and then to investigate whether they were able to do so. E.g. what’s the “EA” ask for people at the time of the French Revolution? It’s hard to know what would have been useful. There might be cases to study where people had clearer ideas about how to shape the world for the better, e.g. in contributing to the writing of the Bible.
Some other topics I’ve thought about much more briefly:
4. The history of the growth, influence, collapse, etc. of various intellectual and academic movements.
Overlaps with your category 3. I think of this as quite different to the history of social movements. Separately from direct advocacy efforts, EA is full of ideas for research fields that could be built or developed. The ones I’m most familiar with are “global priorities research,” “welfare biology,” and “AI welfare science,” but I’m sure there are either more now, or there will be soon, as EAs explore new areas. For example, there were new suggestions in David Althaus and Tobias Baumann, “Reducing long-term risks from malevolent actors” (April 29, 2020). So working out how to most effectively encourage the growth and success of research fields seems likely to be helpful.
Various kinds of historical research could help to clarify particular risk factors for s-risks that might materialise in the future. These could each be categories on their own. Examples include:
5. To what extent have past societies prioritised the reduction of risks of high amounts of suffering and how successful have these efforts been?
6. Historical studies of “polarisation and divergence of values.”
7. “Case studies of cooperation failures” and other factors affecting the “likelihood and nature of conflict” (some overlap with your category 5; suggested by CLR. I had a conversation with Ashwin Acharya, who also seemed interested in this avenue of research).
8. Study how other instances of formal research have influenced (or failed to influence) critical real-world decisions (suggested by CLR.)
9. Perhaps lower priority, but broader studies of the history of various institutions
The focus here would be on building an understanding of the factors that influence their durability. E.g. at a talk at a GPI workshop I attended, someone (Philip Trammell? Anders Sandberg?) noted a number of institution types that have had some examples endure for centuries: churches, religions, royalty, militaries, banks, and corporations. Why have these institution types been able to last where others have not? And within those categories, why have some instances lasted where others have not?
Other comments and caveats:
Hopefully SI’s work offers a second example of an exception to the “recurring theme” you note, in that 1) SI’s case studies are effectively a “deeper or more rigorous follow-up analysis” after ACE’s social movement case study project—if anything, I worry that they’re too deep and rigorous and that this has drastically cut down the number of people who put the time into reading them, and 2) I at least had an undergraduate degree in history :D
On the “background in history” thing, my guess is that social scientists will usually actually be better placed to do this sort of work, rather than historians. (Some relevant considerations here)
Any of these topics could probably be covered briefly, with low rigour, in roughly one month’s worth of work (the timeframe of my tractability post, for example), or could literally use up several lifetimes’ worth of work. It’s a tough call to decide how much time is worth spending on each case study. Some sort of time-capping approach could be useful.
Relatedly, at some point, you face the decision of how to aggregate findings and analyse across different movements. I think we’re close to this with the first two research avenues I mention that we’ve been pursuing at SI. So if anyone reading this has ideas about how to pursue this further, I’d be interested in having a chat!
Many of the topics discussed here are relevant to Sentience Institute’s research interests. If you share those interests, you could apply for our current researcher opening.
To write this post I’ve essentially just looked back through various notes I have, rather than trying to start from scratch and think up any and all topics that could be useful. So there’s probably lots we’re both missing, and I echo the call for people to think about areas where historical research could be useful.
It’s long been on my to-do list to go through GPI and CLR’s research agendas more thoroughly to work out if there are other suggestions for historical research on there. I haven’t done that to make this post so I may have missed things.
I was told that the Centre for the Governance of AI’s research agenda has lots of suggestions of historical case studies that could be useful, though I haven’t looked through this yet.
These topics probably vary widely in terms of the cost-effectiveness of time spent researching them. Of course, this will depend on your views on cause prioritisation.
Once I’ve looked into the above lists and thought about this more, I might improve this comment and make my own top-level post at some point. I was planning to do that at some point anyway but you forced my hand (in a good way) by making your own post.
I’m definitely interested in your interest in research for topic 10 on your list, so please keep me in the loop!
I’m very glad that people feel reluctant to express some of those opinions, especially in the unexplained, offensive format in which they were expressed in those answers.
Also, some of the comments have very similar wording, which makes me suspect that someone/some people inputted multiple entries.
Given some of the issues raised on this thread, I suggest that either 80K should broaden its role and hire (lots) more staff to make this possible, or that new organisations should be set up to fill the gaps.
I’m glad to see the discussion of the “two visions.” I would guess that there is a discrepancy between how 80K thinks of its role (the second vision, focusing on key bottlenecks) and how most people, especially people newer to the EA community or not involved in EA meta orgs, think of 80K’s role (the first vision, focusing on broader social impact career advice).
When I come across someone who cares about making the world a better place / maximising their impact who is looking for career advice, I either point them towards 80K or discuss ideas with them that have almost entirely come from 80K. It may well be that 80K doesn’t see some of those people that I have conversations with as their intended target audience, but since 80K is the only EA org focusing on careers advice, I default to those recommendations. I would guess that many other people do the same.
A crude summary of some of the ideas here would be that increasing “inclination” is more important than increasing awareness from a long-term perspective. But if 80K is demoralising people new to the movement because it focuses on the second vision of its role over the first vision, then this probably decreases inclination quite a lot and so has negative long-term implications (even if in the short-term, it has higher impact).
Although I haven’t thoroughly looked at impact or cost-effectiveness metrics for 80K and other meta orgs, there are several factors that make me think that the EA community should prioritise devoting more resources to filling the gaps in the area of career advice:
1) Conversations about career decisions happen pretty regularly. Even if the most impactful thing for the handful of individuals working at 80K is indeed to focus on the narrower vision of their role, it seems important that other individuals work on the broader conception, so that these regular conversations that are happening anyway can be relatively informed.
2) Given that 80K focuses on the narrower vision, there is probably quite a lot of work that could be done relatively easily and be quite impactful if people were working on the broader vision (i.e. low-hanging fruit).
3) We talk about EA movement-building not being funding constrained. If that’s the case, then presumably it’d be possible to create more roles, be that at 80K or at new organisations.
4) If I remember correctly, the EA survey suggests that 80K is an important entry point for lots of people into EA. It’s also a high-fidelity form of communication about EA ideas/research.
5) Generally, I can think of loads of opportunities for impact that a much larger 80K (or additional organisations also working at the intersection of EA and careers advice/decision-making) could pursue, which seem plausibly higher-impact than some other ways that funds have been used for EA movement building:
Research/website like 80K’s current career profile reviews, but including less competitive career paths (perhaps this would need to focus on quantity over quality and “breadth” over depth)
Career coaching calls (available all year round, for anyone focusing on any of the higher priority EA cause areas)
Regular career workshops, perhaps run through additional employees at local groups who are trained in how to run them, or perhaps as a single international organisation. This seems like a high fidelity method of EA outreach; if marketed well, I suspect these would get a lot of take-up. Targeted marketing to groups which are demographically under-represented in EA might also be a good way to start addressing diversity/inclusion/elitism concerns.
Research/website/podcasts etc. like 80K’s current work, but focusing on specific cause areas (e.g. animal advocacy broadly, including both farmed animals and wild animals)
Research/website/podcasts etc. like 80K’s current work, but focused on high-school-age students, before they’ve made choices which significantly narrow down their options (like choosing their degree).
In short, 80K does some amazing and important work, but there seems to be lots of space for the EA community to do more in the broad area of the intersection of EA and careers advice or decision-making. So it seems to me that either 80K should prioritise hiring more people to take up some of these opportunities, or EA as a movement should prioritise creating new organisations to take them up.
I also have the impression that some of the most productive people I know (within the EA community specifically) work very long hours.
Possibly unimportant, but what happened to EA Ventures? I stumbled across this because a paper by Roman V. Yampolskiy notes: “The author is grateful to Elon Musk and the Future of Life Institute and to Jaan Tallinn and Effective Altruism Ventures for partially funding his work on AI Safety.” The EA Ventures site now just redirects to CEA. There’s also a subsequent thread about “EA Ventures Request for Projects + Update.” Did it cease to exist after that? Why?
Initial feedback on the (first?) Episode with Chana: I liked the idea and know Chana has interesting things to say so decided to listen.
Was fun and kind of interesting but felt like I wasn’t sure what I was getting out of it.
I felt like it wasn’t optimising for either ‘usefulness’ or ‘fun/relaxation’. E.g. I didn’t feel like I’d learned anything particularly surprising or useful by halfway through the episode, and I felt like I was having less fun than I would by watching Netflix or chatting to my partner… so I stopped and went and did those things instead.
To be fair, this is a reason I don’t listen to podcasts all that much in general, but since this moved further away from obvious ‘usefulness’ than a usual 80k podcast, it made it seem less worthwhile.
Low-confidence initial impression though, and I’ll probably listen to others!
To add in some ‘empirical’ evidence: Over the past few months, I’ve read 153 answers to the question “What is your strongest objection to the argument(s) and claim(s) in the video?” in response to “Can we make the future a million years from now go better?” by Rational Animations, and 181 in response to MacAskill’s TED talk, “What are the most important moral problems of our time?”.
I don’t remember the concern that you highlight coming up very much if at all. I did note “Please focus on the core argument of the video — either ‘We can make future lives go better’, or the framework for prioritising pressing problems (from ~2mins onwards in either video)”, but I still would have expected this objection to come up a bunch if it was a particularly prevalent concern. For example, I got quite a lot of answers commenting that people didn’t believe it was fair/good/right/effective/etc to prioritise issues that affect the future when there are people alive suffering today, even though this isn’t a particularly relevant critique to the core argument of either of the videos.
If someone wanted to read through the dataset and categorise responses or some such, I’d be happy to provide the anonymised responses. I did that with my answers from last year, which were just on the MacAskill video and didn’t have the additional prompt about focusing on the core argument, but probably won’t do it this year.
(This was as part of the application process to Leaf’s Changemakers Fellowship, so the answers were all from smart UK-based teenagers.)
Makes sense on (1). I agree that this kind of methodology is not very externally legible and depends heavily on cause prioritisation, sub-cause prioritisation, your view on the most impactful interventions, etc. I think it’s worth tracking for internal decision-making even if external stakeholders might not agree with all the ratings and decisions. (The system I came up with for Animal Advocacy Careers’ impact evaluation suffered similar issues within animal advocacy.)
For (2), I’m not sure why you don’t think 80k does this. E.g. the page on “What are the most pressing world problems?” has the following opening paragraph:
We aim to list issues where each additional person can have the most positive impact. So we focus on problems that others neglect, which are solvable, and which are unusually big in scale, often because they could affect many future generations — such as existential risks. This makes our list different from those you might find elsewhere.
Then the actual ranking is very clear: AI 1, pandemics 2, nuclear war 3, etc.
And the advising page says quite prominently “We’re most helpful for people who… Are interested in the problems we think are most pressing, which you can read about in our problem profiles.” The FAQ on “What are you looking for in the application?” mentions that one criterion is “Are interested in working on our pressing problems”.
Of course it would be possible to make it more prominent, but it seems like they’ve put these things pretty clearly on the front.
It seems pretty reasonable to me that 80k would want to talk to people who seem promising but don’t share all the same cause prio views as them; supporting people to think through cause prio seems like a big way they can add value. So I wouldn’t expect them to try to actively deter people who sign up and seem worth advising but, despite the clear labelling on the advising page, don’t already share the same cause prio rankings as 80k. You also suggest “when people do apply/email, it’s worth making that sort of caveat as well”, and that seems in the active deterrence ballpark to me; to the effect of ‘hey are you sure you want this call?’
+1 for writing a concise document outlining your needs. +1 for personally liking knowing someone is taking notes on what you’re saying.
I find it’s helpful to be especially clear about the stage of completion that something is. E.g. I’ve given detailed feedback on draft documents beforehand, only to realise by the end that the document was intended more as an incomplete brainstorm than a finished product. And I’ve failed to make that clear to others before and received unnecessarily specific feedback.
I suspect some of the advocates involved in the animal welfare victories listed here might be taken aback to see them listed as “in EA”. The movements for animal rights and animal welfare long predate effective altruism. What makes these things “in EA”?
Thanks for this post Michael, I think I agree with everything here! Though if anyone thinks we can “confidently dismiss the above longtermist argument for farmed animal welfare work, without needing to do this research” I’d be interested to hear why.
I won’t be pursuing those questions myself, as I’m busy with other projects
I just wanted to note that Sentience Institute is pursuing some of this sort of research, but (1) we definitely won’t be able to pursue all of these things any time soon, (2) not that much of our work focuses specifically on these cause prioritisation questions—we often focus on working out how to make concrete progress on the problems, assuming you agree that MCE is important. That said, I think a lot of research can achieve both goals. E.g. my colleague, Ali, is finishing up a piece of research that fits squarely in “4a. Between-subjects experiments… focused on the above questions” currently titled “The impact of perspective taking on attitudes and prosocial behaviours towards non-human outgroups.” And the more explicit cause prioritisation research would still fit neatly within our interests. SI is primarily funding constrained, so if any funders reading this are especially interested in this sort of research, they should feel free to reach out to us.
Contact the Sentience Institute and/or me to discuss ideas
Thanks for this note! Agreed. My email is jamie@sentienceinstitute.org if anyone does want to discuss these ideas or send me draft writeups for review.
I enjoyed this post. And I appreciated some of the explanation in the intro. E.g. I can imagine this list being inspiring for donors (and hadn’t thought about it like that before).
But is it much different from a list of (non-mega) project ideas?
E.g. see this comment:
“Rethink Priorities’ first incubated charity, Insect Welfare Project (provisional name) might be an example of launching something that eventually could absorb $100M when it finds an effective intervention and scales it. The Shrimp Welfare Project might be another example.”
You could apply this logic to almost any animal charity that’s trying to find interventions that are both cost-effective and scalable.
Once you adopt this perspective, the question could be switched from “which megaproject ideas can we think of?” to “how rapidly will we get diminishing returns on further investment in various plausibly cost-effective project ideas?”
The value of graduate training for EA researchers: researchers seem to think it is worthwhile
Imagine the average “generalist” researcher employed by an effective altruist / longtermist nonprofit with a substantial research component (e.g. Open Philanthropy, Founders’ Pledge, Rethink Priorities, Center on Long-Term Risk). Let’s say that, if they start their research career with an undergraduate/bachelor’s degree in a relevant field but no graduate training, each year of full-time work, they produce one “unit” of impact.
In a short Google Form, posted on the Effective Altruism Researchers and EA Academia Facebook groups, I provided the above paragraph and then asked: “If, as well as an undergraduate/bachelor’s degree, they start their research career at EA nonprofits with a master’s degree in a relevant field, how many “units” of impact do you expect that they would produce each year for the first ~10 years of work?”* The average response, from the 8 respondents, was 1.7.
I also asked: “If, as well as an undergraduate/bachelor’s degree, they start their research career at EA nonprofits with a PhD in a relevant field, how many “units” of impact do you expect that they would produce each year for the first ~10 years of work?”* The average response was 3.9.
I also asked people whether they were a researcher at a nonprofit, in academia, or neither, and whether they had graduate training themselves or not.** Unsurprisingly, researchers in academia rated the value of graduate training more highly than researchers in nonprofits (2.0 and 4.3 for each year with a master’s and a PhD, respectively, compared to 1.2 and 1.7), as did respondents with graduate training themselves, relative to respondents without graduate training (2.0 and 5.2 compared to 1.2 and 1.7).
I asked a free-text response question: “Do you think that the value of graduate training would increase/compound, or decrease/discount, the further they got into their career?” 4 respondents wrote that the value of graduate training would decrease/discount the further they got into their career, but didn’t provide any explanations for this reasoning. This was also my expectation; my reasoning was that one or more years of graduate training, which would likely only be partly relevant to the nonprofit work that you would be doing, would become relatively less important later on, since your knowledge, skills, and connections would have increased through your work in nonprofits. However, two respondents argued that the value of graduate training would increase/compound. One added: “People without PhDs are sadly often overlooked for good research positions and also under-respected relative to their skill. If they don’t have a PhD they will almost never end up in a senior research position.” The other noted that it would “increase/compound, particularly if they do things other than anonymous research, e.g. they build an impressive CV, get invited to conferences because of their track record. If one doesn’t have a PhD, the extent of this is limited, mostly unless one fits a high-credibility non-academic profile, e.g. founded an organization.”
I did some simple modelling / back-of-the-envelope calculations to estimate the value of different pathways, accounting for 1) the multipliers on the value of your output as discussed in the questions on the form and 2) the time lost on graduate education.*** Tl;dr: with the multiplier values suggested by the form respondents, graduate education clearly looks worthwhile for early-career researchers working in EA nonprofits, assuming they will work in an EA research nonprofit for the rest of their career. It gets a little more complex if you try to work it out in financial terms, e.g. accounting for tuition fees.
For my own situation (with a couple of years of experience in an EA research role, no graduate training), I had guessed multipliers of 1.08 and 1.12 on the value of my research in the ~10 years after completing graduate training, for a master’s and PhD, respectively. For the remaining years of a research career after that, I had estimated 1.01 and 1.02. Under these assumptions, the total output of a nonprofit research career with or without a master’s looks nearly identical for me; the output after completing a PhD looks somewhat worse. However, with the average values from the Google form then the output looks much better with a master’s than without and with a PhD than with just a master’s. Using the more pessimistic values from other EA nonprofit researchers, or respondents without graduate training, the order is still undergrad only < master’s < PhD, though the differences are smaller. In my case, tuition fees seem unlikely to affect these calculations much (see the notes on the rough models sheet).
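To make the shape of these back-of-the-envelope calculations concrete, here is a minimal sketch in code. This is an illustration, not the actual spreadsheet model: the 1.7 and 3.9 multipliers are the survey averages reported above, while the 40-year career length, the assumption of zero output during training, and the flat multiplier of 1.0 after the first 10 post-training years are my own simplifications (mirroring the caveats in the notes below).

```python
# Rough model: total career "units" of impact for an EA nonprofit researcher.
# Assumed (not authoritative): 40-year career, zero output while training,
# the survey-average multiplier for the first 10 post-training years, then 1.0.

def career_units(training_years, first10_multiplier,
                 rest_multiplier=1.0, career_years=40):
    working_years = career_years - training_years
    first_phase = min(10, working_years)
    return (first_phase * first10_multiplier
            + (working_years - first_phase) * rest_multiplier)

bachelors = career_units(0, 1.0)  # baseline: 40 units
masters = career_units(2, 1.7)    # ~2 years training, survey average 1.7
phd = career_units(6, 3.9)        # ~6 years training, survey average 3.9

print(bachelors, masters, phd)  # 40.0 45.0 63.0
```

Under these assumptions the ordering matches the conclusion above (undergrad only < master’s < PhD), and the function makes it easy to substitute the more pessimistic multipliers from nonprofit researchers, or a longer/shorter career, to see how sensitive the ranking is.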
Of course, which option is best for any individual also depends on numerous other career strategy considerations. For example, let’s think about “option value.” Which options are you likely to pursue if research in EA nonprofits doesn’t work out or you decide to try something else? Pursuing graduate training might enable you to test your fit with academia and pivot towards that path if it seems promising, but if your next best option is some role in a nonprofit that is unrelated to research (e.g. fundraising), then graduate education might not be as valuable.
I decided to post here partly in case others would benefit, and partly because I’m interested in feedback on/critiques of my reasoning, so please feel free to be critical in the comments!
*For both questions, I noted: “There are many complications and moderating factors for the questions below, but answering assuming the ‘average’ for all other unspecified variables could still be helpful” and “1 = the same as if they just had a bachelor’s; numbers below 1 represent reduced impact, numbers above 1 represent increased impact.”
**These questions were pretty simplified, not permitting people to select multiple options.
*** Here, for simplicity, I assumed that:
- You would produce no value while doing your graduate training, which seems likely to be false, especially during (the later years of) a PhD.
- The value of 1 year after your graduate education was the same as 1 year before retirement, which seems likely to be false.
Great write up. I’m a fan of the systematic thinking and research. It’s interesting to compare how you approached it to how Charity Entrepreneurship are looking into non-profit startup opportunities. I’m interested in how you weighed up the decision criteria; was this just intuitive, based off the rest of the research, or did you have another approach?
One area where I might diverge from your approach here is in how you conceptualise expected social impact. I get the impression (mainly from your use of “Filter #2: Social Impact—Comparing Animal Suffering”) that you primarily conceptualise the impact of a startup in terms of the products that the startup produces and the animal products that it replaces counterfactually. But a broader conceptualisation of the impact of a startup might include its contribution (positive or negative) to the overall eventual success (i.e. market share) of plant-based meat and/or clean meat. In the long term, this could well matter more for total impact.
So a startup which introduces a cellular agriculture product replacing an animal product that causes relatively small amounts of suffering might still be far more impactful than some other startup ideas (e.g. a startup that brings a good clean chicken product to market at a better price point than its competitors) if it helps to bring cellular agriculture products to market in a way that has wider public support. Although each of these examples has a long list of pros and cons, this specific goal might be better achieved by:
1) Focusing on animal products which aren’t actually eaten by humans, e.g. leather, pet food
2) Focusing on products which are more widely condemned by the public, e.g. foie gras
3) Focusing on commercialising the products in countries which are more likely to be supportive, even if the total market is smaller, e.g. Singapore (see here).
In each of these examples, bringing the products to market in those specific contexts might increase consumer acceptance of the higher priority products, since they (or lots of people in other countries) will already be using cellular agriculture products.
A different approach might be to start a B2B startup which focuses on providing a cheap—but also stable and secure—specific ingredient, e.g. growth media (this one overlaps with some of your suggestions). This might require the business to focus on selling to a broader customer base, including medical companies and scientific researchers, to ensure that its business model isn’t wholly dependent on the (potentially fluctuating) fortunes of the rest of the clean meat supply chain.
Potentially these strategic concerns might matter less for plant-based foods. I can think of ways it would influence decision-making though, like focusing heavily on price, so that less well-off people can access plant-based foods, to reduce the risk that plant-based food becomes confined to well-off people and specific demographics (hippies/hipsters) due to the real barrier that price puts up and/or due to public perceptions and identity issues.
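The decomposition I have in mind could be sketched as a toy two-component model: direct suffering averted by counterfactual product replacement, plus a “strategic” term for the startup’s contribution to the movement’s eventual success. The function and all parameters below are illustrative assumptions of mine, not anything from the original post:

```python
# Hypothetical two-component model of a startup's expected impact:
# a direct term (suffering averted by product replacement) plus a
# strategic term (probability of shifting public acceptance times the
# long-run stakes). All numbers are illustrative, not real estimates.

def expected_impact(direct_suffering_averted,
                    prob_shifts_acceptance,
                    long_run_suffering_at_stake):
    strategic = prob_shifts_acceptance * long_run_suffering_at_stake
    return direct_suffering_averted + strategic

# A pet-food startup: small direct effect, but some chance of
# building broad public acceptance of cellular agriculture.
pet_food = expected_impact(1.0, 0.02, 1000.0)        # 21.0
# A cheaper clean chicken product: larger direct effect, assumed
# negligible strategic contribution.
clean_chicken = expected_impact(10.0, 0.0, 1000.0)   # 10.0
```

On these made-up numbers the strategic term dominates, which is the shape of the argument above: a product replacing relatively little direct suffering can still be the higher-impact bet.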
Generally, I’m arguing for considering a long-term “strategic” perspective to thinking about the social impact of start-ups. J at Sentience Institute has written two technology adoption studies on nuclear power and GM foods which I think are helpful for thinking about these sorts of perspectives. He’s currently writing a third, on biofuels—I imagine it will be similarly useful, and that we’ll start to see trends and patterns across the technology adoption studies as he does more.
Less anecdotal but only indirectly relevant and also hard to distinguish causation from correlation:
Ctrl+f for “Individuals who participate in consumer action are more likely to participate in other forms of activism” here
https://www.sentienceinstitute.org/fair-trade#consumer-action-and-individual-behavioral-change
Hey, glad you liked the post! I don’t really see a tradeoff between extinction risk reduction and moral circle expansion, except insofar as we have limited time and resources to make progress on each. Maybe I’m missing something?
When it comes to limited time and resources, I’m not too worried about that at this stage. My guess is that by reaching out to new (academic) audiences, we can actually increase the total resources and community capital dedicated to longtermist topics in general. Some individuals might have tough decisions to face about where they can have the most positive impact, but that’s just in the nature of there being lots of important problems we could plausibly work on.
On the more general category of s-risks vs extinction risks, it seems to be pretty unanimous that people focused on s-risks advocate cooperation between these groups. E.g. see Tobias Baumann’s “Common ground for longtermists” and CLR’s publications on “Cooperation & Decision Theory”. I’ve seen less about this from people focused on extinction risks, but I might just not have been paying enough attention.
I initially found myself nodding along with this post, but I then realised I didn’t really understand what point you were trying to make. Here are some things I think you argue for:
Theoretically, EA could be either big tent or small tent
To the extent there is a meaningful distinction, it seems better in general for EA to aim to be big tent
Now is a particularly important time to aim for EA to be big tent
Here are some things that we could do to help make EA more big tent.
Am I right in thinking these are the core arguments?
A more important concern of mine with this post is that I don’t really see any evidence or arguments presented for any of these four things. I think your writing style is nice, but I’m not sure why (apart from something to do with social norms or deference) community builders should update their views in the directions you’re advocating for.
Thanks very much for doing this work. I’m glad to see other people taking an interest in historical evidence to inform questions about global priorities and to inform strategies for moral circle expansion.
I think this is an impressive overview to have created in a short period of time. And I like the efforts to explicitly assess causation, resisting the ever-present temptation to tell a chronological narrative and assume causal relationships where there is little evidence to suggest them.
Most of Sentience Institute’s case studies to date have focused primarily on one country, or a comparison between two countries. I found the big picture, international consideration interesting. In general, I’m updating slightly towards the importance of international pressure in causing further change and a strategy of, as you suggest, concentrating resources in particular promising locations so that representatives of those countries might sooner become international advocates. I was finding tentative evidence for similar claims in my case study of the US anti-death penalty movement, which includes some comparison to Europe (and briefer comparison to the wider international situation). If you haven’t read that, you may find that interesting.
One other thing I was quite excited about is the following comment:
Political short-termism usually works against future generations, but it can work for future generations if politicians’ and lobbyists’ concern with the short term keeps them from strongly opposing commitments to one day care about future generations… For future generations, this might look like advocating for policies, such as committees or funds for future generations, that will not be implemented for a decade or more.
I wasn’t quite sure how this followed from the historical evidence that you examine, but I thought it was a cool argument, and something I hadn’t thought about explicitly in terms of how longtermist moral circle expansion efforts might look different from neartermist work on animal advocacy or other cause areas that relate to MCE. If we care about, say, maximising the chances that factory farming ends, rather than helping animals as much as possible within the next 10 (or 100) years, then we might be able to effectively trade immediacy for increased radicalism (or durability or some other key priority).
————
Of course, with a post of this size, there are a lot of nitpicks and comments it’s tempting to offer. But I’ll avoid those and focus on what I think is my most substantial concern. Also, I’ll note that I read this post spread over several evenings, so if this is a little incoherent or inaccurate at times, I apologise!
It seems like you’re pursuing two separate goals in this research:
1. Identifying/assessing factors influencing the success of ally-based social movements (i.e. social movements whose intended beneficiaries are not the same as the advocates) in order to draw strategic implications for advocacy for future generations, which is an ally-based social movement.
2. Identifying/assessing factors that affect the interests of future generations.
Ideally, I don’t think you would mix these, e.g. in the inclusion criteria (i.e. the selection of the case studies) or in creating a single model that blurs the two goals.
In line with goal (1), you have included several ally-based social movements: anti-slavery (mostly free people advocating for / deciding on the fate of slaves) and environmentalism (present-day humans advocating for / deciding on the fate of the environment). However, you also include movements that are not ally-based — oppressed peoples seeking to empower themselves through democratisation and people advocating for regulations on genetic engineering in order to protect themselves and human society more broadly. Since no justification was provided for the inclusion of democratisation, I was initially confused by this choice, but some clarity was offered by the justification for the inclusion of genetic engineering:
The governance of genetic engineering has reduced a significant threat to future generations: certain engineered pathogens could bring about human extinction, keeping future generations from existing.
Hence, I infer that goal (2) influenced the case study selection. This is supported by the justification for the inclusion of the environmentalism movement, which seems to mix (1) and (2):
environmental advocates have achieved significant successes for future generations, as well as other entities that have no direct political power: ecosystems.
I think this critique of the methodology is quite important, because it directly bears on one of the main arguments you advance in this research: “inclusive values” were not that important in driving change, which suggests that further MCE is not as likely as a simple extrapolation from the trend towards expanded moral circles in the past few centuries might imply.
Including a focus on movements that have only accidentally benefited future generations and then noting that the changes occurred mainly because they benefited powerful groups (present humans) rather than because people intended to help future generations seems tautological. (I think this might be a pretty uncharitable interpretation of your intentions; apologies if so, but hopefully it helps to make the point.) Hence, I think it’s more valuable to evaluate movements by their own goals, or at least by their effects on their intended beneficiaries (e.g. the environment rather than future generations for the environmentalism movement, e.g. present generations for genetic engineering).
By comparison, in selecting Sentience Institute’s case studies, we have focused on ally-based movements (with a secondary important consideration being chronological proximity). Hence, our case studies have been: anti-slavery, anti-abortion, anti-death penalty, and prisoners’ rights (though the latter turned out to be less “ally-based” than I was expecting). I’ve also got one on the Fair Trade movement underway. These were chosen principally for comparability with the farmed animal movement but are similarly if not equally applicable to advocacy for future generations.
Although I see this concern as weakening the case that you put forward, I do think weak evidence is useful, and I’ve still updated my views a little away from the tractability of changing the course of history and likelihood of further MCE.
Thanks again for this very cool research!
Thanks for the bullet point list of recurring themes. I’d be interested in whether this series has led to substantial view updates among 80k’s staff?
I find it hard to know how much weight to place on the advice without knowing something about the person’s background, e.g. cause area they work in, organisation they work for, type of role they have, and engagement with EA.
Given that effective altruism is “a project that aims to find the best ways to help others, and put them into practice”[1] it seems surprisingly rare to me that people actually do the hard work of:
(Systematically) exploring cause areas
Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
Sharing their list and reasons publicly.[2]
The lists I can think of that do this best are those by 80,000 Hours, Open Philanthropy, and CEARCH.
Related things I appreciate, but aren’t quite what I’m envisioning:
Tools and models like those by Rethink Priorities and Mercy For Animals, though they’re less focused on explanation of specific prioritisation decisions.
Longlists of causes by Nuno Sempere and CEARCH, though these don’t provide ratings, rankings, and reasoning.
Various posts pitching a single cause area and giving reasons to consider it a top priority without integrating it into an individual or organisation’s broader prioritisation process.
There are also some lists of cause area priorities from outside effective altruism / the importance, neglectedness, tractability framework, although these often lack any explicit methodology, e.g. the UN, World Economic Forum, or the Copenhagen Consensus.
If you know of other public writeups and explanations of ranked lists, please share them in the comments![3]
Of course, this is only one definition. But my impression is that many definitions share some focus on cause prioritisation, or first working out what doing the most good actually means.
I’m a hypocrite of course, because my own thoughts on cause prioritisation are scattered across various docs, spreadsheets, long-forgotten corners of my brain… and not at all systematic or thorough. I think I roughly:
- Came at effective altruism with a hypothesis of a top cause area based on arbitrary and contingent factors from my youth/adolescence (ending factory farming),
- Had that hypothesis worn down by various information and arguments I encountered, and changed my views on the top causes,
- Didn’t ever go back and do a systematic cause prioritisation exercise from first principles (e.g. evaluating cause candidates from a long-list that includes ‘not-core-EA™-cause-areas’ or based on criteria other than ITN).
I suspect this is pretty common. I also worry people are deferring too much on what is perhaps the most fundamental question of the EA project.
Rough and informal explanations welcome. I’d especially welcome any suggestions that come from a different methodology or set of worldviews & assumptions to 80k and Open Phil. I ask partly because I’d like to be able to share multiple different perspectives when I introduce people to cause prioritisation to avoid creating pressure to defer to a single list.