Jamie is Managing Director at Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history.
Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
I tried doing this a while back. Some things I think I worried about at the time:
(1) disheartening people excessively by sending them scores that seem very low/brutal, especially if you use an unusual scoring methodology.
(2) causing yourself more time costs than it seems like at first, because (a) you find yourself needing to add caveats or manually hide some info to make it less disheartening to people, and (b) people ask you follow-up questions.
(3) exposing yourself to some sort of unknown legal risk by saying something not-legally-defensible about the candidate or your decision-making.
(1) turned out to be pretty justified, I think; e.g. at least one person expressed upset/dissatisfaction at being told this info.
(2) definitely happened too, although maybe not all that many hours in the grand scheme of things.
(3) we didn't get sued, but who knows how much we increased the risk.
Thank you!
I understand the reasons for ranking relative to a given cost-effectiveness bar (or by a given cost-effectiveness metric). That provides more information than constraining the ranking to a numerical list so I appreciate that.
Btw, if you had 5-10 mins spare I think it’d be really helpful to add explanation notes to the cells in the top row of the spreadsheet. E.g. I don’t know what “MEV” stands for, or what the “cost-effectiveness” or “cause no.” columns are referring to. (Currently these things mean that I probably won’t share the spreadsheet with people because I’d need to do a lot of explaining or caveating to them, whereas I’d be more likely to share it if it was more self-explanatory.)
Thanks! When you say “median in quality” what’s the dataset/category that you’re referring to? Is it e.g. the 3 ranked lists I referred to, or something like “anyone who gives this a go privately”?
Very helpful comment, thank you for taking the time to write out this reply and sharing useful reflections and resources!
First, I think precise ranking of "cause areas" is nearly impossible, as it's hard to meaningfully calculate the "cost-effectiveness" of a cause; you can only accurately calculate the cost-effectiveness of an intervention which specifically targets that cause. So if you did want a meaningful rank, you at least need to have an intervention which has probably already been tried and researched to some degree at least.
There’s a lot going on here. I suspect I’m more optimistic than you that sharing uncertain but specific rankings is helpful for clarifying views and making progress? I agree in principle that what we want to do is evaluate specific actions (“interventions”), but I still think you can rank expected cost-effectiveness at a slightly more zoomed-out level, as long as you are comparing across roughly similar levels of abstraction. (Implicitly, you’re evaluating the average intervention in that category, rather than a single intervention.) Given these things, I don’t think I endorse the view that “you at least need to have an intervention which has probably already been tried and researched to some degree at least.”
Secondly, I think having public specific rankings has the potential to be both meaningless and reputationally dangerous.
I agree with the reputational risks and the potential for people to misunderstand your claim or think that it’s more confident than it is, etc. I somewhat suspect that this will be mitigated by there just being more such rankings though, as well as having clear disclaimers. E.g. at the moment, people might look at 80k and Open Phil rankings and conclude that there must be strong evidence behind the ratings. But if they see that there are 5 different ranked lists with only some amount of overlap, it’s implicitly pretty clear that there’s a lot of subjectivity and difficult decision-making going into this. (I don’t agree with it being “meaningless” or “dishonest”—I think that relates to the points above.)
Also, I personally think that GiveWell might do the work which best achieves the substance of what you are looking for within global health and wellbeing. And as you mentioned, the Copenhagen Consensus does a pretty good job of outlining what they think might be the 12 best interventions (Best Things First), with much reasoning and calculation behind each one.
Thanks a lot for these pointers! I will look into them more carefully. This is exactly the sort of thing I was hoping to receive in response to this quick take, so thanks a lot for your help. Best Things First sounds great and I’ve added it to my Audible wishlist. Is this what you have in mind for GiveWell? (Context: I’m not very familiar with global health.)
I’d be interested to hear what you think might be the upsides of “ranking” specifically vs clustering our best estimates at effective cause areas/interventions.
Oh, this might have just been me using unintentionally specific language. I would have included "tiered" lists as part of "ranked". Indeed, the Open Phil list is tiered rather than numerically ranked. Thank you for highlighting this though; I've edited the original post to add the word "tiered". (Is that what you meant by "clustering our best estimates at effective cause areas/interventions"? Lmk if you meant something else.)
Thanks again!
Given that effective altruism is “a project that aims to find the best ways to help others, and put them into practice”[1] it seems surprisingly rare to me that people actually do the hard work of:
(Systematically) exploring cause areas
Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
Sharing their list and reasons publicly.[2]
The lists I can think of that do this best are 80,000 Hours', Open Philanthropy's, and CEARCH's.
Related things I appreciate, but aren’t quite what I’m envisioning:
Tools and models like those by Rethink Priorities and Mercy For Animals, though they’re less focused on explanation of specific prioritisation decisions.
Longlists of causes by Nuno Sempere and CEARCH, though these don't provide ratings, rankings, or reasoning.
Various posts pitching a single cause area and giving reasons to consider it a top priority without integrating it into an individual or organisation’s broader prioritisation process.
There are also some lists of cause area priorities from outside effective altruism / the importance, neglectedness, tractability framework, although these often lack any explicit methodology, e.g. the UN, World Economic Forum, or the Copenhagen Consensus.
If you know of other public writeups and explanations of ranked lists, please share them in the comments![3]
- ^
Of course, this is only one definition. But my impression is that many definitions share some focus on cause prioritisation, or first working out what doing the most good actually means.
- ^
I’m a hypocrite of course, because my own thoughts on cause prioritisation are scattered across various docs, spreadsheets, long-forgotten corners of my brain… and not at all systematic or thorough. I think I roughly:
- Came at effective altruism with a hypothesis of a top cause area based on arbitrary and contingent factors from my youth/adolescence (ending factory farming),
- Had that hypothesis worn down by various information and arguments I encountered, and changed my views on the top causes,
- Didn’t ever go back and do a systematic cause prioritisation exercise from first principles (e.g. evaluating cause candidates from a long-list that includes ‘not-core-EA™-cause-areas’ or based on criteria other than ITN).
I suspect this is pretty common. I also worry people are deferring too much on what is perhaps the most fundamental question of the EA project.
- ^
Rough and informal explanations welcome. I’d especially welcome any suggestions that come from a different methodology or set of worldviews & assumptions to 80k and Open Phil. I ask partly because I’d like to be able to share multiple different perspectives when I introduce people to cause prioritisation to avoid creating pressure to defer to a single list.
Oh, my suggestion wasn't necessarily that they're alternatives to receiving any donations; they could be supplements. They could be things you experiment with that could help to make the channel more sustainable and secure.
Sad news for https://pivotalcontest.org/
(I’m shocked that EA now has two “Blue Dot”s and two “Pivotal”s—neither of which has the words “effective”, “institute”, or “initiative” anywhere to be seen.)
It seems like video release frequency is a significant bottleneck for you?
I'm not sure what the main time costs are, but some guesses of things that might help:
freelancers, as you say
going for less thoroughly edited videos
doing some crowdsourcing or having volunteers/collaborators help write scripts
using LLMs more in the writing
doing interviews, article readouts, or other formats that enable you to produce long-form content fairly quickly (perhaps mixed in with the existing formats)
just setting yourself aggressive targets and working it out as you go
(FWIW I feel slightly surprised they take so long to create, but I’ve never tried creating videos as high quality and engaging as yours need to be.)
Maybe there are also other routes to monetisation, e.g. Patreon, ads/sponsorships for videos (maybe from EA orgs), or pitching orgs on videos you could do for them on your channel that you otherwise wouldn't do.
Thanks a lot for this! I may reply in more detail later but I wanted to send a quick interim note; this is exactly the sort of useful feedback and info I was hoping to elicit with this post!
Reasons for optimism about measuring malevolence to tackle x- and s-risks
I don’t disagree with any specific point in this but somewhat disagree with the overall thrust of the recommendation. I suspect most people could learn more (and more quickly) by trying out more specialised roles, especially in high-quality, established organisations with better mentorship and support networks.
(I’ve never been a uni group organiser so not sure what the mentorship and support networks are actually like; I’m mostly just guessing and extrapolating from my own experience having been a generalist researcher then running a talent search org covering multiple cause areas.)
I don’t feel like I’ve learnt very much that’s very useful over the past year or two. Probably similar amounts to when I was a teacher in a secondary school, and far less than when I was a researcher.
I’m a big fan of these intervention reports. They’re not directly relevant to anything I’m working on right now so I’m only skimming them but they seem high quality to me. I especially appreciate how you both draw on relevant social science external to the movement, and more anecdotal evidence and reasoning specific to animal advocacy.
When you summarise the studies, I’d find it more helpful if you summarised the key evidence rather than their all-things-considered views.
E.g. in the cost-effectiveness section you mention that costs are low, seeming to assume that the effects would be high enough to justify them. I assume this confidence depends on your reading of the external studies. But from what I see here, without clicking on links, my takeaway is currently something like: “oh so some social scientists think they can work”, which doesn’t fill me with much confidence given that I don’t know what their methods were, how clear the findings were, etc.
Thanks! IIRC, we focused on it substantially because a lot of the sign ups for our programmes (e.g. online course) were coming from LinkedIn even when we hadn’t put much effort into it. The number of sign ups and the proportion attributed to LinkedIn grew as we put more effort into it. This was mostly the work of our wonderful Marketing Manager, Ana. I don’t have access to recent data or information about how it’s gone to make much of a call on whether it was worth it, relative to other possible uses of our/Ana’s time.
Not a criticism of your post or any specific commenter, but I think it’s a shame (for epistemics related reasons) when discussions end up more about “how EA is X” as opposed to “how true is X? How useful is X, and for what?”.
Side comment / nitpick: Animal Advocacy Careers has 13k LinkedIn followers (we prioritised it relatively highly when I was working there) https://www.linkedin.com/company/animal-advocacy-careers/
I was funded with long delays. I wouldn't have described the communication in my case as "straightforwardly unprofessional".
It was a fairly stressful experience, but seemed consistent with “overworked people dealing with a tough legal situation”, both for EVF in general and my specific grant.
I did suggest on their feedback form that misleading language about timeframes on the application form be removed. It looks like they’ve done that now, although I have no idea when the change was made. (In my case this was essentially the only issue; the turnaround wasn’t necessarily super slow in itself—a few months doesn’t seem unreasonable—it’s just that it was much slower than the form suggested it should be.)
I do not know of anything like this.
I agree that “Luke Muehlhauser’s work on early-movement growth and field-building comes closest.” Animal Ethics’ case studies are also helpful for academic fields https://www.animal-ethics.org/establishing-new-field-natural-sciences/
My impression of academic social movement studies is that a decent chunk is interested in how movements mobilise their resources, recruit, etc., but often more from a theoretical perspective (e.g. why people do this, given rational choice theory) rather than a statistical/empirical one. I don't have comprehensive knowledge by any means though, so I could be wrong.
(I generally think that if you have specific questions in mind like this, you have to either draw qualitative, indirect insights from case studies and adjacent materials, or design a systematic/comparative methodology and do the research!)
From a quick skim, the fellowship seems promising!
(Basing this mostly just off (1) solid application numbers given a launch late last year and (2) positive testimonials.)
Less anecdotal but only indirectly relevant and also hard to distinguish causation from correlation:
Ctrl+f for “Individuals who participate in consumer action are more likely to participate in other forms of activism” here
https://www.sentienceinstitute.org/fair-trade#consumer-action-and-individual-behavioral-change
Unfortunately this was quite a while ago at the last org I worked at; I don’t have access to the relevant spreadsheets, email chains etc anymore and my memory is not the best, so I don’t expect to be able to add much beyond what I wrote in the comment above.