Jamie is Managing Director at Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history.
Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
Yep, I realise that.
Also feel like a big limitation is that this data comes from asking current orgs. Asking current orgs how many “connectors” they need feels a bit like asking a company how many CEOs they want.
Nonetheless, still an update! E.g. this bit was slightly surprising to me:
Funders of independent researchers we’ve interviewed think that there are plenty of talented applicants, but would prefer more research proposals focused on relatively few existing promising research directions (e.g., Open Phil RFPs, MATS mentors’ agendas), rather than a profusion of speculative new agendas. This leads us to believe that they would also prefer that independent researchers be approaching their work from an Iterator mindset, locating plausible contributions they can make within established paradigms, rather than from a Connector mindset, which would privilege time spent developing novel approaches.
I found this helpful. It updated me towards the importance of finding/supporting iterators, relative to connectors. Thank you!
I agree with the basic point you’re making (I think) and I suspect either:
(1) we disagree about how much you should negatively update, i.e. how bad this data makes bioethicists look
Or
(2) we don’t actually disagree and this is just due to language being messy (or me misinterpreting you)
That all seems fair / I agree.
Thanks a lot for collecting this survey! I think it’s valuable to solicit ‘external’ (to the EA community) views on important questions that affect our decision-making, especially from plausible expert groups.
I’m quite shocked at the vehemence and dismissiveness of many of the comments on this post responding to these results. Here are some quotes from other commenters:
1.
“Preventing a death is equally important irrespective of age” strikes me as a genuinely insane position… No one would be indifferent between extending someone’s life by an hour, even a very valuable hour, and extending another person’s ordinary life by 30 years. But it’s just really strange to endorse that, but not apply the same logic to saving a 20-year old person over a 100-year old person.
2.
Yeah, it’s just transparently stupid stuff like “Each life counts for one and that is why more count for more. For this reason we should give priority to saving as many lives as we can, not as many life-years.” [Caveat, this quote is slightly out of context… it’s actually responding to the comment above.]
3.
Results give some support to the notion that bioethicists are more like PR professionals, geared to reproducing common sentiments rather than a group that is OK with sometimes taking difficult stances. Questions 6 & 7 especially seem like vague left-wing truisms… I still can’t get over 40% thinking being blind would be not disadvantaging if society was “justly designed”.
4.
It’s really pretty shocking to me how badly this makes bioethicists look.
Here are some possible explanations for the supposedly crazy results:
There are reasons, logic, or evidence that are considered or known amongst (some? many?) bioethicists that you yourself are not familiar with.
There are reasons, logic, or evidence that you are familiar with that (some? many?) bioethicists are not.
There are multiple conflicting principles or heuristics that apply in a different case, and the respondents just weigh those differently to you. E.g. this strikes me as likely what’s happening with the “It is most important to prevent someone from dying at which of the following ages” question.
The respondents have different ethical systems and worldviews to you, e.g. placing more weight on virtue ethics relative to consequentialism. That doesn’t make them insane or unthoughtful; ethics is really tough and probably depends on a lot of things like your upbringing. (Otherwise people would have similar ethical views across cultures, which clearly isn’t true.)
I share the intuition that many of the results in the survey seem surprising, and very discrepant from my own views. But regardless of whether you understand the reasons, surely after seeing that a group of people holds substantially different views to your own, your all-things-considered belief should shift at least somewhat towards those views, even if your “independent impression” does not? Especially when that group has years of relevant thought or expertise; these facts make it more likely that there are valid reasons underpinning their beliefs. Where there are discrepancies, there’s a chance that they are right and you/we are wrong.
I’m worried that some of the quotes above represent something like cognitive dissonance or a boomerang effect. Or at least they seem more like “soldier mindset” than I’d expect here, although I note some exceptions where several commenters (including some of those I quoted above) ask others for input on helping to understand and steelman the bioethicists’ views.
[Edit: the following paragraph felt true at the time of writing but I regret writing it as it seems pointlessly offensive/inflammatory itself in hindsight. I apologise to the people I quoted above.] Honestly, seeing the prevalence of these kinds of reactions in the comments makes me feel less confident in the epistemic health of this community and more worried about groupthink type effects. (Maybe some of these commenters have reasons for their vehemence and dismissiveness that I’m missing?)
Very interesting results. They seem surprisingly animal-friendly/considerate to me.
“The survey title and introduction didn’t mention objectives of the survey, except that it is about animal welfare”.
To check, could people see this before deciding whether or not to take the survey? If they could, I’d expect that to skew the participation substantially towards people who already care disproportionately about animals, or think them to be closer in capacity to humans etc.
If so, did you collect any demographic information that wasn’t filled based on quotas and can be compared to wider Dutch demographic data that might give insight into this? E.g. rates of vegetarianism? Or even if you found that survey participation was filled more quickly by women than by men, I’d take that as a relevant indication, given correlations between gender and various measurements relevant to caring about animals.
(Even if these aspects are similar to the wider population, or you only shared that it was about animal welfare later on after people signed up, I’d expect social desirability bias to influence the results somewhat.)
This is the appropriate reaction! I’ve shared estimates from Saulius’ post with people before hoping for a similar one and am disappointed if it doesn’t happen. Feeling things scope-sensitively is hard though.
Oh woops! Apologies for the faff. That worked, thanks!
Hey! Is this Slack still active? I’m getting a notification saying “[!]doesn’t have an account on this workspace” when I click the join link and sign into gmail. Thanks!
Messaged!
In terms of age, I originally advertised as 16-18 but lowered the minimum to 15 because I realised that people in year 11 in the UK (GCSE exams year, deciding which A-Levels to take) might benefit as well. The majority of participants have always been in year 12 (16-17 years old).
For the rest, I have info about these sorts of things in the full doc if you’d like access. Just can’t share it all fully publicly for various reasons (and didn’t want to spend the time required to make a full public version that I’d be happy sharing).
Yep, definitely something I’d consider! I know some people have a strong preference against sharing though so it’d be helpful to still have access to at least some single rooms.
(I also find some people have single rooms with ensuites as a hard requirement for medical or religious reasons).
The flexibility makes it sound more promising and usable.
Lessons from two years of talent search pilots
For my org, I can imagine using this if it was 2x the size or more, but I can’t really think of events I’d run that would be worth the effort to organise for 15 people.
(Maybe like 30% chance I’d use it within 2 years if it had 30+ bedrooms, less than 10% chance at the actual size.)
Cool idea though!
Yeah many of those things seem right to me.
I suspect the crux might be that I don’t necessarily think it’s a bad thing if “the casual reader of the website doesn’t understand that 80k basically works on AGI”. E.g. if 80k adds value to someone as they go through the career guide, even if they don’t realise that “the organization strongly recommends AI over the rest, or that x-risk gets the lion’s share of organizational resources”, is there a problem?
I would be concerned if 80k was not adding value. E.g. I can imagine more salesy tactics that look like making a big song and dance about how much the reader needs their advice, without providing any actual guidance until they deliver the final pitch, where the reader is basically given the choice of signing up for 80k’s view/service, or looking for some alternative provider/resource that can help them. But I don’t think that that’s happening here.
I can also imagine being concerned if the service was not transparent until you were actually on the call, and then you received some sort of unsolicited cause prioritisation pitch. But again, I don’t think that’s what’s happening; as discussed, it’s pretty transparent on the advising page and cause prio page what they’re doing.
Makes sense on (1). I agree that this kind of methodology is not very externally legible and depends heavily on cause prioritisation, sub-cause prioritisation, your view on the most impactful interventions, etc. I think it’s worth tracking for internal decision-making even if external stakeholders might not agree with all the ratings and decisions. (The system I came up with for Animal Advocacy Careers’ impact evaluation suffered similar issues within animal advocacy.)
For (2), I’m not sure why you don’t think 80k do this. E.g. the page on “What are the most pressing world problems?” has the following opening paragraph:
We aim to list issues where each additional person can have the most positive impact. So we focus on problems that others neglect, which are solvable, and which are unusually big in scale, often because they could affect many future generations — such as existential risks. This makes our list different from those you might find elsewhere.
Then the actual ranking is very clear: AI 1, pandemics 2, nuclear war 3, etc.
And the advising page says quite prominently “We’re most helpful for people who… Are interested in the problems we think are most pressing, which you can read about in our problem profiles.” The FAQ on “What are you looking for in the application?” mentions that one criterion is “Are interested in working on our pressing problems”.
Of course it would be possible to make it more prominent, but it seems like they’ve put these things pretty clearly on the front.
It seems pretty reasonable to me that 80k would want to talk to people who seem promising but don’t share all the same cause prio views as them; supporting people to think through cause prio seems like a big way they can add value. So I wouldn’t expect them to try to actively deter people who sign up and seem worth advising but, despite the clear labelling on the advising page, don’t already share the same cause prio rankings as 80k. You also suggest “when people do apply/email, it’s worth making that sort of caveat as well”, and that seems in the active deterrence ballpark to me; to the effect of ‘hey are you sure you want this call?’
Hey Joel, I’m wondering if you have recommendations on (1) or on the transparency/clarity element of (2)?
(Context being that I think 80k do a good job on these things, and I expect I’m doing a less good job on the equivalents in my own talent search org. Having a sense of what an ‘even better’ version might look like could help shift my sort of internal/personal overton window of possibilities.)
What do you think are some of the main differences between your guide/advice and 80k’s?
I realise that to some extent, merely covering similar ideas with a slightly different framing and emphasis can add value because variations in these things land more or less well with different people.
But I’m wondering about more substantive differences. E.g. this page implies that you either don’t endorse longtermism or endorse it less strongly than 80k, and my impression from your content is that you do tend to highlight a broader range of opportunities, including a much more prominent emphasis on global health (and climate change?).
Are there any other differences that jump to mind? E.g. like how Holden Karnofsky’s “aptitudes” post was quite a different take to 80k’s more ‘cause prio first’ approach.
(A more provocative framing of this question: imagine that Probably Good and 80k both have an article on the same topic. Without reading either, if I do endorse longtermism, is there any reason why (or person for whom) the Probably Good article is likely to be more useful?)
Thanks!
Tentative recommendation: try to make the episodes more pointedly about useful, impact-relevant topics. You can preserve the chatty vibe and relatively low-effort prep but still cover important topics.
I just listened to most of the Dwarkesh episode and it seemed notably more useful to me! (And similarly fun/interesting?) I think just because of the topics you broached. E.g. Chana has useful takes on loads of impact-relevant topics but you were talking about quizzes and favourite beans. Whereas with Dwarkesh you were chatting about counterfactuals and lessons from history and career exploration and maximising impact through communications.
So many incredible achievements!