Co-founder, executive director and project lead at Oxford Biosecurity Group
Creator of Pandemic Interventions Course—a biosecurity interventions intro syllabus
All opinions I post on the forum are my own. Tell me why I’m wrong.
Thanks Julia, and great list that you’ve put together (I wasn’t previously aware of it). This post also links some other biosecurity reading lists/syllabuses in the ‘Why create a new biosecurity syllabus’ section, in case you think any of those should be added as well.
Thanks Max!
Thanks for compiling this! I skimmed it, and it was a good way of getting an overview of what is happening in parts of EA that I know less about. I found having it separated both by cause and by month useful, so readers can choose whichever overview they prefer, although some non-AI causes could have had their own section rather than being grouped together. (I slowly scrolled through month by month and clicked on some of the more interesting-looking articles.)
Exciting! A few thoughts/questions:
I’m not sure, quantitatively, how much of a difference it makes having a giving pledge focused just on healthcare professionals rather than people in general, but it’s possible that the focus/community aspect makes some difference. The concept was interesting enough to get me (former medical device engineer, now in health economics) to click on the post, then the website, so there’s that.
When did this actually start, and how many people have taken the pledge? On the Pledge page there are 15 people listed, ranging from October 2022 to December 2023. That may not be everyone who has signed up, and/or the pledge might not have been fully open yet, but my first thought was that it looks like not many people have signed up and this isn’t that big. If the launch is happening now (i.e. with this post), that makes more sense.
I see a comment below that 1% seems a bit low. Healthcare salaries vary a lot between countries (e.g. a doctor in the US typically earns much more than a doctor in the UK), and tax rates differ between countries too. If healthcare workers in the countries you are targeting already typically donate more than 1%, I think that could take away a lot of the pledge’s impact (people don’t donate more, and might even feel justified in donating less). On the other hand, I can see the argument that 10% is high and potentially off-putting, so 5% might be a good middle ground. How did you choose which percentage to set as the pledge amount?
Hey, thanks also for the detailed response.
I don’t think that part is our disagreement. Maybe the way I would phrase the question is whether there should be an additional multiplier put on extinction on top of the expected future loss of wellbeing. If I were to model it, the answer would be ‘no’, to avoid double counting (i.e. the effect of extinction is the effect of future loss of wellbeing). The disagreement is why this is not assumed by default to apply to animals as well.
“If you knew for sure that the animals had net negative lives, would you still think their extinction was bad?” I’m not sure how likely such a situation is to come up, as I’m not sure how I could know this for sure. It seems to require not just being sure that every member of that species that exists now has a net negative life, but that every member that might exist in the future will too. But to answer the question philosophically rather than practically: I would not say that the extinction of a species whose members are guaranteed to suffer is bad.
“But we were discussing whether we should treat animal extinctions with the weight of an X-risk (i.e. a human extinction). For that, we need a little more than an assumption that the animal’s lives are net positive.” Definitely agreed that, for prioritising between things, more than just the assumption of net positive lives is required. But research would be needed to establish that, and as far as I can tell very little has been done (and there are ~8.7 million animal species).
I see, thanks—I can now find the section you were referring to. I don’t think I agree that the full argument follows as made, but I haven’t worked through the whole thing and I don’t want this to become a thread discussing this one particular paper!
Agreed there are nuances re animals. However, outside philosophy I’m not sure how many people you’d have arguing against ‘human extinction is bad even if humans are replaced by another species’!
My bad, I meant primarily the second paragraph (referring to how animal extinction was valued, and the lack of discussion around this). I had it more general, then decided to specify the paragraph… and picked the wrong one! Agreed with your response here; I will edit the original.
I think there should be more discussion of animal x-risk and/or animal longtermism within EA. I definitely care more about human extinction than animal extinction and think human extinction risk should be a higher priority, but that does not mean animal extinction risk should not be considered. For example, considering both human and animal x-risk might change how much climate change is prioritised as an existential risk, and which interventions are worth focusing on for it (more people outside of EA focus on climate change than on other x-risks, but that does not mean the best interventions have been found, much as in the global poverty space).
I don’t have strong opinions on ‘this should definitely be prioritised’, but I think at least a few people should be researching this (to see if it is important/tractable/neglected/etc), and it should be discussed more than it currently is.
Thank you for this answer. I am not sure I agree with it, for the reasons outlined below (in case it’s useful information for you, I upvoted and disagree-voted this):
Paragraph 1: A somewhat minor point, but I think you may be drawing a distinction without a difference: saying extinction is bad because of its effects (no future human happiness, flourishing, etc.) is already putting a disvalue on extinction, because extinction inherently causes those effects.
In an animal context I would put this as: if an animal species has net positive lives, then its extinction is inherently bad, and where there is uncertainty, an animal species should be assumed to have net positive lives. I do not have strong thoughts on how different animal species trade off against each other (I would have to do further research), but my prior is that this is the direction of the sign, even if only by a small amount, in part because assuming the opposite by default could be used to justify negative things. I agree factory-farmed animals likely have net negative lives, but I cannot see, e.g., the entire species of chickens (i.e. every chicken ever) having net negative lives. Beyond that there is also biodiversity loss and its downstream effects (e.g. biodiversity loss being linked with increased risk of natural pandemics), and the fact that humans’ track record with interfering with nature is not great, but I will leave the argument as equivalent to how you phrased it initially.
Paragraph 2: I summarised how I think of animal extinction in the last paragraph. The quote “if we think extinction in itself is bad, then we should prefer a planet filled with only the first amoebas to one like ours”, given the arguments surrounding it in your answer, was one that surprised me. Regardless of whether it is true (‘amoeba’ did not come up when I did ctrl-F of the document linked, so I would have to read it in more detail to find the context), it does not seem very actionable. Given the current world we live in (i.e. a world filled with many species that aren’t amoebas), saying that people who care about minimising extinction should want to return to that state (given the amount of extinction required to get there) would need further justification. So it seems like a less relevant comment.
Paragraph 3: As I think the happiness/future value argument can still apply to animals, I still think animal extinction should be seen as bad (although less bad than human extinction). I’ve already mentioned that I think the baseline assumption should be that animal species have net positive lives unless there is evidence that they don’t. Without data, I would not update on, or make decisions based on, ‘one animal species going extinct may increase the number of animals’. I can see that being possible, particularly in the short term, but I could also see it having negative biodiversity and downstream effects.
Overall: My main stance is that animal x-risk should be researched, or included in the conversation more than it currently is, even if the outcome of that research ends up being ‘this should still not be prioritised’. I can see this potentially changing some calculations, e.g. how much climate change should be prioritised as a cause area, and which particular climate change interventions to pursue (there are more people working on this than on other x-risks, but I would expect that, as with global poverty, there will be neglected areas). After all, climate change is a big extinction risk for animals.
A note: I think ‘we’ in the first (EDIT: second) paragraph implies more consensus than there actually is, given that I have not really heard this discussed much at all.
Thank you for writing this. I have not read the original paper, but I think the points here are very plausible and aren’t discussed enough.
Clicking through the original link, it looks like the paper was initially written in 2021. Is there a particular reason for choosing to summarise it now?
That’s good to hear re being in favour of efforts to make EA better (edited for clarity). Thanks for your engagement on this.
Agreed on the need for awareness around power dynamics, with the nuance that fixing this should not have to fall on the people impacted by it. It was good to see that post when it came out, as it points out things people may not have been aware of.
I believe you are conflating several things here. But first, a little tip on phrasing responses: putting the word ‘just’ in front of a critical response makes it sound more dismissive than you might have intended.
If you think the movement has serious flaws that make it not a good means for doing the most good, then you should not be trying to work for an EA org in the first place, and the access to those opportunities is irrelevant.
Agreed with that as stated, but I think it is a straw man. Things can be both bad in some ways and better than some other options, but that doesn’t mean any flaws should be dismissed. This could even go to the extreme of (hypothetically) ‘I know I can have the highest impact if I work here, so I will bear the inappropriate attention of my colleagues’ or ‘I will leave and not have the highest impact I can’.
People should not be using the movement for career advancement independent of the goal of doing the most good they can do with their careers (and in most cases, can’t do that even if they intend to, because EA org jobs that are high-status within the movement are not similarly high-status outside of it) [..] I find the EA movement a useful source of ideas and a useful place to find potential collaborators for some of my projects, but I have no interest in working for an EA org because that’s not where I expect I’d have the biggest impact.
Some people may think that working at an EA org is the highest-impact thing they could be doing (even if just for the short term), and career paths are very dependent on the individual. EA essentially brands itself as the way to do the most good, so it should not be surprising if people hold this view. When I was writing my first comment, it was with the broad assumption of ‘connections/opportunities within EA = connections/opportunities that help you do the most good’ (given the EA Forum audience), not as a judgement that ‘EA is the only way of having a high impact’ (which is a different conversation).
I think the movement as a whole would be more successful, and a lot of younger EAs would be a lot happier, if they approached the movement with this level of detachment.
I also have thoughts on this one, but again it is a different conversation. EA is not the only way to have a very high impact, but that should not be used as an excuse to avoid making improvements.
Thanks for your response!
I don’t think changing “some EAs” to “we” necessarily changes my point of ‘people concerned should not have to move to a different community which may have fewer resources/opportunities’, independent of who actually creates that different community.
Note that my bigger point overall was why the second bullet point set off alarm bells, rather than specific points on the others (mostly included as a reference, and less thought put into the wording). That said:
there are probably people considering joining EA who would find EA a much easier place to get funding than their other best opportunities for trying to do the kind of good they think most needs doing.
I agree with this. I added “although may reduce future opportunities if they would benefit a lot from getting more involved in EA” after “i.e. someone considering joining EA does not have as much if anything already invested in it” a couple of minutes after originally posting my comment, to reflect a very similar sentiment (though likely after you had already seen it and started writing your response).
However, there is very much a difference between losing something you have and not gaining something you could potentially have. When talking about personal cost, one is significantly higher than the other (agreed that both are bad), as is the toll of potentially broken trust and lost close relationships. Ignoring social factors, it could also have an impact cost, e.g. if people have built up career/social capital that is very useful within EA but not ranked as highly outside of EA, or is not linked with the relevant people outside of EA, rather than having built up non-EA networks.
That bullet point is also written as ‘someone considering joining’ rather than ‘we should’. ‘Someone considering joining’ may or may not join for a variety of reasons; their decision is a potential consequence for the community but not an action point. It is the action points, and how action is approached, that seem more relevant here.
I am pretty certain it wasn’t intended that way but:
Some EAs should start an unaffiliated group (“Impact Maximizers”) that tries to avoid these problems. (Somewhat like the “Atheism Plus” split.)
Set off minor alarm bells when reading it, more so than the other bullet points, so I tried to put some thought into why that is (and why I didn’t get the same alarm bells for the other two points).
I think it’s because it (most likely inadvertently) implies: “If people already in the movement do not like these power dynamics (around making women feel uncomfortable, up to sexual harassment, etc.), then they should leave and start their own movement.” (I am aware this asks for some people, not necessarily women or the specific person concerned, to start the group, but it still does not address the potentially lower resources, career, and networking opportunities.) This can almost be used as an excuse not to fix things: if people don’t like it, they can leave. But leaving means potentially sacrificing close relationships and career and funding opportunities, at least to some degree. Taken together, this could be read as:
If you are a woman uncomfortable about the current norms on dealing with sexual harassment, consider leaving/starting your own movement, taking potential career and funding hits to do so.
I don’t think you intended this at all, but please take this as my attempt to put into words why it set off minor alarm bells on first reading, and I would be interested to hear the thoughts of others. (It is also possible that that bullet point was in response to a previous comment, which I may not have read in enough depth.)
The first and third bullet point do not have this same issue, as the first one does not explicitly reduce existing opportunities for people (i.e. someone considering joining EA does not have as much if anything already invested in it, although may reduce future opportunities if they would benefit a lot from getting more involved in EA), and the third bullet point speaks about making improvements.
If organisations were privately informed of their tier, then the additional work of asking (even in the email) whether they would want to opt into tier sharing would be low/negligible.
Of course people may dispute their tier, or only be happy to share if they are in a high tier, but this at least somewhat weakens the argument that asking people for consent for the public list would be a lot of additional work.
They’d have the information on upvotes and downvotes already (it is needed to calculate the overall karma). I don’t know how the Forum is coded, but I expect they could do this without too much difficulty if they wanted to. So if you hover, it would say something like: “This comment has x overall karma (y upvotes and z downvotes).” The user interface/experience would not change much (unless I have misinterpreted what you meant there).
It would give extra information. Weighting some users higher due to their contribution to the Forum may make sense, on the argument that these are the people who have contributed more, but even so it would be good to also see how many people overall think something is valuable, or agree or disagree.
Current information:
How many votes
How valuable these voters found it adjusted by their karma/overall Forum contribution
New potential information:
How many votes
How valuable these voters found it adjusted by their karma/overall Forum contribution
How many overall voters found this valuable
e.g. 2 people strongly agreeing and 3 people weakly disagreeing may update me differently to 5 people weakly agreeing. One is unanimous; in the other, opinion is more divided, and it would be good for me to know that, as it might be useful to ask why (when drawing conclusions based on what other people have written, or when getting feedback on my own writing).
I would like to see this implemented, as the cost seems small and there is a fair bit of extra information value.
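As a rough illustration of what I have in mind (a hypothetical sketch, not the Forum’s actual code; the `Vote` type and its `weight` field are made up for this example), the proposed hover text could be composed from per-vote data the Forum already stores:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    weight: int  # signed karma contribution of one vote, e.g. +2 or -1 (hypothetical field)

def hover_text(votes: list[Vote]) -> str:
    """Compose the proposed hover string from individual votes."""
    karma = sum(v.weight for v in votes)
    upvotes = sum(1 for v in votes if v.weight > 0)
    downvotes = sum(1 for v in votes if v.weight < 0)
    return f"This comment has {karma} overall karma ({upvotes} upvotes and {downvotes} downvotes)."

# Example: 2 upvotes and 3 downvotes that net out to 5 karma
print(hover_text([Vote(4), Vote(6), Vote(-1), Vote(-1), Vote(-3)]))
# -> This comment has 5 overall karma (2 upvotes and 3 downvotes).
```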
This does not give a complete picture though.
Say something has 5 karma and 5 votes. The first obvious thought: 5 users upvoted the post, each with a vote worth +1. But that’s not the only option:
1 user upvotes (value +9), 4 users downvote (each value −1)
2 users upvote (values +4 and +6), 3 users downvote (values −1, −1 and −3)
3 users upvote (values +1 and +2 and +10), 2 users downvote (values −1 and −7)
Or a whole range of other combinations that add up to 5, given that different users’ votes have different values (and in some cases strong up/downvoting). Hovering just shows the overall karma and the overall number of voters, unless I am missing a feature that shows this in more detail?
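To make the ambiguity concrete, here is a minimal brute-force sketch. It assumes, purely for illustration, that each vote carries an integer weight between 1 and 10 (this is not the Forum’s actual weighting scheme) and counts the distinct vote breakdowns consistent with 5 karma from 5 votes:

```python
from itertools import product

TARGET_KARMA = 5
NUM_VOTES = 5
MAX_WEIGHT = 10  # illustrative cap on a single vote's strength, not the Forum's real rule

# Each vote is +w (an upvote) or -w (a downvote) for some weight w >= 1.
# Collect the distinct multisets of signed weights that sum to the target.
breakdowns = set()
for combo in product(range(-MAX_WEIGHT, MAX_WEIGHT + 1), repeat=NUM_VOTES):
    if 0 in combo:
        continue  # every vote carries a nonzero weight
    if sum(combo) == TARGET_KARMA:
        breakdowns.add(tuple(sorted(combo)))  # sort so vote order doesn't matter

print(f"{len(breakdowns)} distinct breakdowns give {TARGET_KARMA} karma from {NUM_VOTES} votes")
# All three examples above appear: (-1, -1, -1, -1, 9), (-3, -1, -1, 4, 6)
# and (-7, -1, 1, 2, 10), among many others.
```

Even under this restricted assumption there are many consistent breakdowns, which is exactly the information the current hover display collapses.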
The bar should not be at ‘difficult financial situation’, and this is also something there are often incentives against mentioning explicitly when applying for funding. Getting paid employment while studying (even on full-time degrees) is normal.
My 5 minute Google search to put some numbers on this:
Proportion of students who are employed while studying:
UK: a survey of 10,000 students showed that 56% of full-time UK undergraduates had paid employment (14.5 hours/week average) - June 2024 Guardian article: https://www.theguardian.com/education/article/2024/jun/13/more-than-half-of-uk-students-working-long-hours-in-paid-jobs
USA: 43% of full-time students work while enrolled in college - January 2023 Fortune article: https://fortune.com/2023/01/11/college-students-with-jobs-20-percent-less-likely-to-graduate-than-privileged-peers-study-side-hustle/
Why are students taking on paid work?
UK: “Three-quarters of those in work said they did so to meet their living costs, while 23% also said they worked to give financial support for friends or family.” From the Guardian article linked above.
USA: I cannot quickly find a recent statistic, but given the system (e.g. https://www.collegeave.com/articles/how-to-pay-for-college/) I expect working rather than taking out (as much in) loans is a big reason.
On the other hand, spending time on committees is also very normal as an undergraduate, and those are not paid. However, in comparison, the time people spend on committees is much more limited (say ~2-5 hrs/week), there is rarely a single organiser, and I’ve seen a lot of people drop off committees—some because they are less keen, but some for time-commitment reasons (which I expect will sometimes/often mean doing paid work instead).