My reservation is that the research will end up being somewhat biased towards animal welfare, considering that this has been a major research focus and passion for most of these researchers for a long time.
Thanks Nick!

This seems overstated to me.
WIT is a cross-cause team in a cross-cause organization. The Animal Moral Weight project is one project we’ve worked on, but our other subsequent projects (the Cross-Cause Model, the Portfolio Tool, the Moral Parliament Tool, the Digital Consciousness Model, and our OpenAI project on Risk Alignment in Agentic AI) are not specifically animal-related. We’ve also elsewhere proposed work looking at human moral weight and movement building.
You previously suggested that the team who worked on the Moral Weight project was skewed towards people who would be friendly to animals (though the majority of the staff on the team were animal scientists of one kind or another). But all the researchers on the current WIT team (aside from Bob himself) were hired after the completion of the Moral Weight project. In addition, I personally oversee the team and have worked on a variety of cause areas.
Also, regarding interpreting our resource allocation projects: the key animal-related inputs to these are the moral weight scores, and our tools purposefully give users the option to adjust these in line with their own views.
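As a rough illustration of that point, here is a minimal, hypothetical sketch (not our actual tools; the function and all numbers are invented for the example) of how a user-supplied moral weight enters a cost-effectiveness comparison, and how adjusting it changes which intervention comes out ahead:

```python
# Hypothetical illustration only (not the Cross-Cause Model or Portfolio Tool).
# All interventions and numbers below are made up for the example.

def welfare_per_dollar(individuals_per_dollar: float,
                       welfare_gain_per_individual: float,
                       moral_weight: float) -> float:
    """Human-equivalent welfare gained per dollar, with humans fixed at
    a moral weight of 1.0 by convention."""
    return individuals_per_dollar * welfare_gain_per_individual * moral_weight

# Made-up human-focused intervention: few individuals helped, large gain each.
human_value = welfare_per_dollar(0.01, 10.0, moral_weight=1.0)

# The same made-up animal-focused intervention, evaluated under different
# user-supplied moral weights for chickens.
for chicken_weight in (0.001, 0.05, 0.3):
    animal_value = welfare_per_dollar(5.0, 0.5, moral_weight=chicken_weight)
    better = "animal" if animal_value > human_value else "human"
    print(f"chicken weight {chicken_weight}: animal={animal_value:.3f}, "
          f"human={human_value:.3f} -> {better} intervention ranks higher")
```

The point is just that the animal-related conclusion is driven by that user-adjustable input rather than baked into the tool.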
I might well have overstated it. My argument here, though, is based on the previous work of individual team members, even before they joined RP, not just the nature of the team’s previous work as part of RP. All 5 of the team members worked publicly (googlably), to a greater or lesser extent, on animal welfare issues before joining RP, which does seem significant to me when the group is undertaking such an important project, one which involves assessing impact, prioritisation and funding questions across a variety of causes.
It might be a “cross-cause team”, but there does seem to be a bent here.
1. Animal welfare has been at the center of Derek and Bob’s work for some time.
2. Arvon founded the “Animal Welfare Library” in 2022: https://www.animalwelfarelibrary.org/about
3. You and Hayley worked (perhaps to a far lesser extent) on animal welfare before joining Rethink too. Hayley’s faculty profile says, “With her interdisciplinary approach and diverse areas of expertise, she helps us understand both animal minds and our own.”
And yes, I agree that you, leading the team, seem to have the least work history in this direction.
This is just to explain my reasoning above. I don’t think there’s necessarily intent here, and I’m sure the team is fantastic, as evidenced by all your high-quality work; only that the team does seem quite animal welfare-y. I’ve realised this might seem a bit stalky, and this was just from a super quick google. This may well be misleading, and yes, I may well be overstating.
4 out of 5 of the team members worked publicly (googlably) to a greater or lesser extent on animal welfare issues even before joining RP
I think this risks being misleading, because the team have also worked on many non-animal related topics. And it’s not surprising that they have, because AW is one of the key cause areas of EA, just as it’s not surprising they’ve worked on other core EA areas. So pointing out that the team have worked on animal-related topics seems like cherry-picking, when you could equally well point to work in other areas as evidence of bias in those directions.
For example, Derek has worked on animal topics, but also digital consciousness, with philosophy of mind being a unifying theme.
I can give a more detailed response regarding my own work specifically, since I track all my projects directly. In the last 3 years, 112⁄124 (90.3%)[1] of the projects I’ve worked on personally have been EA Meta / Longtermist related, with <10% animal related. But I think it would be a mistake to conclude from this that I’m longtermist-biased, even though that constitutes a larger proportion of my work.
Edit: I realise an alternative way to cash out your concern might be not in terms of bias towards animals relative to other cause areas, but rather that we should have people on both sides of all the key cause areas or key debates (e.g. we should have people at both extremes of being pro- and anti-animal, pro- and anti-AI, and pro- and anti-GHD, and also presumably on other key questions like suffering focus, etc.).
If so, then I agree this would be desirable as an ideal, but (as you suggest) impractical (and perhaps undesirable) to achieve in a small team.

[1] This is within RP projects; if we included non-RP academic projects, the proportion of animal projects would be even lower.
It’s not that they’ve just worked on animal welfare. It’s that they have been animal rights advocates (which is great). Derek was the web developer for The Humane League for 5 years… which is fantastic and I love it, but towards my point...
Thanks for the clarification. I was indeed trying to say option (a): that there’s a “bias towards animals relative to other cause areas”. Yes, I agree it would be ideal to have people on different sides of debates in these kinds of teams, but that’s often impractical and not my point here.
I still think most independent people who came in with a more balanced (less “cherry-picking”) approach than mine to look at the team’s work histories would be likely to find that history to be at least moderately bent towards animals.
I also agree that you, as the leader, aren’t in that category.
I was indeed trying to say option (a): that there’s a “bias towards animals relative to other cause areas”. Yes, I agree it would be ideal to have people on different sides of debates in these kinds of teams, but that’s often impractical and not my point here.
Thanks for clarifying!
Re. being biased in favour of animal welfare relative to other causes: I feel at least moderately confident that this is not the case. As the person overseeing the team, I would be very concerned if I thought it were. But it doesn’t match my experience of the team, who seem equally happy to work on other cause areas (which is why we spent significant time proposing work across cause areas) and are primarily interested in addressing fundamental questions about how we can best allocate resources.[1]
I am much more sympathetic to the second concern I outlined (which you say is not your concern): we might not be biased in favour of one cause area against another, but we still might lack people on both extremes of all key debates. Both of us seem to agree this is probably inevitable (one reason: EA is heavily skewed towards people who endorse certain positions, as we have argued here, which is a reason to be sceptical of our conclusions and probe the implications of different assumptions).[2]
Some broader points:
I think that it’s more productive to focus on evaluating our substantive arguments (to see if they are correct or incorrect) than trying to identify markers of potential latent bias.
Our resource allocation work is deliberately framed in terms of open frameworks which allow people to explore the implications of their own assumptions.
And if the members of the team wanted to work solely on animal causes (in a different position), I think they’d all be well-placed to do so.

That said, I don’t think we do too badly here, even in the context of AW specifically (e.g. Bob Fischer has previously published on hierarchicalism, the view that humans matter more than other animals).