This sounds great and I instinctively really like it. My reservation is that the research will end up being somewhat biased towards animal welfare, given that it has been a major research focus and passion for most of these researchers for a long time.
My weak suggestion (which I know is probably not practical) would be to intentionally hire some philosophers who are sceptical of animal welfare to join the team, to provide some balance and perhaps fresh perspectives.
Thanks Nick!
My reservation is that the research will end up being somewhat biased towards animal welfare, given that it has been a major research focus and passion for most of these researchers for a long time.
This seems overstated to me.
WIT is a cross-cause team in a cross-cause organization. The Animal Moral Weight project is one project we’ve worked on, but none of our subsequent projects (the Cross-Cause Model, the Portfolio Tool, the Moral Parliament Tool, the Digital Consciousness Model, and our OpenAI project on Risk Alignment in Agentic AI) is specifically animal-related. We’ve also elsewhere proposed work looking at human moral weight and movement building.
You previously suggested that the team who worked on the Moral Weight project was skewed towards people who would be friendly to animals (though the majority of the staff on the team were animal scientists of one kind or another). But all the researchers on the current WIT team (aside from Bob himself) were hired after the completion of the Moral Weight project. In addition, I personally oversee the team and have worked on a variety of cause areas.
Also, regarding interpreting our resource allocation projects: the key animal-related inputs to these are the moral weight scores. And our tools purposefully give users the option to adjust these in line with their own views.
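To illustrate what that means in practice, here is a minimal sketch (purely hypothetical Python; the function, names, and numbers are all invented for illustration and are not the actual implementation of any of our tools):

```python
# Purely illustrative sketch, not the actual implementation of any WIT tool:
# moral weights enter as a user-adjustable input rather than a fixed assumption.

DEFAULT_MORAL_WEIGHTS = {
    "human": 1.0,
    "chicken": 0.33,  # hypothetical default value, for illustration only
}

def cost_effectiveness(welfare_gain_per_dollar, species, moral_weights=None):
    """Scale welfare gain per dollar by the (possibly user-supplied) moral weight."""
    weights = moral_weights if moral_weights is not None else DEFAULT_MORAL_WEIGHTS
    return welfare_gain_per_dollar * weights[species]

# With the default weights:
print(cost_effectiveness(5.0, "chicken"))  # 1.65

# A sceptical user supplies their own, much lower, weight and gets
# correspondingly different conclusions:
print(cost_effectiveness(5.0, "chicken", {"human": 1.0, "chicken": 0.01}))  # 0.05
```

The point is just that any animal-related conclusion the tools produce tracks whichever weights the user supplies, rather than a baked-in house view.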
I might well have overstated it. My argument here, though, is based on the previous work of individual team members, even before they joined RP, not just the nature of the team’s previous work as part of RP. All 5 of the team members worked publicly (googlably), to a greater or lesser extent, on animal welfare issues before joining RP, which does seem significant to me, given that the group is undertaking such an important project, one that involves assessing impact, prioritisation and funding questions across a variety of causes.
It might be a “cross-cause team”, but there does seem to be a bent here:
1. Animal welfare has been at the center of Derek and Bob’s work for some time.
2. Arvon founded the “Animal Welfare Library” in 2022: https://www.animalwelfarelibrary.org/about
3. You and Hayley worked (perhaps to a far lesser extent) on animal welfare before joining Rethink too. On Hayley’s faculty profile it says, “With her interdisciplinary approach and diverse areas of expertise, she helps us understand both animal minds and our own.”
And yes, I agree that you, leading the team, seem to have the least work history in this direction.
This is just to explain my reasoning above. I don’t think there’s necessarily intent here, and I’m sure the team is fantastic, as evidenced by all your high-quality work; only that the team does seem quite animal-welfare-y. I’ve realised this might seem a bit stalky, and this was just from a super quick Google. It may well be misleading, and yes, I may well be overstating.
4 out of 5 of the team members worked publicly (googlably) to a greater or lesser extent on animal welfare issues even before joining RP
I think this risks being misleading, because the team have also worked on many non-animal-related topics. And it’s not surprising that they have, because AW is one of the key cause areas of EA, just as it’s not surprising they’ve worked on other core EA areas. So pointing out that the team have worked on animal-related topics seems like cherry-picking, when you could equally well point to work in other areas as evidence of bias in those directions.
For example, Derek has worked on animal topics, but also on digital consciousness, with philosophy of mind being a unifying theme.
I can give a more detailed response regarding my own work specifically, since I track all my projects directly. In the last 3 years, 112/124 (90.3%)[1] of the projects I’ve worked on personally have been EA Meta / Longtermist related, with <10% animal-related. But I think it would be a mistake to conclude from this that I’m longtermist-biased, even though that constitutes a larger proportion of my work.
Edit: I realise an alternative way to cash out your concern might not be in terms of bias towards animals relative to other cause areas, but rather that we should have people on both sides of all the key cause areas or key debates (e.g. we should have people at both extremes: pro- and anti-animal, pro- and anti-AI, pro- and anti-GHD, and presumably also on other key questions like suffering focus, etc.).
If so, then I agree this would be desirable as an ideal, but (as you suggest) impractical (and perhaps undesirable) to achieve in a small team.
[1] This is within RP projects; if we included non-RP academic projects, the proportion of animal projects would be even lower.