4 out of 5 of the team members worked publicly (googlably) to a greater or lesser extent on animal welfare issues even before joining RP
I think this risks being misleading, because the team have also worked on many non-animal related topics. And it’s not surprising that they have, because AW is one of the key cause areas of EA, just as it’s not surprising they’ve worked on other core EA areas. So pointing out that the team have worked on animal-related topics seems like cherry-picking, when you could equally well point to work in other areas as evidence of bias in those directions.
For example, Derek has worked on animal topics, but also digital consciousness, with philosophy of mind being a unifying theme.
I can give a more detailed response regarding my own work specifically, since I track all my projects directly. In the last 3 years, 112/124 (90.3%)[1] of the projects I’ve worked on personally have been EA Meta / Longtermist related, with <10% animal related. But I think it would be a mistake to conclude from this that I’m longtermist-biased, even though that constitutes a larger proportion of my work.
Edit: I realise an alternative way to cash out your concern might be not in terms of bias towards animals relative to other cause areas, but rather that we should have people on both sides of all the key cause areas or key debates (e.g. we should have people on both extremes of being pro- and anti-animal, pro- and anti-AI, pro- and anti-GHD, and presumably also on other key questions like suffering focus, etc.).
If so, then I agree this would be desirable as an ideal, but (as you suggest) impractical (and perhaps undesirable) to achieve in a small team.
[1] This is within RP projects; if we included non-RP academic projects, the proportion of animal projects would be even lower.
It’s not that they’ve just worked on animal welfare. It’s that they have been animal rights advocates (which is great). Derek was the web developer for The Humane League for 5 years… which is fantastic and I love it, but it goes towards my point...
Thanks for the clarification. I was indeed trying to say option (a): that there’s a “bias towards animals relative to other cause areas”. Yes, I agree it would be ideal to have people on different sides of debates in these kinds of teams, but that’s often impractical and not my point here.
I still think that most independent people, taking a more balanced (less “cherry-picking”) look at the team’s work histories than mine, would find those histories to be at least moderately tilted towards animals.
I also agree that you, as the leader, aren’t in that category.
Thanks for clarifying!
Re. being biased in favour of animal welfare relative to other causes: I feel at least moderately confident that this is not the case. As the person overseeing the team, I would be very concerned if I thought it were. But it doesn’t match my experience of the team, who are equally happy to work on other cause areas (which is why we spent significant time proposing work across cause areas) and are primarily interested in addressing fundamental questions about how we can best allocate resources.[1]
I am much more sympathetic to the second concern I outlined (which you say is not your concern): we might not be biased in favour of one cause area against another, but we still might lack people on both extremes of all key debates. Both of us seem to agree this is probably inevitable (one reason: EA is heavily skewed towards people who endorse certain positions, as we have argued here, which is a reason to be sceptical of our conclusions and probe the implications of different assumptions).[2]
Some broader points:
I think that it’s more productive to focus on evaluating our substantive arguments (to see if they are correct or incorrect) than trying to identify markers of potential latent bias.
Our resource allocation work is deliberately framed in terms of open frameworks which allow people to explore the implications of their own assumptions.
And if the members of the team wanted to work solely on animal causes (in a different position), I think they’d all be well-placed to do so.
That said, I don’t think we do too badly here, even in the context of AW specifically, e.g. Bob Fischer has previously published on hierarchicalism (the view that humans matter more than other animals).