“There is a possibility that it (a more explicit discussion regarding values and prioritization in science) could backfire if complex questions become politicized and reduced to twitter discussions that in turn makes science policy more political and less tractable to work with.”
Strongly agree with the risk of backfiring, and I think this is more likely than things going well.
I think if we promoted explicitly value-driven science or discussion of it, the values that drive research priorities are more likely to become ‘social justice values’ than effective altruist values, leading to a focus on unsystematically selected, crowded and intractable cause areas, such as outcome inequalities amongst ethnic groups and sexes in rich English-speaking democracies. This is because these are the values more likely to be held by the people setting research priorities, not effective altruist values. I also think a change in this direction would be very difficult to reverse.
I think a better idea would be to selectively and separately campaign for research priorities to shift in predefined directions (e.g., one campaign for more focus on the problems affecting the global poor, another for future generations, and another for animals).
Thanks for your comment! I’m uncertain; I think it might also depend on the context in which the discussion is brought up and on the framing. But it’s a tricky one for sure, and I agree that specific, targeted advocacy seems less risky.