I’m not sure exactly what follows from this. I’m a bit worried you’re concentrating on the wrong metric (success) when it’s outputs that are more important. Can you explain why you focus on outcomes?
I’m not sure I agree that outputs are more important. I think it depends a lot on the question or decision we’re considering, which is why I highlighted a careful choice of metric as one of the key pieces of advice.
So e.g. if our goal is to set performance incentives (e.g. salaries), then it may be best to reward people for things that are under their control: for example, pay people more if they work longer hours (an input), or if there are fewer spelling mistakes in their reports (a cardinal output metric). At other times, paying more attention to inputs or outputs, rather than to outcomes or other things beyond the individual performer’s control, may be justified by considerations around e.g. fairness or equality.
All of these things are of course really important to get right within the EA community as well, whether we care about them instrumentally or intrinsically. There are a lot of tricky and messy questions here.
But if we can say anything general, then I think that, especially in EA contexts, we care more, or more often, about outcomes/success/impact on the world, and less about inputs and outputs, than usual. We want to maximize well-being, and from ‘the point of view of the universe’ it doesn’t ultimately matter whether someone is happy because someone else produced more outputs or because the same outputs had greater effects. Nor does it ultimately matter whether impact differences are due to differences in talent, resource endowments, motivation, luck, or …
Another way to see this: actors who care more about inputs or outputs often do so because they don’t internalize all the benefits from outcomes. But if a decision is motivated by impartial altruism, there is a sense in which there are no externalities.
Of course, we need to make all the usual caveats against ‘naive consequentialism’. But I do think there is something important in this observation.
I was thinking the emphasis on outputs might be the important part: outputs are more controllable than outcomes, and so are the decision-relevant bit, even though what we ultimately want to maximise is impartial value (outcomes).
I can imagine someone reasoning as follows: “we must find and fund the best scientists because they have such outsized outcomes, in terms of citations.” But that might be naive if it’s really just the top scientist who captures the citations, while the work of all the good scientists contributes more or less equally to impartial value.
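To make that worry concrete, here’s a minimal toy sketch in Python. All the numbers are invented for illustration: it just assumes a pool of similarly good scientists whose true contributions to impartial value are nearly equal, while citation counts follow a heavy-tailed distribution, so the most-cited scientist’s share of citations can far exceed their share of the value produced.

```python
import random

# Toy illustration (all numbers invented): a pool of good scientists whose
# contributions to impartial value are nearly equal, but whose citation
# counts are heavy-tailed and so concentrate on whoever happens to be "top".
random.seed(0)

n = 20
value = [1.0 + random.uniform(-0.1, 0.1) for _ in range(n)]   # near-equal true contributions
citations = [random.paretovariate(1.1) for _ in range(n)]     # heavy-tailed credit

# Pick the scientist a citation-based funder would single out.
top = max(range(n), key=lambda i: citations[i])

print(f"most-cited scientist's share of citations: {citations[top] / sum(citations):.0%}")
print(f"same scientist's share of impartial value: {value[top] / sum(value):.0%}")
```

Under these (made-up) assumptions the two shares come apart sharply, which is just the point above: ranking by citations tells you who captured the credit, not necessarily who produced more value.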
FWIW, it’s not clear we’re disagreeing!