I think it’s odd how we have spent so much time burying old chestnuts like “I don’t want to be an EA because I’m a socialist” or “I don’t want to be an EA because I don’t want to earn to give,” and yet now we have people saying they are abandoning the community over an amateur personal theory of how they can do cause prioritization better than everyone else.
The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and raises questions about this guy’s familiarity with reports from sources like GiveWell, ACE, and the amateur stuff that gets posted around here. And that’s odd because I’m pretty sure I’ve seen this guy around the discourse for a while.
Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric
This is incorrect anyway. First, even total central planners don’t really need a totalizing metric; actual totalitarian governments have existed and they have not used such a metric (AFAIK).
Second, as long as your actions impact everything, a totalizing metric might be useful. There are non-totalitarian agents whose actions impact everything. In practice though it’s just not really worth the effort to quantify so many things.
so to some extent proposing such a metric is proposing a totalitarian central planner, or at least a notional one like a god
LOL, yes, if we agree and disagree with him in just the right combination of ways to give him an easy counterpunch. Wow, he really got us there!
Second, as long as your actions impact everything, a totalizing metric might be useful.
Wait, is your argument seriously “no one does this so it’s a strawman, and also it makes total sense to do for many practical purposes”? What’s really going on here?
It’s conceptually sensible, but not practically sensible given the level of effort that EAs typically put into cause prioritization. Actually measuring Total Utils would require a lot more work.
Linear programming was invented in the Soviet Union to centrally plan production with a single computational optimization.
Still sounds like their metric was just economic utility from production, which does not encompass many other policy goals (like security, criminal justice, etc.).
The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and raises questions about this guy’s familiarity with reports from sources like Givewell, ACE, and amateur stuff that gets posted around here.
Some claim to, others don’t.
I worked at GiveWell / Open Philanthropy Project for a year, and I wrote up some of those reports. GiveWell explicitly does not score all recommendations on a unified metric; I linked to the “Sequence vs Cluster Thinking” post, which makes this quite clear. But at the time, there were four paintings on the wall of the GiveWell office illustrating the four core GiveWell values, and one was titled “Utilitarianism,” the moral philosophy distinguished from others (and in particular from the broader class “consequentialism”) by the claim that you should use a single totalizing metric to assess right action.
OK, the issue here is that you are assuming the metrics used in moral philosophy and in cause prioritization have to be the same. But there’s just no need for that: cause prioritization metrics need to be valid with respect to your moral philosophy, but that doesn’t mean they need to be identical to it.
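As a toy sketch of the distinction being argued about (all cause names and numbers below are made up for illustration): collapsing causes onto a single totalizing metric yields one ranking automatically, while a cluster-style evaluation keeps several incommensurable criteria side by side and forces the weighting step out into the open.

```python
# Hypothetical causes scored on three hypothetical criteria:
# (utils_per_dollar, evidence_strength, neglectedness)
causes = {
    "bednets":       (8.0, 0.9, 0.4),
    "deworming":     (12.0, 0.5, 0.6),
    "policy_reform": (20.0, 0.2, 0.8),
}

def single_metric_ranking(causes):
    """Single totalizing metric: collapse everything into one number and sort."""
    return sorted(causes, key=lambda c: causes[c][0], reverse=True)

def criteria_table(causes):
    """Cluster-style view: keep the criteria separate. No single ordering
    falls out unless you also choose weights, which is the contested step."""
    names = ("utils/$", "evidence", "neglected")
    return {c: dict(zip(names, v)) for c, v in causes.items()}

print(single_metric_ranking(causes))
# The single metric puts policy_reform first; the table shows why an
# evaluator using separate criteria might still not fund it (weak evidence).
print(criteria_table(causes)["policy_reform"])
```

The point of the sketch is only that the two procedures can disagree, not that either set of numbers is realistic.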