Putting numbers on things is good.
We already do it internally, and surfacing that allows us to see what we already think. Though I agree that we can treat made-up numbers too seriously. A number isn’t more accurate than “highly likely”; it’s more precise. Both can just as easily be mistaken.
I have time for people who say that quantifying feels exhausting or arrogant, but I think those are costs to be weighed against the precision of using numbers.
As I wrote in another post, I support the use of numbers, but it’s clear to me that some EAs think that quantifying something automatically reduces uncertainty / bias / motivated reasoning.
Agree that the main benefit of quantification is precision, not accuracy.
Precision is only sometimes warranted, though. For the same reason that in science we never report numbers to a higher precision than we can actually measure, it is misleading to quantify things when you actually have no idea what the numbers are.
I disagree. I think words are often just as bad for this. So it’s not the fault of quantification but an issue with communication in general.
Good point!
In its self-evaluation of its mistakes, the US intelligence community concluded that failure to quantify the likelihood that Saddam didn’t have WMDs was one of the reasons it messed up.
This led to forecasting tournaments, which in turn led to Tetlock’s superforecasting work. I think the orthodox view in EA is that Tetlock’s work is valuable and we should apply its insights.
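To make the payoff concrete: once forecasts are stated as probabilities, they can be scored after the fact, which a phrase like “highly likely” doesn’t allow. A minimal sketch using the Brier score, the accuracy metric used in Tetlock-style forecasting tournaments (the forecasters and numbers below are made up for illustration):

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and what happened.
    Lower is better; a constant 50% forecast always scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical track records: (stated probability, outcome: 1 = it happened).
vague_hedger = [(0.5, 1), (0.5, 0), (0.5, 1)]   # never commits beyond "maybe"
forecaster   = [(0.9, 1), (0.2, 0), (0.7, 1)]   # commits to numbers

print(brier_score(vague_hedger))  # 0.25
print(brier_score(forecaster))    # ~0.047
```

Nothing about the score makes the second forecaster right, of course; it just makes them checkable, which is the point of quantifying.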
Precisely!
The downvotes to comments like this are also bad practice IMO, separate from every other cultural practice raised in the discussion.
You can only numerically compare things that are linearly ordered. 2 is obviously bigger than 1. But you cannot say which of two lists of random numbers is bigger than the other; you would first have to pick a comparison rule, out of infinitely many possible rules, and only then say which is bigger.
Suppose you have two apples: one is bigger and has more calories, but the other has higher nutrient density. How do you declare which is the better apple? At best the answer is contextual, and at worst it’s impossible to solve. You cannot recommend someone eat one apple over another without making qualitative decisions about their health. Even if you tried to put a number on their health, the problem would recursively cascade down to infinity. They may need more energy to get through the day and so should go for the bigger apple, but why prefer short-term energy over long-term nutritional health? Why prefer the short term over the long term in general? These are necessarily qualitative decisions that utilitarianism cannot solve, because you mathematically cannot decide between non-linearly ordered objects without forming some subjective, qualitative philosophy first.
So when we say ‘you can’t put a number on everything’, it isn’t just a platitude, it’s a fact of the universe, and denying that is like denying gravity.
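To restate the mathematical point as code: a minimal sketch of the apple comparison, with made-up attribute values, showing that Pareto comparison is only a partial order and that any total ordering smuggles in a qualitative choice of weights:

```python
from typing import NamedTuple

class Apple(NamedTuple):
    calories: float          # short-term energy
    nutrient_density: float  # long-term nutrition

def pareto_better(a: Apple, b: Apple) -> bool:
    """True only if a is at least as good on every attribute and strictly
    better on at least one -- a partial order, not a total one."""
    return all(x >= y for x, y in zip(a, b)) and a != b

big   = Apple(calories=120, nutrient_density=0.4)
dense = Apple(calories=80,  nutrient_density=0.9)

# Neither apple Pareto-dominates the other: they are incomparable.
print(pareto_better(big, dense), pareto_better(dense, big))  # False False

# Picking weights *is* the subjective, qualitative step.
def score(a: Apple, w_energy: float, w_nutrition: float) -> float:
    return w_energy * a.calories + w_nutrition * a.nutrient_density

print(score(big, 1, 0) > score(dense, 1, 0))      # True: energy-first view
print(score(big, 0, 100) > score(dense, 0, 100))  # False: nutrition-first view
```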
I don’t understand this comment. People assign a number to consumer choices all the time, for example via the process of buying and selling things.
Now you can say prices are imperfect because of distributional concerns. But that is a specific tactical issue. Even after complete wealth redistribution, I expect market allocation of apples to be better than qualitative-philosophy-based allocation. But maybe this is a strawman and you’re only comparing two forms of technocratic distribution from on high (which is closer to EA decisions; chickens don’t participate in markets about their welfare)? But even then, numerical reasoning just seems much better for allocation than non-numerical reasoning. Specifically, I would guess the ranking to look like: markets > technocratic shadow markets > AI technocracy with ML optimized for preference elicitation > humans trying to do technocracy with numbers > humans trying to do technocracy without numbers.
This might just be my lack of physics knowledge speaking, but I think the ability to quantify the world is much more native to my experience than gravity is. Certainly it’s easier to imagine a universe without gravity than a universe where it’s impossible to assign numbers to some things.
(I think it’s reasonably likely I’m missing something, since this comment has upvotes and agreement and after several rereadings I still don’t get it).
I think this is a nitpick. In context, it’s not like I am arguing for saying “GiveWell is 6”; I am arguing that “I’m 90% sure that GiveWell does fantastic work” is a reasonable thing to say. That provides room for a roughly linear ordering.
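One way to see why confidence statements escape the ordering objection: a stated confidence is a single number in [0, 1], which is linearly ordered by construction, unlike multi-attribute objects. A tiny sketch with hypothetical claims:

```python
# Confidences live in [0, 1], so claims of this form sort cleanly --
# unlike the two-attribute apples above. These claims are hypothetical.
claims = [
    ("GiveWell does fantastic work", 0.90),
    ("This intervention is cost-effective", 0.65),
    ("The new program will pan out", 0.30),
]
for claim, p in sorted(claims, key=lambda c: c[1], reverse=True):
    print(f"{p:.0%}  {claim}")
```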
Wait… don’t all consequentialist normative ethical theories have a subjective qualitative philosophy to them, namely what they hold to be valuable? (Otherwise I’m not sure what you mean by “subjective qualitative” here at all; a Google search gives me nothing for “subjective qualitative philosophy”.)
Utilitarianism values happiness, so whichever apple consumption leads to more happiness and well-being is recommended.
Mohism values state welfare, so whichever apple consumption leads to better state welfare is recommended.
Christian Situational Ethics values love, so whichever apple consumption leads to more love in the world is recommended.
Intellectualism values knowledge, so whichever apple consumption leads to more knowledge is recommended.
Welfarism values economic well-being, so whichever apple consumption leads to more economic well-being or welfare is recommended.
Preference utilitarianism values preference satisfaction, so whichever apple consumption leads to the most overall preference satisfaction is recommended.
Utilitarianism is not and never has been just putting numbers on things. The numbers used are just instrumental to the end-goal of increasing whatever is valued. You might say “you can’t put a number on happiness”, to which I say we have proxies, and the numerical values of those proxies (e.g. calories and nutrient density[1]), when reasoned about carefully with the available evidence, give us a clearer picture of which actions lead to the end-goal of happiness maximization.
I kinda wanna push back here against what feels like a bizarre caricature of what it must mean to be a Utilitarian. You can be a diehard Utilitarian—live and abide by it—and do zero math, zero scary numbers, zero quantitative reasoning your whole life. All you do is vigorously try to increase happiness based on whatever qualitative reasoning you have, to the best of your abilities. That, and I suppose iterate on empirical evidence—which doesn’t have to include using numbers.
Useful numbers are placed on virtually all food items—numbers which, like most EA numbers, are estimations. But they are nonetheless useful if they can be reasonably interpreted as good proxies for, or correlates of, what you value; i.e. they imperfectly provide a roughly linear ordering.
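A minimal sketch of the proxy point above, mirroring the earlier apple example: each theory supplies its own qualitative account of value, and the numbers are only instrumental to it (the attributes and weights here are invented):

```python
# The apples from the earlier comment, as (calories, nutrient_density).
apples = {"big": (120, 0.4), "dense": (80, 0.9)}

# Each theory contributes a value function; the numbers are proxies for
# whatever it holds valuable. The weights below are made up.
value_functions = {
    "short-term energy":   lambda cal, nd: cal,
    "long-term nutrition": lambda cal, nd: nd,
    "a mixed account":     lambda cal, nd: 0.005 * cal + nd,
}

for theory, value in value_functions.items():
    best = max(apples, key=lambda name: value(*apples[name]))
    print(f"{theory}: eat the {best} apple")
```

The qualitative choice of value function comes first, exactly as the objection says; the disagreement is only over whether numbers are useful once that choice has been made.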