Thank you for this response! I think I largely agree with you, and plan to add some (marked) edits as a result. More specifically,
On the 80K problem profile:
I think you are right; they are value-oriented in that they implicitly argue for the targeted approach. I do think they could have made it a little clearer, as much (most?) of the actual work they recommend or list as examples seems to be research-style. The key (and important) exception that I ignored in the post is the “3. Fostering adoption of the best proven techniques in high-impact areas” work they recommend, which I should not have overlooked. (I will edit that part of my post, and likely add a new example of research-level value-neutral IIDM work, like a behavioral science research project.)
“I don’t think the value-neutral version of IIDM is really much of a thing in the EA community”
Once again, I think I agree, although I think there are some rationality/decision-making projects that are popular but not very targeted or value-oriented. Does that seem reasonable? The CES example is quite complicated, but I’m not sure it should be disqualified here. (To be clear, however, I do think CES seems to do very valuable work—I’m just not exactly sure how to evaluate it.)
Side note, on “a core tenet of democracy is the idea that one citizen’s values and policy preferences shouldn’t count more than another’s”
I agree that this is key to democracy. However, I do think it is valid to discuss to what extent voters’ values align with actual global good (and I don’t think this opinion is very controversial). For instance, voters might be more nationalistic than one might hope, they might undervalue certain groups’ rights, or they might not value animal or future lives. So I think that, to understand the actual (welfare) impact of an intervention that improves a government’s ability to execute its voters’ aims, we would need to consider more than democratic values. (Does that make sense? I feel like I might have misinterpreted what you were trying to say a bit, and am not sure that I am explaining myself properly.) On the other hand, it’s possible that good government decision-making is bottlenecked more by its ability to execute its voters’ aims than by the ethical alignment of those voters’ values—but I still wish this were more explicitly considered.
“It looks like you’re essentially using decision quality as a proxy for institutional power, and then concluding that intentions x capability = outcomes.”
I think I explained myself poorly in the post, but this is not how I was thinking about it. I agree that the power of an institution is (at least) as important as its decision-making skill (although it does seem likely that these things are quite related), but I viewed IIDM as mostly focused on decision-making and set power aside. If I were to draw this out, I would add power/scope of institutions as a third axis or dimension (although I would worry about presenting a false picture of orthogonality between power and decision quality). The impact of an institution would then be related to the relevant volume of a rectangular prism, not the relevant area of a rectangle. (Note that the visualization approach in the “A few overwhelmingly harmful institutions” image is another way of drawing volume or a third dimension, I think.) I might add a note along these lines to the post to clarify things a bit.
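To make the geometry concrete, here is a rough sketch in my own shorthand (these exact formulas are not in the post; they are only meant to illustrate the “area” versus “volume” picture):

$$\text{impact} \;\propto\; \underbrace{\text{value alignment} \times \text{decision quality}}_{\text{area of a rectangle}}$$

$$\text{impact} \;\propto\; \underbrace{\text{value alignment} \times \text{decision quality} \times \text{power/scope}}_{\text{volume of a rectangular prism}}$$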
About “the distinction between stated values and de facto values for institutions”
You’re right, I am very unclear about this (and it’s probably muddled in my head, too). I am basically always trying to talk about the de facto values. For instance, if a finance company whose only aim is to profit also incidentally brings a bunch of value to the world, then I would view it as value-aligned for the purpose of this post. To answer your question about the typical private health insurance company, “does bringing its (non-altruistic) actions into greater alignment with its (altruistic) goals count as improving decision quality or increasing value alignment under your paradigm”—it would count as increasing value alignment, not improving decision quality.
Honestly, though, I think this means I should be much more careful about this term, and probably just clearly differentiate between “stated-value-alignment” and “practical-value-alignment.” (These are terrible and clunky terms, but I cannot come up with better ones on the spot.) I also think that my own note about “well-meaning [organizations that] have such bad decision quality that they are actively counterproductive to their aims” clashes with the “value-alignment” framework. I think there is a good chance that the framework does not work very well for organizations whose main stated aim is to do good (of some form). I’ll definitely think more about this and try to come back to it.
“The professional world is incredibly siloed, and it’s not hard at all for me to imagine that ostensibly publicly available resources and tools that anyone could use would, in practice, be distributed through networks that ensure disproportionate adoption by well-intentioned individuals and groups. I believe that something like this is happening with Metaculus, for example.”
This is a really good point (and something I did not realize, probably in part due to a lack of background). Would you mind if I added an excerpt from this or a summary to the post?
On your note about “generic-strategy”: Apologies for that, and thank you for pointing it out! I’ll make some edits.
Note: I now realize that I have basically inverted normal comment-response formatting in this response, but I’m too tired to fix it right now. I hope that’s alright!
Once again, thank you for this really detailed comment and all the feedback—I really appreciate it!
“Once again, I think I agree, although I think there are some rationality/decision-making projects that are popular but not very targeted or value-oriented. Does that seem reasonable?”
It does, and I admittedly wrote that part of the comment before fully understanding your argument about classifying the development of general-use decision-making tools as being value-neutral. I agree that there has been a nontrivial focus on developing the science of forecasting and other approaches to probability management within EA circles, for example, and that those would qualify as value-neutral under your definition, so my earlier statement that value-neutral IIDM is “not really a thing” in EA was unfair.
“If I were to draw this out, I would add power/scope of institutions as a third axis or dimension (although I would worry about presenting a false picture of orthogonality between power and decision quality). The impact of an institution would then be related to the relevant volume of a rectangular prism, not the relevant area of a rectangle.”
Yeah, I also thought of suggesting this, but I think it’s problematic as well. As you say, power/scope is correlated with decision quality, although more over a long time horizon than in the short term, and more for some kinds of organizations (corporations, media, certain kinds of nonprofits) than others (foundations, local/regional governments). I think it would be more parsimonious to just replace decision quality with institutional capabilities on the graphs and to frame DQ in the text as a mechanism for increasing the latter. (Edited to add: another complication is that the line between institutional capabilities that come from DQ and capabilities that come from value shift is often blurry. For example, a nonprofit could decide to change its mission in such a way that the scope of its impact potential becomes much larger, e.g., by shifting to a wider geographic focus. This would represent a value improvement by EA standards, but it also means that the nonprofit might open itself up to greater possibilities for scale by being able to access new funders, etc.)
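For what it’s worth, one way to write out the reframing I’m suggesting (this is just my shorthand, and the extra inputs to capabilities are placeholders rather than a settled list):

$$\text{impact} \;\propto\; \text{value alignment} \times \text{institutional capabilities}, \qquad \text{capabilities} = f(\text{decision quality}, \text{resources}, \text{scope}, \ldots)$$

Here decision quality enters as one mechanism among several rather than as its own axis.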
“Would you mind if I added an excerpt from this or a summary to the post?”
No problem, go ahead!