I’ve been trying to explain this to people for a while, usually appealing to some examples from game theory, but this is a really clear and useful way of framing it. It should’ve occurred to me when reading Bostrom’s Infinite Ethics.
This is a bit of a tangent, but one problem I’ve been encountering when applying the types of decision procedures you suggest at the end is that in certain systems (I’ve heard them called anti-inductive systems), the procedures need to be parameterized on the state of the system. That step seems to be cognitively hard for some people to come up with, or even to follow – a common pitfall. So my hypothesis is that making this distinction clear should be a crucial part of our messaging in this context, and first we, as a movement, may need to understand it better ourselves.
A person might, for example, act according to the categorical imperative hoping to maximize utility. Intuitively they might look for proxies such as “Donate to the charity with the lowest overhead” or, better, “Donate to the charity that affects the greatest number of sentient beings.” The values of these proxies don’t decrease when the person donates to the charity. But such heuristics will fall short unless something like “donate to the charity that is also the most neglected” is added in – something that the donation itself decreases. All the newspaper critiques of EA attest to how unintuitive this parameterization seems to be to people: “Hurr durr, EAs think buying bednets is best, but that’s silly; if everyone did that, we’d die of treatable cancer, and what would we do with all those bednets anyway?”
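The difference can be made concrete with a toy sketch (all charity names and numbers here are hypothetical, just to illustrate the mechanism): a recommendation parameterized on the state of the system changes as donations themselves change that state.

```python
# Toy model: "neglectedness" as a remaining funding gap. The
# recommendation is a function of the current state, and donating
# mutates that very state, so the best pick can change between queries.

funding_gaps = {"bednets": 1_000_000, "deworming": 600_000}

def best_charity(state):
    # Recommend whichever charity currently has the largest funding gap.
    return max(state, key=state.get)

def donate(state, charity, amount):
    # The donation itself shrinks the gap – i.e., it changes the state
    # that the next recommendation depends on.
    state[charity] -= amount

print(best_charity(funding_gaps))         # -> bednets
donate(funding_gaps, "bednets", 500_000)
print(best_charity(funding_gaps))         # -> deworming
```

An unparameterized heuristic is the same function with the state frozen at publication time – which is exactly what goes stale.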
When unparameterized messages are published on purpose, e.g., to save readers the time of checking the state of the system themselves, they need to be updated regularly, the way GiveWell’s and ACE’s are. But even GiveWell and ACE don’t really promote the unparameterized versions much; rather they promote a different, parameterized version: “Donate to whatever we recommend.” They’re sort of like a cache for the parameterized function, and they invalidate the cache once per year.
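The cache analogy can be spelled out in code – a rough sketch, not anything resembling how GiveWell actually works, with the yearly refresh and the evaluation function both stand-ins:

```python
import time

ONE_YEAR = 365 * 24 * 3600  # cache TTL in seconds

def evaluate_charities(state):
    # Stand-in for the expensive, state-dependent research process.
    return max(state, key=state.get)

_cache = {"value": None, "stamp": 0.0}

def recommendation(state, now=None):
    # The published advice: a cached result of the parameterized
    # function, refreshed ("cache invalidated") once a year.
    now = time.time() if now is None else now
    if _cache["value"] is None or now - _cache["stamp"] > ONE_YEAR:
        _cache["value"] = evaluate_charities(state)
        _cache["stamp"] = now
    return _cache["value"]
```

Between refreshes the recommendation stays fixed even if the underlying state has already shifted – which is the failure mode when readers treat the cached answer as timeless.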
Within the movement, meat offsetting is one place where the messaging often falls short in this fashion. The part of the message that usually goes missing is “But check in with us yearly to get your updated offsetting price.” Get a million people to offset their consumption, and offsetting will become a lot more expensive – but unless they check back every year, they’ll fail to notice.
80k is probably the next level here, since its advice is parameterized not only on the state of the system but also on the person seeking its advice. I wonder what the third level is.
Maybe EA is unusual among social movements in that it is based on parameterized messages, and maybe that’s something that makes it less accessible for people who are used to other social movements.