You don’t engage much with the existing literature on cluelessness. This argument has been discussed:
One could try to argue against the relevance of moral dark matter by saying that its inscrutability means we’re wise to treat it as neutral. Then our actions have two parts: known effects predicted to be beneficial, and a vast bulk of moral dark matter, predicted to be net neutral. Despite the mass of the moral dark matter, if we can treat it as neutral it will not alter our moral calculus and is therefore irrelevant. But this seems a weak, abstract, and unfruitful argument.
Hilary Greaves in effect says that this argument works for what she calls “simple cluelessness” (because we can apply the Principle of Indifference to unforeseeable consequences), whereas “complex cluelessness” is trickier (because “in those cases, no form of indifference principle is at all plausible, and the threat of cluelessness is more genuine”).
Your “debugging model” seems to me to be a specific type of capacity-building—and you argue that capacity-building interventions fall prey to cluelessness. If debugging is a type of capacity-building, what is it about debugging that allows it to avoid the problems that you see with other forms of capacity-building? (And if not, in virtue of what is it not a form of capacity-building?)
Her first example of “complex cluelessness” is the same population size argument made by Mogensen, which I dealt with in section 2a. I think both simple and complex cluelessness are dealt with nicely by the debugging model I am proposing. But I’m not sure it’s a valid distinction. I suspect all cluelessness is complex.
Debugging is a form of capacity-building, but the distinction I drew is necessary. Sometimes we try to build advance capacity to solve an as-yet-intractable problem, as in AI safety research. This is vulnerable to the cluelessness argument. Even if we are successful in those efforts and manage to solve the problem, we still cannot predict all the precise long-term consequences. Too much moral dark matter remains. This form of capacity-building cannot stand up to Mogensen and Greaves’ critique, because it doesn’t address the problem they raise.
This debugging model does. Beyond our ability to build capacity to solve specific and known intractable problems, we already have, and likely always will have, the capacity to solve problems in general. Unknown unknowns become known, and then we solve them. We keep the good, fix the bad, and develop more wisdom to deal with the ugly.
I’m not planning on engaging further with the cluelessness literature because what I’ve seen makes me think GPI is off track. It strikes me as a combination of sophistry and obscurantism that I find hard to take seriously. This writing was an attempt to get my own thoughts in order. I invite others who find their ideas more compelling to explain why “debugging,” in conjunction with a frank acknowledgement that the future is risky, can’t account for cluelessness.
I’m not planning on engaging further with the cluelessness literature because what I’ve seen makes me think GPI is off track.
I think your dismissal is premature. For one thing, the “debugging” approach you favor has been discussed by Will MacAskill, a Senior Research Fellow at GPI:
If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes.
For another, the cluelessness literature isn’t exhausted by GPI’s contributions to it, and it includes other, more extensive discussions of your favorite approach, notably by Brian Tomasik:
Focusing on the very robust projects often amounts to punting the hard questions to future generations who will be better equipped to solve them.
I want to give more context for the MacAskill quote.
The most obvious implication [of the Hinge of History hypothesis], however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes.
Here, he is talking about strategies for solving specific problems, X-risks in this case. This is not relevant to the cluelessness argument advanced by Mogensen, which is the one I am addressing. Later in his article, though, he does touch on the topic.
Perhaps we’re at a really transformative moment now, and we can, in principle, do something about it, but we’re so bad at predicting the consequences of our actions, or so clueless about what the right values are, that it would be better for us to save our resources and give them to future longtermists who have greater knowledge and are better able to use their resources, even at that less pivotal moment.
Buck-passing, or punting, is compatible with the “debugging” concept, but not with Mogensen’s “cluelessness.” With debugging, you deliberate for as long as is possible or productive, and then act as wisely as possible. Once you’ve made a decision, you fix side-effect problems as they arise, which might include finding ways to reverse the decision where possible. Although some decisions will result in genuinely enormous moral disasters, such as slavery or Nazism, this approach appears to me to be both net good and our only choice.
With Mogensen’s cluelessness argument, it doesn’t matter how long you deliberate, because you have to be able to predict the ripple effects and their moral weights into the far future first. Since that’s impossible, you can never know the moral value of an action. We therefore can’t morally prefer one action over another. I’m not strawmanning this argument. It really is that extreme.
Buck-passing/punting is also not identical to “debugging.” In buck-passing or punting, we’re deferring a decision on a specific issue to a wiser future. A current ban on genetically engineered human embryos is an example. In debugging, we’re making a decision and trusting the future to resolve the unexpected difficulties. Climate change is an example: our ancestors created fossil fuel-based industry, and we are dealing with the unexpected consequences.
The reason I don’t feel the need to engage with the cluelessness literature is that, when sensible, it’s simply providing another approach to describing basic problems from economic theory and common sense, which I understand reasonably well and expect I can learn better from those sources. When done badly, it’s a salad of sophistry with a thick and unnecessary dressing of formal logic. I can’t read everything, and I think I’ll learn a lot more of value from studying, oh, almost anything else. These writers need to convince me that they’ve produced insights of value if they want me to engage. I’m just describing why they haven’t succeeded in that project so far.
By the way, I appreciate you responding to my post. Although I’m sure you can see I’ve got little patience for Mogensen and the cluelessness literature I’ve seen more generally, I think it’s important to have conversations about it. And it’s always nice to have someone take an interest.
A better alternative is to recognize that our own future selves, and our descendants, will be able to “debug” the unpredictable consequences of the actions we take and systems we create. They can do this by creating sustainable alternatives, building resiliency, and improving their planning and evaluation. They will be motivated by self-interest to do so, and enabled by their increasing knowledge. [emphasis mine]
This point doesn’t hold in the case of animal welfare. This might seem like a minor nitpick on my part, but for EAs who prioritize animal welfare yet are also concerned about long-term effects, it’s a pretty crucial thing to note. Indeed, I suspect that going with what seems best right now (without more thoroughly investigating the long-term consequences that we could in principle discover upon reflection) could harm the reputation of animal welfare activism, because it would seem especially reckless given that animals aren’t in a position to save themselves from the negative consequences of our choices.
An analogous point holds more weakly even for human-centric causes, I think. Just because future humans will be in a position to debug interventions we make in the present, that doesn’t make it prudent for us to neglect the work of considering the (often conflicting) long-term effects that we could identify if we worked harder. I worry that this attitude places a burden on future people that they didn’t ask for, unless I’m misunderstanding your general claim.
I think maintaining a lot of optionality winds up turning into risk aversion in practice.
Can you give a few examples? Having options and avoiding risk are both good things, all else being equal.