In which case we can bracket the worlds where there is a crucial consideration we are missing as too hard, and base our decision on the worlds where we already have the most crucial considerations.
Ah nice, so this could mean two different things:
A. (The ‘canceling out’ objection to (complex) cluelessness:) We assume that good and bad unpredictable effects “cancel each other out” such that we are warranted to believe whatever option is best according to predictable effects is also best according to overall effects, OR
B. (Giving up on impartial consequentialism:) We reconsider what matters for our decision and simply decide to stop caring about whether our action makes the World better or worse, all things considered. Instead, we focus only on whether the parts of the World that are predictably affected a certain way are made better or worse and/or about things that have nothing to do with consequences (e.g., our intentions), and ignore the actual overall long-term impact of our decision which we cannot figure out.
I think A is a big epistemic mistake, for the reasons given by, e.g., Lenman (2000), Greaves (2016), and Tarsney et al. (2024, §3).
Some version of B might be the right response in the scenario where we don’t know what else to do anyway? I don’t know. One version of B is explicitly given by Lenman, who says we should reject consequentialism. Another is implicitly given by Tarsney (2022) when he says we should focus on the next few thousand years and sort of admit we have no idea what our impact is beyond that. But then we’re basically saying that we “got beaten” by cluelessness and are giving up on actually trying to improve the long-term future, overall (which is what most longtermists claim our goal should be, for compelling ethical reasons). We can very well endorse B, but then we can’t pretend we’re trying to actually predictably improve the World. We’re not. We’re just trying to improve some aspects of the World, ignoring how this affects things overall (since we have no idea).
Your view seems to imply the futility of altruistic endeavour?
If you replace “altruistic endeavour” with “impartial consequentialism”, then in the DogVCat case, yes, absolutely. But I didn’t mean to imply that cluelessness in that case generalizes to everything (although I’m also not arguing it doesn’t). There might be cases where we have arguments plausibly robust to many unknown unknowns that warrant updating away from agnosticism, e.g., arguments based on logical inevitabilities or unavoidable selection effects. In this thread, I’ve only argued that I’d be surprised if we found such a (convincing) argument for the DogVCat case, specifically. But it may very well be that this generalizes to many other cases and that we should be agnostic about many other things, to the extent that we actually care about our overall impact.
And I absolutely agree that this is an important implication of my points here. I think the reason these problems are neglected by sympathizers of longtermism is that they (unwarrantedly) endorse A, or (also unwarrantedly) assume that because ‘wild guesses’ are often better than agnosticism in short-term geopolitical forecasting, they’re also better when it comes to predicting our overall impact on the long-term future (see ‘Winning isn’t enough’).
I think I am quite sympathetic to A, and to the things Owen wrote in the other branch, especially about operationalizing imprecise credences. But this is sufficiently interesting and important-seeming that I am making a note to later read some of the references you give for A being false.
Oh interesting, I would have guessed you’d endorse some version of B or come up with a C, instead.
IIRC, these resources I referenced don’t directly address Owen’s points in support of A, though. Not sure. I’ll look into this and into where they might be more straightforwardly addressed, since this seems quite important w.r.t. the work I’m currently doing. Happy to keep you updated if you want.
yeah sure, lmk what you find out!