Surely we should have nonzero credence, and maybe even >10%, that there aren't any crucial considerations we are missing on the scale of "consider nonhumans" or "consider future generations". In that case, we can bracket the worlds where we are missing a crucial consideration as too hard, and base our decision on the worlds where we already have all the crucial considerations. Which could still move us slightly away from pure agnosticism?

Your view seems to imply the futility of altruistic endeavour? Which of course doesn't mean it is incorrect; it just seems like an important implication.
In that case, we can bracket the worlds where we are missing a crucial consideration as too hard, and base our decision on the worlds where we already have all the crucial considerations.
Ah nice, so this could mean two different things:
A. (The "canceling out" objection to (complex) cluelessness:) We assume that good and bad unpredictable effects "cancel each other out", such that we are warranted in believing that whatever option is best according to predictable effects is also best according to overall effects, OR
B. (Giving up on impartial consequentialism:) We reconsider what matters for our decision and simply stop caring about whether our action makes the world better or worse, all things considered. Instead, we focus only on whether the parts of the world that are predictably affected in a certain way are made better or worse, and/or on things that have nothing to do with consequences (e.g., our intentions), and we ignore our decision's actual overall long-term impact, which we cannot figure out.
I think A is a big epistemic mistake, for the reasons given by, e.g., Lenman (2000), Greaves (2016), and Tarsney et al. (2024, §3).

Some version of B might be the right response in the scenario where we don't know what else to do anyway? I don't know. One version of B is explicitly given by Lenman, who says we should reject consequentialism. Another is implicitly given by Tarsney (2022) when he says we should focus on the next few thousand years and more or less admit we have no idea what our impact is beyond that. But then we're basically saying that we "got beaten" by cluelessness and are giving up on actually trying to improve the long-term future overall (which is what most longtermists claim our goal should be, for compelling ethical reasons). We can very well endorse B, but then we can't pretend we're trying to predictably improve the world. We're not. We're just trying to improve some aspects of the world, ignoring how this affects things overall (since we have no idea).
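To make concrete what A is actually doing, here is a toy sketch (the option names, numbers, and the use of a simple interval as a stand-in for an imprecise credence are all illustrative assumptions, not anything from this thread): under A we rank options by their predictable effects alone, whereas once the unpredictable long-run effects are represented at all honestly, the overall ranking can come out indeterminate.

```python
# Toy illustration of the work assumption A does (all numbers made up).
# Each option has a predictable short-term effect plus unpredictable long-run
# effects, represented here as an interval of possible expected values -- a
# crude stand-in for an imprecise credence.

options = {
    "help_dogs": {"predictable": 10.0, "unpredictable": (-1000.0, 1000.0)},
    "help_cats": {"predictable": 8.0, "unpredictable": (-1000.0, 1000.0)},
}

# Under A, the unpredictable effects "cancel out", so we rank by predictable
# effects alone.
best_under_A = max(options, key=lambda name: options[name]["predictable"])
print("Best option under A:", best_under_A)

# Without A, we compare ranges of overall value. If the ranges overlap,
# neither option dominates and the overall ranking is indeterminate.
def overall_range(name):
    low, high = options[name]["unpredictable"]
    return options[name]["predictable"] + low, options[name]["predictable"] + high

dog_range, cat_range = overall_range("help_dogs"), overall_range("help_cats")
determinate = dog_range[0] >= cat_range[1] or cat_range[0] >= dog_range[1]
print("Overall ranking determinate without A?", determinate)  # False here
```

The only point of the sketch is that A is doing real work: without it, the predictable difference (10 vs 8) gets swamped by the width of the unpredictable term, which is exactly the situation B responds to by restricting what we claim to care about.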
Your view seems to imply the futility of altruistic endeavour?
If you replace "altruistic endeavour" with "impartial consequentialism", then in the DogVCat case, yes, absolutely. But I didn't mean to imply that cluelessness in that case generalizes to everything (although I'm also not arguing it doesn't). There might be cases where we have arguments plausibly robust to many unknown unknowns that warrant updating away from agnosticism, e.g., arguments based on logical inevitabilities or unavoidable selection effects. In this thread, I've only argued that I'd be surprised if we found such a (convincing) argument for the DogVCat case specifically. But it may very well be that this generalizes to many other cases and that we should be agnostic about many other things, to the extent that we actually care about our overall impact.
And I absolutely agree that this is an important implication of my points here. I think the reason these problems are neglected by sympathizers of longtermism is that they (unwarrantedly) endorse A, or (also unwarrantedly) assume that because "wild guesses" often beat agnosticism in short-term geopolitical forecasting, they must also beat it when it comes to predicting our overall impact on the long-term future (see "Winning isn't enough").
I think I am quite sympathetic to A, and to the things Owen wrote in the other branch, especially about operationalizing imprecise credences. But this is sufficiently interesting and important-seeming that I'm noting down the references you give against A to read later.
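For what it's worth, here is a minimal sketch of one common way imprecise credences get operationalized in the decision-theory literature (a generic construction with made-up states, options, and numbers; I'm not claiming it's what Owen proposed in the other branch): represent the uncertainty as a set of probability distributions, and treat an option as ruled out only if some alternative beats it under every distribution in the set.

```python
# Minimal sketch of operationalizing imprecise credences via a credal set:
# a set of probability distributions over states, with an option ruled out
# only if some alternative has higher expected value under EVERY distribution
# ("maximality"). States, options, and numbers are purely illustrative.

states = ["long_run_good", "long_run_bad"]

payoffs = {
    "intervene": {"long_run_good": 100.0, "long_run_bad": -90.0},
    "do_nothing": {"long_run_good": 0.0, "long_run_bad": 0.0},
}

# Credal set: we cannot settle on a single probability of the good outcome.
credal_set = [{"long_run_good": p, "long_run_bad": 1 - p} for p in (0.3, 0.5, 0.7)]

def expected_value(option, dist):
    return sum(dist[s] * payoffs[option][s] for s in states)

def ruled_out(option):
    # Ruled out iff some alternative beats it under every distribution.
    return any(
        all(expected_value(alt, d) > expected_value(option, d) for d in credal_set)
        for alt in payoffs if alt != option
    )

for name in payoffs:
    print(name, "ruled out?", ruled_out(name))
# Here neither option is ruled out: the imprecise credences deliver no verdict,
# which is one way of cashing out the "agnosticism" being discussed above.
```

On this way of operationalizing things, cluelessness shows up as neither option being ruled out, which is roughly where the disagreement about A bites.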
Oh interesting, I would have guessed you'd endorse some version of B or come up with a C instead.
IIRC, these resources I referenced don't directly address Owen's points in favour of A, though. Not sure. I'll look into this, and into where they might be more straightforwardly addressed, since it seems quite important with respect to the work I'm currently doing. Happy to keep you updated if you want.
yeah sure, lmk what you find out!