Influence on cosmic actors seems not only “plausible” but inevitable to me. Everything we do influences them in expectation, even if extremely indirectly (e.g., anything that directly or indirectly reduces X-risks reduces the likelihood of alien counterfactuals and increases the likelihood of interaction between our civilization and alien ones). The real questions seem to be (i) how crucial this influence is for evaluating whether the work we do is good or bad; and (ii) whether we can predictably influence them (right now, we know we are influencing them; we simply have no idea whether it is in a way that makes the future better or worse). I think your first section gives good arguments for answering “plausibly quite crucial” to (i). As for (ii), your fourth section roughly responds “maybe, but we’ve yet to figure out precisely how”, which seems fair (although, fwiw, I think I’m more skeptical than you that we’ll ever find evidence robust enough to warrant updating away from radical agnosticism on whether our influence on cosmic actors makes the future better or worse).
Also, this is unrelated to the point of your post, but I think your second section should invite us to reflect on whether longtermists can or should ignore the unpredictable (see, e.g., this recent comment thread and the references therein), since this may be a key (and controversial) assumption behind the objections you respond to.
fwiw, I think I’m more skeptical than you that we’ll ever find evidence robust enough to warrant updating away from radical agnosticism on whether our influence on cosmic actors makes the future better or worse
I guess there are various aspects worth teasing apart here, such as: humanity’s overall influence on other cosmic actors; a given altruistic community’s influence on cosmic actors; individual actions taken (at least partly) with an eye to having a beneficial influence on (or together with) other cosmic actors; and so on. Our analyses, our degrees of agnosticism, and our final answers can differ greatly across questions like these. For example, individual actions might be less difficult to optimize, given their smaller scale and given that we have greater control over them (even if they’re still very difficult to predict and optimize in absolute terms).
I also think a lot depends on the meaning of “radical agnosticism” here. A weak interpretation might be something like “we’ll generally be pretty close to 50/50, all things considered”. I’d agree that, in terms of long-term influence, that’s likely to be the best we can do for the most part (though I also think it’s an open question, and I don’t see much reason to be firmly convinced of, or committed to, the view that we won’t ever be able to do better).
A stronger interpretation might be something like “we’ll practically always be exactly at, or indistinguishably close to, 50/50, all things considered”. That version of radical agnosticism strikes me as too radical. On its face, it can seem like a stance of exemplary modesty, yet on closer examination I think it’s the opposite: an extremely strong claim. It seems to me like a striking “throw a ball in the air and have it land and balance perfectly on a needle” kind of coincidence to end exactly at, or indistinguishably close to, 50/50 (or at any other position of complete agnosticism, e.g. even if one rejects precise credences).[1]
For example, I think the point that we can’t rule out finding better, more confident answers in the future (e.g. with the help of new empirical insights, new conceptual frameworks, better AI tools, and so on) is alone a reason not to accept such “strong” radical agnosticism, since it suggests that further exploration is at least somewhat beneficial in expectation.
Similarly, if you’ve weighed a set of considerations that point vaguely in one direction, it would seem like quite a coincidence if “unknown considerations” were to exactly cancel them out. I see you’ve discussed whether unknown considerations might be positively correlated with known considerations, but even zero correlation (which is arguably a defensible prior) would still lead you to go with the conclusion drawn from the known considerations; you’d seemingly need to assume a (weakly) negative correlation to consistently get back to a position of complete agnosticism.
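To make the zero-correlation point concrete, here is a toy Monte Carlo sketch (entirely my own construction, not a model from the post): the known considerations are summarized as a small positive number, and the unknown considerations as independent mean-zero draws. With zero correlation, the expected total keeps the sign of the known considerations, and the probability that the total stays positive remains above 50%:

```python
import random

random.seed(0)

# Toy model (hypothetical numbers): "known" is the net weight of the
# considerations we've actually weighed; each unknown consideration is
# an independent mean-zero draw, i.e. zero correlation with the knowns.
known = 0.3
n_unknowns = 5
trials = 100_000

favourable = 0
total_sum = 0.0
for _ in range(trials):
    unknown = sum(random.gauss(0, 1) for _ in range(n_unknowns))
    total = known + unknown
    total_sum += total
    if total > 0:
        favourable += 1

# The mean of the total stays near +0.3, and P(total > 0) exceeds 0.5,
# so acting on the known considerations beats complete agnosticism.
print("mean of total:", total_sum / trials)
print("P(total > 0):", favourable / trials)
```

For unknowns to drag you back to exactly 50/50, their expected contribution would have to be exactly minus 0.3 here, i.e. a negative correlation with the knowns, which is the coincidence the paragraph above points at.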
It seems to me like a striking “throw a ball in the air and have it land and balance perfectly on a needle” kind of coincidence to end exactly at, or indistinguishably close to, 50/50 (or at any other position of complete agnosticism, e.g. even if one rejects precise credences).
I don’t see how this critique applies to imprecise credences. Imprecise credences by definition don’t say “exactly 50/50.”
it seems to me like a striking … kind of coincidence to end exactly at, or indistinguishably close to, … any position of complete agnosticism
That is, I think it tends to apply to complete and perfect agnosticism in general, even if one doesn’t frame or formulate things in terms of 50⁄50 or the like. (Edit: But to clarify, I think it’s less striking the less one has thought about a given choice and the less the options under consideration differ in character; so I think there are many situations in which practically complete agnosticism is reasonable.)