Thank you (and an anonymous contributor) very much for this!
you made some pretty important claims (critical of SFE-related work) with little explanation or substantiation
If that’s what’s causing downvotes in and of itself, I would want to caution people against it—that’s how we end up in a bubble.
What interpretations are you referring to? When are personal best guesses and metaphysical truth confused?
E.g. in his book on SFE, Vinding regularly cites people’s subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition, but instead of simply presenting his own perspective and those of others, he immediately jumps to normative conclusions. I take issue with that; see below.
Do you mean between “practically SFE” people and people who are neither “practically SFE” nor SFE?
Between “SFE(-ish) people” and “non-SFE people”, indeed.
What do you mean [by “as a result of this deconfusion …”]?
I mean that, if you assume a broadly longtermist stance, then no matter your ethical theory, you should be most worried about humanity not continuing to exist: life might exist elsewhere, and we’re still the most capable species known, so we might be able to help currently unknown moral patients (whether far away from us in space or in time).
So in the end, you’ll want to push humanity’s development as robustly as possible to maximize the chances of future good and minimize the chances of future harm. Which projects to give resources to, and how much, then seems a question of empirics, or rather epistemics, not ethics.
In practice, we almost never face decisions where we are sufficiently certain about the possible outcomes for our choices to be dominated by our ethics. We need collective authoring of decisions and, given moral uncertainty, this decentralized computation seems to hinge on a robust synthesis of points of view. I don’t see a need to appeal to normative theories.
Does that make sense?
I mean that, if you assume a broadly longtermist stance, then no matter your ethical theory, you should be most worried about humanity not continuing to exist: life might exist elsewhere, and we’re still the most capable species known, so we might be able to help currently unknown moral patients (whether far away from us in space or in time).
So in the end, you’ll want to push humanity’s development as robustly as possible to maximize the chances of future good and minimize the chances of future harm. Which projects to give resources to, and how much, then seems a question of empirics, or rather epistemics, not ethics.
Why do you think SFE(-ish) people would expect working to prevent extinction to be the best way to reduce s-risks, including better than s-risk prioritization research, s-risk capacity building, incidental s-risk-focused work, and agential s-risk-focused work?
The argument for helping aliens depends on their distance from us over time. Under what kinds of credences do you think it dominates other forms of s-risk reduction? Do you think it’s unreasonable to hold credences under which it doesn’t dominate? Or that under most SFE(-ish) people’s credences, it should dominate? Why?
Why would they expect working to prevent extinction to prevent more harm/loss than it causes? It sounds like you’re assuming away s-risks from conflicts and threats, or assuming that we’ll prevent more of the harm from these (and other s-risks) than we’ll cause as we advance technologically, expand and interact with others. Why should we expect this? Getting closer to aliens may increase s-risk overall. EDIT: Also, it assumes that our descendants won’t be the ones spreading suffering.
E.g. in his book on SFE, Vinding regularly cites people’s subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition, but instead of simply presenting his own perspective and those of others, he immediately jumps to normative conclusions. I take issue with that; see below.
I have two interpretations of what you mean:
1. What should he have done before getting to normative conclusions? Or do you mean he shouldn’t discuss what normative conclusions (SFE views) would follow if we believed these accounts? Should he use more careful language? Give a more balanced discussion of SFE (including more criticism/objections and contrary accounts) rather than primarily defend and motivate it? I think it makes sense, in a book on SFE, to discuss the normative conclusions that follow from believing accounts that support SFE (taking them one at a time and in different combinations, not necessarily all simultaneously, for more robust conclusions).
2. Since you say “see below” and the rest of your comment is about what SFE(-ish) people should do (reduce extinction risk to help aliens later), do you mean specifically that such a book shouldn’t both motivate/defend SFE and make practical recommendations about what to do given SFE, so that these should be done separately?