Strong upvote. Most people I have encountered who identify with SFE seem to subscribe to the practical interpretation. The core writings I have read (e.g. much of Gloor & Mannino’s or Vinding’s stuff) tend to make normative claims but mostly support them using interpretations of reality that do not at all match mine. I would be very happy if we found a way to avoid confusing personal best guesses with metaphysical truth.
Also, as a result of this deconfusion, I would expect there to be very few to no decision-relevant cases of divergence between “practically SFE” people and others, if all of them subscribe to some form of longtermism or suspect that there’s other life in the universe.
I didn’t vote on your comment, but I think you made some pretty important claims (critical of SFE-related work) with little explanation or substantiation:
The core writings I have read (e.g. much of Gloor & Mannino’s or Vinding’s stuff) tend to make normative claims but mostly support them using interpretations of reality that do not at all match mine. I would be very happy if we found a way to avoid confusing personal best guesses with metaphysical truth.
What interpretations are you referring to? When are personal best guesses and metaphysical truth confused?
very few to no decision-relevant cases of divergence between “practically SFE” people and others
Do you mean between “practically SFE” people and people who are neither “practically SFE” nor SFE?
Also, as a result of this deconfusion, I would expect there to be very few to no decision-relevant cases of divergence between “practically SFE” people and others, if all of them subscribe to some form of longtermism or suspect that there’s other life in the universe.
What do you mean? People working specifically to prevent suffering could be called “practically SFE” using the definition here. This includes people working in animal welfare pretty generally, and many of these do not hold principled SFE views. I think there are at least a few people working on s-risks who don’t hold principled SFE views, e.g. some people working at or collaborating with the Center on Long-Term Risk (s-risk-focused AI safety) or Sentience Institute (s-risk-focused moral circle expansion; I think Jacy is a classical utilitarian).
Why is suspecting that there’s other life in the universe relevant? And do you mean the accessible/observable universe?
(I’ve edited this comment a bunch for wording and clarity.)
Thank you (and an anonymous contributor) very much for this!
you made some pretty important claims (critical of SFE-related work) with little explanation or substantiation
If that in itself is what’s causing the downvotes, I would want to caution people against it; that’s how we end up in a bubble.
What interpretations are you referring to? When are personal best guesses and metaphysical truth confused?
E.g. in his book on SFE, Vinding regularly cites people’s subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition, but instead of simply sharing his own perspective and those of others, he immediately jumps to normative conclusions. I take issue with that; see below.
Do you mean between “practically SFE” people and people who are neither “practically SFE” nor SFE?
Between “SFE(-ish) people” and “non-SFE people”, indeed.
What do you mean [by “as a result of this deconfusion …”]?
I mean that, if you assume a broadly longtermist stance, then no matter your ethical theory, you should be most worried about humanity not continuing to exist: life might exist elsewhere, and we are still the most capable species known, so we might be able to help currently unknown moral patients (far away from us either in space or in time).
So in the end, you’ll want to push humanity’s development as robustly as possible to maximize the chance of future good and minimize the chance of future harm. Which projects to give which amount of resources to then seems a question of empirics, or rather epistemics, not ethics.
In practice, we almost never face decisions where we are sufficiently certain about the possible results for our choices to be dominated by our ethics. We need collective authoring of decisions, and given moral uncertainty, this decentralized computation seems to hinge on a robust synthesis of points of view. I don’t see a need to appeal to normative theories. Does that make sense?
I mean that, if you assume a broadly longtermist stance, then no matter your ethical theory, you should be most worried about humanity not continuing to exist: life might exist elsewhere, and we are still the most capable species known, so we might be able to help currently unknown moral patients (far away from us either in space or in time).
So in the end, you’ll want to push humanity’s development as robustly as possible to maximize the chance of future good and minimize the chance of future harm. Which projects to give which amount of resources to then seems a question of empirics, or rather epistemics, not ethics.
Why do you think SFE(-ish) people would expect working to prevent extinction to be the best way to reduce s-risks, including better than s-risk prioritization research, s-risk capacity building, incidental s-risk-focused work, and agential s-risk-focused work?
The argument for helping aliens depends on their distance from us over time. Under what kinds of credences do you think it dominates other forms of s-risk reduction? Do you think it’s unreasonable to hold credences under which it doesn’t dominate? Or that under most SFE(-ish) people’s credences, it should dominate? Why?
Why would they expect working to prevent extinction to prevent more harm/loss than it causes? It sounds like you’re assuming away s-risks from conflicts and threats, or assuming that we’ll prevent more of the harm from these (and other s-risks) than we’ll cause as we advance technologically, expand and interact with others. Why should we expect this? Getting closer to aliens may increase s-risk overall. EDIT: Also, it assumes that our descendants won’t be the ones spreading suffering.
E.g. in his book on SFE, Vinding regularly cites people’s subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition, but instead of simply sharing his own perspective and those of others, he immediately jumps to normative conclusions. I take issue with that; see below.
I have two interpretations of what you mean:
1. What should he have done before getting to normative conclusions? Or do you mean he shouldn’t discuss what normative conclusions (SFE views) would follow if we believed these accounts? Should he use more careful language? Give a more balanced discussion of SFE (including more criticism/objections, contrary accounts) rather than primarily defend and motivate it? I think it makes sense to discuss the normative conclusions that come from believing accounts that support SFE in a book on SFE (one at a time, in different combinations, not necessarily all of them simultaneously, for more robust conclusions).
2. Since you say “see below” and the rest of your comment is about what SFE(-ish) people should do (reduce extinction risk to help aliens later), do you mean specifically that such a book shouldn’t both motivate/defend SFE and make practical recommendations about what to do given SFE, so that these should be done separately?
I’m intrigued by which part of my comment seems to be dividing reactions. Feel free to PM me with a low-effort explanation. If you want to make it anonymous, drop it here.