I strongly encourage you (and everyone else) not to call “practical SFE” SFE. It’s much better (analytically) to distinguish the value of causing happiness and preventing suffering from empirical considerations. Under your definition, if (say) utilitarianism is true, then SFE is true given certain empirical circumstances but not others. This is an undesirable definition. Anything called SFE should contain a suffering-focused ranking of possible worlds (for an SF theory of the good) or a ranking of possible actions (for an SF theory of the right), not merely a contingent decision procedure. Otherwise the fact that someone accepts SFE is nearly meaningless; it does not imply that they would be willing to sacrifice happiness to prevent suffering, that they should be particularly concerned with s-risks, etc.
Practical SFE views . . . are compatible with a vast range of ethical theories. To adopt a practical SFE view, one just needs to believe that suffering has a particularly high practical priority.
This makes SFE describe the options available to us, rather than how to choose between those options. That is not what an ethical theory does. We could come up with a different term for the practical importance of preventing suffering at the margin, but I don’t think it would be very useful: given an ethical theory, we should compare different specific possibilities rather than saying “preventing suffering tends to be higher-leverage now, so let’s just focus on that.” That is, “practical SFE” (roughly, the thesis that the best currently-available actions in our universe generally decrease expected suffering much more than they increase expected happiness) has quite weak implications: it does not imply that the best thing we can do involves preventing suffering. To get that implication, the truth of “practical SFE” would have to be a feature of each agent (and the options available to them) rather than of the universe.
Edit: there are multiple suffering-related ethical questions we could ask. One is “what ought we—humans in 2021 in our particular circumstances—to do?” Another is “what is good, and what is right?” The second question is more general (we can plug empirical facts into an answer to the second to get an answer to the first), more important, and more interesting, so I want an ethical theory to answer it.
I strongly agree with this. I’ve had lots of frustrating conversations with SFE-sympathetic people that slide back and forth between ethical and empirical claims about the world, and I think it’s quite important to carefully distinguish between the two.
The whole “practical SFE” thing also seems to contradict this statement early in the OP:
1.2 Does SFE assume that there is more suffering than happiness in most people’s lives?
No, SFE’s core claim is that reducing suffering is more morally important than increasing happiness. This normative claim does not hinge on the empirical quantity of suffering and happiness in most people’s lives.
This may be true for some diehard suffering-focused EAs, but in my practical experience many people adduce arguments like this to explain why they are sympathetic to SFE. This is quite frustrating, since AFAICT (and perhaps the author would agree) these contingent facts have absolutely no bearing on whether e.g. total utilitarianism is true.
Yeah, “downside-focused” is probably a better term for this.
I think prioritarianism and sufficientarianism are particularly likely to prioritize suffering, though, and being able to talk about this is useful. But maybe we should just say they are more suffering-focused than classical utilitarianism, not that they are suffering-focused or “practically SFE”.
Strong upvote. Most people who identify with SFE I have encountered seem to subscribe to the practical interpretation. The core writings I have read (e.g. much of Gloor & Mannino’s or Vinding’s stuff) tend to make normative claims but mostly support them using interpretations of reality that do not at all match mine. I would be very happy if we found a way to avoid confusing personal best guesses with metaphysical truth.
Also, as a result of this deconfusion, I would expect there to be very few to no decision-relevant cases of divergence between “practically SFE” people and others, if all of them subscribe to some form of longtermism or suspect that there’s other life in the universe.
I didn’t vote on your comment, but I think you made some pretty important claims (critical of SFE-related work) with little explanation or substantiation:
The core writings I have read (e.g. much of Gloor & Mannino’s or Vinding’s stuff) tend to make normative claims but mostly support them using interpretations of reality that do not at all match mine. I would be very happy if we found a way to avoid confusing personal best guesses with metaphysical truth.
What interpretations are you referring to? When are personal best guesses and metaphysical truth confused?
very few to no decision-relevant cases of divergence between “practically SFE” people and others
Do you mean between “practically SFE” people and people who are neither “practically SFE” nor SFE?
Also, as a result of this deconfusion, I would expect there to be very few to no decision-relevant cases of divergence between “practically SFE” people and others, if all of them subscribe to some form of longtermism or suspect that there’s other life in the universe.
What do you mean? People working specifically to prevent suffering could be called “practically SFE” using the definition here. This includes people working in animal welfare pretty generally, and many of these do not hold principled SFE views. I think there are at least a few people working on s-risks who don’t hold principled SFE views, e.g. some people working at or collaborating with the Center on Long-Term Risk (s-risk-focused AI safety) or Sentience Institute (s-risk-focused moral circle expansion; I think Jacy is a classical utilitarian).
Why is suspecting that there’s other life in the universe relevant? And do you mean the accessible/observable universe?
(I’ve edited this comment a bunch for wording and clarity.)
Thank you (and an anonymous contributor) very much for this!
you made some pretty important claims (critical of SFE-related work) with little explanation or substantiation
If that’s what’s causing downvotes in and of itself, I would want to caution people against it—that’s how we end up in a bubble.
What interpretations are you referring to? When are personal best guesses and metaphysical truth confused?
E.g. in his book on SFE, Vinding regularly cites people’s subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition, but instead of simply sharing his own perspective and those of others, he immediately jumps to normative conclusions. I take issue with that; see below.
Do you mean between “practically SFE” people and people who are neither “practically SFE” nor SFE?
Between “SFE(-ish) people” and “non-SFE people”, indeed.
What do you mean [by “as a result of this deconfusion …”]?
I mean that, if you assume a broadly longtermist stance, then no matter your ethical theory, you should be most worried about humanity not continuing to exist, because life might exist elsewhere and we’re still the most capable species known, so we might be able to help currently unknown moral patients (either far away from us in space or in time).
So in the end, you’ll want to push humanity’s development as robustly as possible to maximize the chances of future good and minimize the chances of future harm. Which projects to give which amount of resources to then seems a question of empirics, or rather epistemics, not ethics.
In practice, we almost never face decisions where we would be sufficiently certain about the possible results for our choices to be dominated by our ethics. We need collective authoring of decisions, and, given moral uncertainty, this decentralized computation seems to hinge on a robust synthesis of points of view. I don’t see a need to appeal to normative theories.
Does that make sense?
I mean that, if you assume a broadly longtermist stance, then no matter your ethical theory, you should be most worried about humanity not continuing to exist, because life might exist elsewhere and we’re still the most capable species known, so we might be able to help currently unknown moral patients (either far away from us in space or in time).
So in the end, you’ll want to push humanity’s development as robustly as possible to maximize the chances of future good and minimize the chances of future harm. Which projects to give which amount of resources to then seems a question of empirics, or rather epistemics, not ethics.
Why do you think SFE(-ish) people would expect working to prevent extinction to be the best way to reduce s-risks, including better than s-risk prioritization research, s-risk capacity building, incidental s-risk-focused work, and agential s-risk-focused work?
The argument for helping aliens depends on their distance from us over time. Under what kinds of credences do you think it dominates other forms of s-risk reduction? Do you think it’s unreasonable to hold credences under which it doesn’t dominate? Or that under most SFE(-ish) people’s credences, it should dominate? Why?
Why would they expect working to prevent extinction to prevent more harm/loss than it causes? It sounds like you’re assuming away s-risks from conflicts and threats, or assuming that we’ll prevent more of the harm from these (and other s-risks) than we’ll cause as we advance technologically, expand and interact with others. Why should we expect this? Getting closer to aliens may increase s-risk overall. EDIT: Also, it assumes that our descendants won’t be the ones spreading suffering.
E.g. in his book on SFE, Vinding regularly cites people’s subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition, but instead of simply sharing his own perspective and those of others, he immediately jumps to normative conclusions. I take issue with that; see below.
I have two interpretations of what you mean:
What should he have done before getting to normative conclusions? Or do you mean he shouldn’t discuss what normative conclusions (SFE views) would follow if we believed these accounts? Should he use more careful language? Give a more balanced discussion of SFE (including more criticism/objections, contrary accounts) rather than primarily defend and motivate it? I think it makes sense to discuss the normative conclusions that come from believing accounts that support SFE in a book on SFE (one at a time, in different combinations, not necessarily all of them simultaneously, for more robust conclusions).
Since you say “see below” and the rest of your comment is about what SFE(-ish) people should do (reduce extinction risk to help aliens later), do you mean specifically that such a book shouldn’t both motivate/defend SFE and make practical recommendations about what to do given SFE, so that these should be done separately?
Intrigued by which part of my comment seems to be dividing reactions. Feel free to PM me with a low-effort explanation. If you want to make it anonymous, drop it here.