- I’m somewhat skeptical of the value of option value, because it assumes that humans will do the right thing.
I’d argue there’s a much lower bar for an option value preference. To have a strong preference for option value, you need only assume that you’re not the most informed, most capable person to make that decision.
As an intuition pump: this is (in my opinion) a good reason why doctors recommend young people wait before getting a vasectomy. That person can use other forms of contraception while handing the decision to someone who might be better informed (i.e. themselves in 10 years' time).
Partly because of physical ageing, we don’t often encounter large option-value differences in our personal lives: the choices available to us tend to be close to equal on option value. But this isn’t the case when we make decisions with implications for the long-term future.
But I do think that the likelihood and scale of certain “lock-in” net-negative futures could make working on s-risk, directly or indirectly, more impactful.
To what extent do you think approaches like AI-alignment will protect against s-risks? Or, phrased another way, how often will unaligned superintelligence result in an s-risk scenario?
______________
I want to try to explore some of the assumptions underlying your world model. Why do you think that the world, at our current moment, contains more suffering than pleasure? What forces do you think produced this equilibrium?
I’d argue there’s a much lower bar for an option value preference. To have a strong preference for option value, you need only assume that you’re not the most informed, most capable person to make that decision.
I do agree that there are more capable people than me to make that decision, and that there will be even more capable people in the future. But I don’t believe that is the right assessment of the desirability of option value. I think the more correct question is: “would the future person or people in power (which, in the case of a ‘singleton democracy’, may mean the opinion of the average human) be more capable than me?”.
I feel unsure whether my morals will be better or worse than that future person or people because of the following:
The vast majority of moral patients currently, to my knowledge, are invertebrates (excluding potential/unknown sentient beings like aliens, AI made by aliens, sentient AI that humans have already made unknowingly, microorganisms, etc.). My impression is that the mean moral circle is wider than it was 10 years ago, and that most people’s moral circle expands with decreasing poverty, fewer personal problems and more free time. However, it is unclear to me whether the majority will ever care about “ant-suffering” and believe that interventions should be done. (So this argument can go both ways.)
A similar argument can be made for future AI sentients. My impression is that a lot of humans care somewhat about AI sentients, and that this will most likely increase in the future. However, I’m unsure how much people will care if AI sentients mainly take the form of non-communicating computers that have next to nothing in common with humans.
To what extent do you think approaches like AI-alignment will protect against s-risks? Or, phrased another way, how often will unaligned superintelligence result in an s-risk scenario?
Well, I think working on AI-alignment could significantly decrease the likelihood of s-risks where humans are the main ones suffering. So if that’s your main concern, then working on AI-alignment is the best option (under both your beliefs and mine).
While I don’t think the probability of an “AGI-caused s-risk” is high, I also don’t think the AGI will protect or care much about invertebrates or artificial sentience. E.g. I don’t think the AGI will stop a person from doing directed panspermia, or prevent the development of artificial sentience. I think the AGI will most likely have values similar to those of the people who created or control it (which might, again, be (partly) the whole human adult population).
I’m also worried that if WAW (wild animal welfare) concerns are not spread, nature conservation (or, less likely but even worse, the spread of nature) will be the enforced value, which could prevent our attempts to make nature better and ensure that natural suffering continues.
And since you asked for my beliefs about the likelihood, here you go (partly copied from my explanation in Appendix 4):
I put the “probability” of an “AI-misalignment-caused s-risk” pretty low (1 %), because, per my previous statements, most AI-misalignment scenarios will be negligible (talking about s-risk, not x-risk). It would only be relevant here if AI keeps us and/or animals alive “permanently” to have net-negative lives (which would most likely require travelling outside the solar system). I also put “how bad the scenario would be” pretty low (0.5), because I think the impact on animals will most likely (but not guaranteed) be minimal (which technically might mean it would not be considered an s-risk).
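If it helps to make the scoring concrete, the two numbers above can be read as a simple expected-badness product. Note that this multiplicative reading is my own interpretation, not necessarily the exact formula used in Appendix 4:

```python
# Rough expected-badness sketch for the "AI-misalignment-caused s-risk"
# estimate above. The multiplicative model (probability x severity) is an
# assumed simplification; the two input numbers are the ones stated in the text.
p_misalignment_s_risk = 0.01   # "probability": pretty low (1 %)
severity = 0.5                 # "how bad the scenario would be": pretty low

# Expected contribution of this scenario to overall s-risk concern,
# on the same arbitrary 0-1 scale as the inputs.
expected_badness = p_misalignment_s_risk * severity
print(expected_badness)  # 0.005
```

Under this reading, the scenario contributes very little expected disvalue compared with, say, a 10 % scenario of the same severity, which is why it ranks low in my prioritisation.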
I want to try to explore some of the assumptions underlying your world model. Why do you think that the world, at our current moment, contains more suffering than pleasure? What forces do you think produced this equilibrium?
I would argue that whether the current world is net positive or net negative depends on the experiences of invertebrates, since they make up the majority of moral patients. Most people who care about WAW believe one of the following:
- That invertebrates most likely suffer more than they experience pleasure.
- That it is unclear whether invertebrates suffer or experience pleasure more.
I’m actually leaning more towards the latter. My guess is there’s a 60 % probability that they suffer more and a 40 % probability that they experience more pleasure.
So the cause of my belief that the current world is slightly more likely to be net negative is simple: evolution did not take ethics into account. (So the current situation is unrelated to my faith in humanity.)
With all that said, I still think the future is more likely to be net positive than net negative.