I'd argue there's a much lower bar for an option value preference. To have a strong preference for option value, you need only assume that you're not the most informed, most capable person to make that decision.
I do agree that there are people more capable of making that decision than me, and that there will be even more capable people in the future. But I don't believe this is the right way to assess the desirability of option value. I think the more relevant question is: "would the future person or people in power (which, in the case of a 'singleton democracy', may effectively be the average human) be more capable than me?".
I feel unsure whether my morals are better or worse than those of that future person or people, for the following reasons:
The vast majority of moral patients currently are, to my knowledge, invertebrates (excluding potential or unknown sentient beings such as aliens, AI made by aliens, sentient AI that humans have already created unknowingly, microorganisms, etc.). My impression is that the mean moral circle is wider than it was 10 years ago, and that most people's moral circle expands as poverty decreases, personal problems decrease, and free time increases. However, it is unclear to me whether the majority will ever care about "ant suffering", let alone believe that interventions should be carried out. (So this argument can go both ways.)
A similar argument applies to future AI sentients. My impression is that many humans already care somewhat about AI sentience and that this will most likely increase in the future. However, I'm unsure how much people will care if AI sentients mainly take the form of non-communicating computers that have next to nothing in common with humans.
To what extent do you think approaches like AI alignment will protect against s-risks? Or, phrased another way, how often would an unaligned superintelligence result in an s-risk scenario?
Well, I think working on AI alignment could significantly decrease the likelihood of s-risks in which humans are the main ones suffering. So if that's your main concern, then working on AI alignment is the best option (under both your beliefs and mine).
While I don't think the probability of an AGI-caused s-risk is high, I also don't think an AGI will protect or care much about invertebrates or artificial sentience. For example, I don't think an AGI would stop a person from carrying out directed panspermia, or prevent the development of artificial sentience. I think an AGI will most likely have values similar to those of the people who created or control it (which, again, might partly be the whole human adult population).
I'm also worried that if WAW (wild animal welfare) concerns are not spread, nature conservation (or, less likely but even worse, the spreading of nature) will become the enforced value, which could block our attempts to make nature better and ensure that natural suffering continues.
And since you asked for my beliefs about the likelihood, here you go (partly copied from my explanation in Appendix 4):
I put the "probability" of an "AI-misalignment-caused s-risk" pretty low (1%), because, per my previous statements, most AI-misalignment scenarios would be negligible in s-risk terms (as opposed to x-risk terms). Misalignment would only be relevant here if the AI keeps us and/or animals alive "permanently" with net-negative lives (which would most likely require travelling outside the solar system). I also put "how bad the scenario would be" pretty low (0.5), because I think the impact on animals will most likely (though not guaranteed to) be minimal (which technically might mean it would not be considered an s-risk).
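To make the arithmetic behind these two numbers explicit, here is a minimal sketch, assuming a simple probability-times-severity model; the exact weighting scheme in Appendix 4 may differ, and the 0–1 severity scale is an assumption on my part:

```python
# Hypothetical sketch: combining the two estimates above into a single
# expected-badness score, assuming expected badness = probability * severity.
# The exact model used in Appendix 4 may differ.

p_s_risk = 0.01   # "probability" of an AI-misalignment-caused s-risk (1%)
severity = 0.5    # "how bad the scenario would be" (assumed 0-1 scale)

expected_badness = p_s_risk * severity
print(expected_badness)  # 0.005
```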
I want to try to explore some of the assumptions underlying your world model. Why do you think that the world, at the current moment, contains more suffering than pleasure? What forces do you think resulted in this equilibrium?
I would argue that whether the current world is net positive or net negative depends on the experiences of invertebrates, since they make up the majority of moral patients. Most people who care about WAW believe one of the following:
Invertebrates most likely suffer more than they experience pleasure.
It is unclear whether invertebrates experience more suffering or more pleasure.
I'm actually leaning more towards the latter. My guess is there's a 60% probability that they suffer more and a 40% probability that they experience more pleasure.
So the reason I believe the current world is slightly more likely to be net negative is simply that evolution did not take ethics into account. (So the current situation is unrelated to my faith in humanity.)
With all that said, I still think the future is more likely to be net positive than net negative.