I don’t buy the argument that AI safety is in some way responsible for dangerous AI capabilities. Even if the concept of AI safety had never been raised, I’m pretty sure AI orgs would still have popped up.
Also, yes, it is possible that working on AI safety could limit AI and be a catastrophe in terms of lost welfare, but I still think AI safety work is net positive in expectation, given Bostrom’s astronomical waste argument and genuine concerns about AI risk from experts.
The key point here is that cluelessness doesn’t arise just because we can think of ways an intervention could be both good and bad—it arises when we really struggle to weigh these competing effects. In the case of AI safety, I don’t struggle to weigh them.
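To put that more concretely, here is a toy expected-value sketch (the symbols and framing are mine, and any numbers would be purely illustrative rather than estimates I’d defend). If $p$ is the probability that AI safety work averts a catastrophic outcome with disvalue $H$, and $q$ is the probability that it needlessly limits AI with lost welfare $L$, then the work looks net positive whenever

$$\mathbb{E}[V] \approx p \cdot H - q \cdot L > 0.$$

Cluelessness, as I understand it, is when we cannot even sign this expression. My claim is that for AI safety the astronomical scale of $H$, and the expert concern feeding into $p$, let me sign it even under a lot of uncertainty about the individual terms.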
Expanding the moral circle, for me, means expanding it to anything that is sentient or has the capacity for welfare.
As for investing for the future, you can probably mitigate those risks. Again, though, my point stands: even if that is a legitimate worry, I can try to weigh that risk against the benefit. I personally feel fine concluding that, overall, investing funds ‘promised for altruistic purposes’ for future use seems net positive in expectation. We can debate that point of course, but that’s my assessment.
I think at this point we can amicably disagree, though I’m curious why you think the ‘more people = more animals exploited’ logic applies to people in Africa but not to people in the future. One might hope that we learn to do better, but that hope could be applied, and criticised, in either scenario.
I do worry about future animal suffering. It’s partly for that reason that I’m less concerned about reducing risks of extinction than about reducing other existential risks that would result in large amounts of suffering in the future. This informed some of my choices of interventions that I am ‘not clueless about’. E.g.:
Technical AI alignment / AI governance and coordination research: it has been suggested that misaligned AI could be a significant s-risk.
Expanding our moral circle: relevance to future suffering should be obvious.
Global priorities research: this just seems robustly good; it’s hard to see how increasing moral understanding could be bad.
Research into consciousness: seems really important in light of the potential risk of future digital minds suffering.
Research into improving mental health: this has intrinsic worth, and I don’t see a clear link to increased future suffering (in fact, I lean towards thinking happier people and societies are less likely to act in morally outrageous ways).
I do lean towards thinking reducing extinction risk is net positive in expectation too, but I am quite uncertain about this and I don’t let it motivate my personal altruistic choices.