This is interesting, if a bit blithe about the concept of “moral importance”. What does it mean? In particular, I think the potential upside and the potential downside are often asymmetric in a way that makes “importance” tricky. For example, most people’s donations might be low impact either way, and when donations do have high impact it is usually upside rather than downside. So would you consider moral importance to be defined primarily by the potential upside? How does potential downside (e.g. from a career choice that accelerates AI capabilities and the associated risk) factor in?
A note that this is my subjective perspective, not an objective perspective on morality.
Basically, if you’re concerned about everyday moral decisions, you are almost certainly too worried and should stop worrying.
Focus on the big decisions for a moral life, like career choices.
You are ultimately defined morally by a few big actions, not everyday choices.
This means doing quite a bit of research to find opportunities, and mostly disregarding your intuition.
On downside risk: a few interventions are almost certainly very bad, and avoiding those, plus cutting the most toxic incentives out of your life, is enough. Don’t fret about everyday evils.
Sorry, I don’t understand the trickiness in ‘[...] in a way that makes “importance” tricky’.
In my mind, I would basically take the expected improvement from thinking through a decision in more detail as the scale of its importance. Here, realizing that I could avoid potential harms by not accelerating AI capabilities could be a large win.
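Concretely, the kind of comparison I have in mind is sketched below; all probabilities and values are made up purely for illustration, not estimates of anything.

```python
# Minimal sketch (hypothetical numbers) of "importance = expected improvement
# from deliberating": compare the expected value of the choice I'd make by
# default with the choice I'd make after thinking it through.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs; values in arbitrary units."""
    return sum(p * v for p, v in outcomes)

# Donation-like decision: usually low impact, occasional large upside, little downside.
donation_default    = [(0.9, 0.0), (0.1, 1.0)]
donation_deliberate = [(0.7, 0.0), (0.3, 10.0)]

# Career-like decision: deliberating lets me avoid a rare but large potential harm
# (e.g. noticing that a role accelerates AI capabilities and choosing a different one).
career_default    = [(0.85, 1.0), (0.10, 10.0), (0.05, -100.0)]
career_deliberate = [(0.85, 1.0), (0.14, 10.0), (0.01, -100.0)]

for name, default, deliberate in [
    ("donation", donation_default, donation_deliberate),
    ("career",   career_default,   career_deliberate),
]:
    importance = expected_value(deliberate) - expected_value(default)
    print(f"{name}: expected improvement from deliberating = {importance:.2f}")
```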
This seems to treat harms and benefits symmetrically. Where does the asymmetry enter? (maybe I am thinking of a different importance scale?)