I found this post really interesting—thank you!
One question I have after reading concerns the tractability of increasing benevolence, intelligence, and power. I get the sense that increasing benevolence might be the least tractable of the three (though 80,000 Hours seems to think it may still be worth pursuing), and I’m less sure how intelligence and power compare. (I’m inclined to think intelligence is somewhat more tractable than power, but I’m highly uncertain about that.)
I think that this is a really important question. Relatedly, I’d suggest that the BIP framework is best used in combination with the ITN framework/heuristic. In particular, I’d want to always ask not just “What does BIP say about how valuable this change in actors’ traits would be?”, but also “How tractable and neglected is causing that change?”
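To make that combination concrete, here’s a toy sketch of scoring candidate trait changes by a BIP-style value estimate times tractability times neglectedness. (The names, numbers, and the multiplicative form are all just my assumptions for illustration; this isn’t from the post or from 80,000 Hours.)

```python
from dataclasses import dataclass

@dataclass
class TraitChange:
    name: str
    bip_value: float      # rough BIP-implied value of causing this change
    tractability: float   # 0..1: how easy the change is to cause
    neglectedness: float  # 0..1: how little effort already targets it

def priority(change: TraitChange) -> float:
    """Multiplicative combination, echoing the structure of the ITN heuristic."""
    return change.bip_value * change.tractability * change.neglectedness

candidates = [
    TraitChange("broad empathy / peace-lovingness", 5, 0.4, 0.1),
    TraitChange("concern for beings in the long-term future", 7, 0.3, 0.8),
    TraitChange("general critical thinking", 6, 0.1, 0.3),
]

# Rank candidate trait changes from highest to lowest priority score.
for c in sorted(candidates, key=priority, reverse=True):
    print(f"{c.name}: {priority(c):.2f}")
```

The point is just that a change which looks great on BIP grounds alone can drop down the ranking once tractability and neglectedness are factored in.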
But when asking that sort of question, I think I’d want to break things down a bit further than the three broad categories of increasing benevolence vs increasing intelligence vs increasing power.
For a start, increasing intelligence and power could sometimes be net negative (or at least, that’s what this post argues). So we should probably ask how tractable and neglected good benevolence, intelligence, or power increases are. For intelligence and power, that might mean increasing only specific types of intelligence or power, or increasing the intelligence and power of only certain actors. Those restrictions could reduce the tractability of good intelligence/power increases, potentially making them less tractable than benevolence increases, even if increasing someone’s intelligence or power in some way is more tractable.
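To make that first point concrete, here’s a toy expected-value calculation (the probabilities and payoffs are entirely made up for illustration):

```python
# Toy sketch (all numbers made up): why a power increase can be
# net negative in expectation, even when the empowered actor is
# probably benevolent.
p_benevolent = 0.6          # chance the empowered actor uses the power well
value_if_benevolent = 10.0  # value created if they do
value_if_not = -20.0        # harm done if they don't

ev = p_benevolent * value_if_benevolent + (1 - p_benevolent) * value_if_not
print(ev)  # -2.0
```

This is also why restricting increases to certain actors (those likely enough to use the power well) can flip the sign, at some cost to tractability.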
There’s also the fact that each of those three factors has many different sub-components, and I’d guess there’d be big differences in the tractability and neglectedness of increasing each one.
For example, it seems like work to increase how empathetic and peace-loving people are is far less neglected than work to increase how much people care about the welfare of beings in the long-term future. For another example, I’d guess that it’s easier to (a) teach someone a bunch of specific facts that are useful for thinking about what the biggest existential risks are and where they should donate if they want to reduce existential risks, than to (b) make someone better at “critical thinking” in a general sense.
So perhaps one factor will be easier to increase “on average” than another, while still having sub-components that are harder to increase than some of the other factor’s sub-components.
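As a toy numerical illustration (the tractability scores are made up):

```python
# Made-up tractability scores (0..1) for sub-components of two factors.
intelligence = {
    "specific x-risk facts": 0.8,
    "general critical thinking": 0.2,
}
benevolence = {
    "empathy / peace-lovingness": 0.5,
    "concern for far-future beings": 0.4,
}

def avg(scores):
    return sum(scores.values()) / len(scores)

print(avg(intelligence), avg(benevolence))  # 0.5 vs 0.45
# Intelligence is easier to increase "on average", yet its
# "general critical thinking" sub-component (0.2) is harder to
# increase than either benevolence sub-component.
```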
Anyway, that’s how I’d think about this sort of question. Actually answering it would require more detailed and empirical work. I’m guessing a lot of that work hasn’t been done yet, and a lot of what has been done hasn’t been compiled neatly or brought from academia into EA. I’d be excited to see people fill those gaps!