I think that this is a really important question. Relatedly, I'd suggest that the BIP framework is best used in combination with the ITN framework/heuristic. In particular, I'd want to always ask not just "What does BIP say about how valuable this change in actors' traits would be?", but also "How tractable and neglected is causing that change?"
But I think that, when asking that sort of question, I'd want to break things down a bit more than just into the three categories of increasing benevolence vs intelligence vs power.
For a start, increasing intelligence and power could sometimes be negative (or at least, that's what this post argues). So we should probably ask how tractable and neglected good benevolence, intelligence, or power increases are. In the case of intelligence and power, this might require increasing only specific types of intelligence and power, or increasing the intelligence and power of only certain actors. That constraint might make good intelligence/power increases less tractable than benevolence increases, even if increasing someone's intelligence/power in some way or other is more tractable.
And then there's also the fact that each of those three factors has many different sub-components, and I'd guess that there'd be big differences in the tractability and neglectedness of increasing each sub-component.
For example, it seems like work to increase how empathetic and peace-loving people are is far less neglected than work to increase how much people care about the welfare of beings in the long-term future. For another example, I'd guess that it's easier to (a) teach someone a bunch of specific facts that are useful for thinking about what the biggest existential risks are and where they should donate if they want to reduce existential risks, than to (b) make someone better at "critical thinking" in a general sense.
So perhaps one factor will be "on average" easier to increase than another factor, but there'll be sub-components of the former factor that are harder to increase than sub-components of the latter factor.
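To make that concrete, here's a toy sketch in Python. The sub-components and "tractability" scores are entirely invented for illustration (they're not estimates of anything); the point is just the structure of the claim:

```python
# Toy illustration only: the sub-components and scores below are invented
# for the sake of the example, not actual estimates.

# Hypothetical "tractability" scores (higher = easier to increase) for
# sub-components of two of the BIP factors.
tractability = {
    "benevolence": {
        "empathy/peace-lovingness": 0.8,       # lots of existing work to build on
        "concern for far-future beings": 0.2,  # far more neglected
    },
    "intelligence": {
        "specific x-risk-relevant facts": 0.6,
        "general critical thinking": 0.3,
    },
}

# Factor-level averages: benevolence (0.50) looks easier to increase than
# intelligence (0.45) "on average"...
for factor, subs in tractability.items():
    print(f"{factor}: average tractability = {sum(subs.values()) / len(subs):.2f}")

# ...but a sub-component of benevolence is still harder to increase than a
# sub-component of intelligence.
print(
    tractability["benevolence"]["concern for far-future beings"]
    < tractability["intelligence"]["general critical thinking"]
)  # True
```

What the real numbers look like is, of course, exactly the kind of empirical question I gesture at below.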
In any case, that's how I'd think about this sort of question. Actually answering it would require more detailed, empirical work. I'm guessing a lot of that work hasn't been done, and that a lot of what has been done hasn't been compiled neatly or brought from academia into EA. I'd be excited to see people fill those gaps!