(My opinions, not necessarily Convergence's, as with most of my comments)
Glad to hear you liked the post :)
One thing your comment makes me think of is that we actually also wrote a post focused on "memetic downside risks", which you might find interesting.
To more directly address your points: I'd say that the BIP framework outlined in this post is able to capture a very wide range of things, but doesn't highlight them all explicitly, and is not the only framework available for use. For many decisions, it will be more useful to use another framework/heuristic instead or in addition, even if BIP could capture the relevant considerations.
As an example, here's a sketch of how I think BIP could capture your points:
1. If the idea you're spreading is easier to use for a benevolent purpose than a malevolent one, this likely means it increases the "intelligence" or "power" of benevolent actors more than of malevolent ones (which would be a good thing). This is because this post defines intelligence in relation to what would "help an actor make and execute plans that are aligned with the actor's moral beliefs or values", and power in relation to what would "help an actor execute its plans". Thus, the more useful an intervention is for an actor, the more it increases their intelligence and/or power.
2. If an idea increases the intelligence or power of whoever receives it, it's best to target it to relatively benevolent actors. If the idea is likely to spread in hard-to-control ways, then it's harder to target, and it's more likely you'll also increase the intelligence or power of malevolent actors, which is risky/negative. This could explain why a more "memetically fit" idea could be riskier to spread.
3. Similar to point 2, but with the added observation that, if it'd be harmful to spread the idea, then actors who are more likely to spread it must presumably be less benevolent (if they don't care about the right consequences) or less intelligent (if they don't foresee those consequences). This pushes against increasing those actors' power, and possibly against increasing their intelligence (depending on the specifics).
But all that being said, if I were considering an action that has its impacts primarily through the spread of information and ideas, I might focus more on concepts like memetic downside risks and information hazards, rather than the BIP framework. (Or I might use them together.)
Finally, I do think it could make sense for future work to create variations or extensions of the BIP framework that more explicitly incorporate other considerations, or that make it more useful for different types of decisions. And integrating the BIP framework with ideas from memetics could be one good way to do that.
EDIT: I've now made some edits to this post (described in my reply to Carl Shulman's comment) that might go a little way towards making this sort of thing more explicit.