(My opinions, not necessarily Convergence's, as with most of my comments)
Glad to hear you liked the post :)
One thing your comment makes me think of is that we actually also wrote a post focused on "memetic downside risks", which you might find interesting.
To more directly address your points: I'd say that the BIP framework outlined in this post can capture a very wide range of things, but doesn't highlight them all explicitly, and is not the only framework available for use. For many decisions, it will be more useful to apply another framework or heuristic instead, or in addition, even if BIP could capture the relevant considerations.
As an example, here's a sketch of how I think BIP could capture your points:
1. If the idea you're spreading is easier to use for a benevolent purpose than a malevolent one, this likely means it increases the "intelligence" or "power" of benevolent actors more than of malevolent ones (which would be a good thing). This is because this post defines intelligence in relation to what would "help an actor make and execute plans that are aligned with the actor's moral beliefs or values", and power in relation to what would "help an actor execute its plans". Thus, the more useful an intervention is for an actor, the more it increases their intelligence and/or power.
2. If an idea increases the intelligence or power of whoever receives it, it's best to target it to relatively benevolent actors. If the idea is likely to spread in hard-to-control ways, then it's harder to target it, and it's more likely you'll also increase the intelligence or power of malevolent actors, which is risky/negative. This could explain why a more "memetically fit" idea could be more risky to spread.
3. Similar to point 2, but with the addition of the observation that, if it'd be harmful to spread the idea, then actors who are more likely to spread it must presumably be less benevolent (if they don't care about the right consequences) or less intelligent (if they don't foresee those consequences). This pushes against increasing those actors' power, and possibly against increasing their intelligence (depending on the specifics).
But all that being said, if I were considering an action that has its impacts primarily through the spread of information and ideas, I might focus more on concepts like memetic downside risks and information hazards, rather than the BIP framework. (Or I might use them together.)
Finally, I do think it could make sense for future work to create variations or extensions of this BIP framework that incorporate other considerations more explicitly, or that make it more useful for different types of decisions. And integrating the BIP framework with ideas from memetics could be one good way to do that.
EDIT: I've now made some edits to this post (described in my reply to Carl Shulman's comment) that might go a little way towards making this sort of thing more explicit.