This is great! I was trying to think through some of my own projects with this framework, and I realized I think half of the equation is missing: the part related to the memetic qualities of the tool.
1. How “symmetric” is the thing I’m trying to spread? How easy is it to use for a benevolent purpose compared to a malevolent one?
2. How memetic is the idea? How likely is it to spread from a benevolent actor to a malevolent one?
3. How contained is the group I’m sharing with? Outside of the memetic factors of the idea itself, is the person or group I’m sharing it with likely to spread it, or keep it contained?
(My opinions, not necessarily Convergence’s, as with most of my comments)
Glad to hear you liked the post :)
Your comment also reminds me that we wrote a post focused on “memetic downside risks”, which you might find interesting.
To address your points more directly: I’d say the BIP framework outlined in this post can capture a very wide range of considerations, but doesn’t highlight them all explicitly, and isn’t the only framework available. For many decisions, it will be more useful to apply another framework or heuristic instead of, or in addition to, BIP, even if BIP could capture the relevant considerations.
As an example, here’s a sketch of how I think BIP could capture your points:
1. If the idea you’re spreading is easier to use for a benevolent purpose than a malevolent one, this likely means it increases the “intelligence” or “power” of benevolent actors more than of malevolent ones (which would be a good thing). This is because this post defines intelligence in relation to what would “help an actor make and execute plans that are aligned with the actor’s moral beliefs or values”, and power in relation to what would “help an actor execute its plans”. Thus, the more useful an intervention is for an actor, the more it increases their intelligence and/or power.
2. If an idea increases the intelligence or power of whoever receives it, it’s best to target it at relatively benevolent actors. If the idea is likely to spread in hard-to-control ways, then it’s harder to target, and it’s more likely you’ll also increase the intelligence or power of malevolent actors, which is risky/negative. This could explain why a more “memetically fit” idea could be riskier to spread (there’s a toy sketch of this logic just after this list).
3. This is similar to point 2, but with the added observation that, if spreading the idea would be harmful, then actors who are more likely to spread it must presumably be less benevolent (if they foresee the harmful consequences but don’t care) or less intelligent (if they don’t foresee those consequences). This pushes against increasing those actors’ power, and possibly against increasing their intelligence (depending on the specifics).
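To make point 2 a bit more concrete, here’s a minimal expected-value sketch. To be clear, this is purely my own illustrative construction: the parameter names, the linear functional form, and the example numbers are assumptions I’m making up for this comment, not anything from the post itself.

```python
# Toy model (illustrative assumptions only): spreading an idea boosts the
# power of whoever receives it, so its expected impact depends on how much
# of the idea's "reach" lands on benevolent vs. malevolent actors. Higher
# memetic fitness means more reach escapes the targeted benevolent group.

def expected_impact(power_boost: float,
                    spread_beyond_target: float,
                    frac_malevolent_outside: float) -> float:
    """Expected impact of spreading an idea (all parameters hypothetical).

    power_boost: how much the idea increases a recipient's power (>= 0).
    spread_beyond_target: share of the idea's reach that escapes the
        targeted benevolent group (0 = fully contained, 1 = fully escaped).
    frac_malevolent_outside: fraction of the wider population that would
        use the extra power malevolently.
    """
    reach_target = 1.0 - spread_beyond_target   # stays with benevolent actors
    reach_outside = spread_beyond_target        # escapes to the wider population
    benefit = power_boost * (reach_target
                             + reach_outside * (1 - frac_malevolent_outside))
    harm = power_boost * reach_outside * frac_malevolent_outside
    return benefit - harm

# Holding the idea's usefulness fixed, a more memetically fit idea
# (higher spread_beyond_target) has lower expected impact whenever some
# actors outside the targeted group are malevolent:
print(expected_impact(1.0, 0.1, 0.3))  # mostly contained: 0.94
print(expected_impact(1.0, 0.9, 0.3))  # spreads widely:   0.46
```

The only point of this sketch is that, with the idea’s usefulness held fixed, expected impact falls as the idea escapes the targeted group, which is the sense in which memetic fitness can make spreading riskier.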
But all that being said, if I were considering an action that has its impact primarily through the spread of information and ideas, I might focus more on concepts like memetic downside risks and information hazards than on the BIP framework. (Or I might use them together.)
Finally, I do think it could make sense for future work to create variations or extensions of the BIP framework that more explicitly incorporate other considerations, or that make it more useful for different types of decisions. And integrating the BIP framework with ideas from memetics could be one good way to do that.
EDIT: I’ve now made some edits to this post (described in my reply to Carl Shulman’s comment) that might go a little way towards making this sort of thing more explicit.