Good altruistic decision-making as a deep basin of attraction in meme-space
Epistemic status: this is a model I’ve been playing around with that I think captures some important dynamics, but hasn’t had that much attention going into how it lines up with messy empirics.
In memetic selection, there’s an evolutionary benefit to belief systems which include a directive to spread that system of beliefs further. This can attract others to the belief system and support growth, at least for a while, until the easy growth is exhausted or the belief system mutates.
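As a loose illustration of that dynamic, here is a toy simulation (my own sketch, not anything from empirical memetics): a small group of believers carries a “spread this” directive, conversions get harder as the susceptible population is used up, and some converts inherit a mutated version without the directive. All of the names and parameter values (POPULATION, SPREAD_RATE, MUTATION_RATE) are invented for illustration.

```python
import random

random.seed(0)

POPULATION = 10_000   # assumed size of the susceptible population
SPREAD_RATE = 0.3     # chance an evangelising believer converts a contact per step
MUTATION_RATE = 0.02  # chance a convert drops the "spread this" directive

# believers[True] carry the spread directive; believers[False] hold the
# beliefs but don't pass them on.
believers = {True: 10, False: 0}

for step in range(60):
    total = believers[True] + believers[False]
    susceptible_fraction = max(0.0, 1 - total / POPULATION)
    # Only believers with the directive generate conversions, and conversions
    # get harder as the easy growth is used up.
    new_converts = sum(
        random.random() < SPREAD_RATE * susceptible_fraction
        for _ in range(believers[True])
    )
    for _ in range(new_converts):
        keeps_directive = random.random() > MUTATION_RATE
        believers[keeps_directive] += 1

# Growth flattens as susceptibles run out and directive-less mutants accumulate.
print(believers)
```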
I want to describe such belief systems as “explicit attractors” in meme-space, in the sense of attractors in dynamical systems, although I’m using this as a slightly imprecise metaphor:
We can’t really consider one person’s beliefs as a dynamical system, because there is too much dependence upon others’ beliefs
Indeed it is this dependence upon others’ beliefs which provides most of what I’m wanting to refer to when I say “attractors”
If we consider the space of all global beliefs, there could be poles where everyone buys into the same belief system
If one of these was an attractor, it would follow that if we started in the right basin we would converge towards this global agreement
I want to be able to refer to a weaker form of “attractor”, where being in some proximity to one of these poles exerts pressure towards the pole, but to admit the possibility that the trajectory gets derailed or the pressure is eventually counteracted, without giving up on calling the thing an attractor (see the toy sketch just after this list)
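To make the weaker sense of “attractor” concrete, here is a minimal sketch (my own toy model, with made-up names and values like POLE, BASIN_RADIUS, PULL and NOISE): a single coordinate standing in for proximity to one of the poles gets pulled towards the pole only when it is already close enough, while noise from everything else can counteract the pull or derail the trajectory entirely.

```python
import random

random.seed(1)

POLE = 1.0          # the "everyone buys into the belief system" pole
BASIN_RADIUS = 0.6  # within this distance the pole exerts pressure (assumed)
PULL = 0.1          # strength of the pressure towards the pole
NOISE = 0.05        # drift from everything else going on

def step(x):
    """One update of a crude 'proximity to the pole' coordinate."""
    if abs(POLE - x) < BASIN_RADIUS:
        x += PULL * (POLE - x)       # pressure towards the pole...
    x += random.gauss(0, NOISE)      # ...which can be counteracted or derailed
    return x

x = 0.5  # start inside the basin
for _ in range(200):
    x = step(x)

print(f"start=0.50, end={x:.2f}")  # usually near the pole, but not guaranteed
```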
I think for some explicit attractors, the directive to spread the beliefs further is very natural (perhaps it could be derived as a consequence of other parts of the beliefs); for others it might feel more arbitrary or bolted-on. I guess we should expect the relatively natural cases to be a bit more robust (hence perhaps memetically successful): they can’t easily lose the attractor property. This is an instance of a self-correction mechanism, which helps to prevent drift.
I think we see another kind of self-correction mechanism in the belief system of science. It provides tools for recognising truth and discarding falsehood, as well as cultural impetus to do so; this leads not just to the propagation of existing scientific beliefs, but to the systematic upgrading of those beliefs; this isn’t drift, but going deeper into the well of truth.
It seems to me like at a certain point of sophistication-of-thinking, good altruistic decision-making has a very natural version of the explicit attractor property: when there’s enough self-awareness, the decision-makers will realise that more good altruistic decision-making would be a very good thing on altruistic grounds, so it becomes valuable to spread that perspective. And at a certain point of truth-tracking, they will realise that having true beliefs is important in the pursuit of the good, and will strive to have more true beliefs about which decisions lead to good outcomes. This gives it something like the deep self-correction/improvement mechanism that science has.
I think that this is a very good place to be in memetic fundamentals. There’s a lot of texture about what ultimately makes a good meme, what people find attractive, memorable, etc., and important details to be honed there; but it’s certainly helpful if your idea set can come with natural drives towards self-propagation and self-improvement.
I believe that this should make us optimistic about the future potential of the idea set around effective altruism: not necessarily under that brand, and not necessarily with particular current beliefs (as they may be proven wrong), but the fundamentals of seeking to help others, and doing this by seeking truer beliefs and building a larger coalition of people bought into the same basic goals. (Indeed I started this train of thought by wondering “what is the kernel of effective altruism?”, in the sense of looking for a minimal memeplex which would be self-propagating and self-improving, and eventually capture the goods that EA is achieving.)
So what? I think this gives pretty strong reason to favour the spread of truth-seeking, self-aware, altruistic decision-making (whether or not it’s attached to the “EA” brand). In particular it gives some reason to make sure that we’re keeping all of those elements: altruistic, truth-seeking, and self-aware (and not dropping one for the sake of expedience).
It might seem like it becomes particularly important to know when things are on the right side of the boundary such that they’re just self-aware and just truth-seeking enough to eventually end up descending into the well (on the green side of the red boundary in my sketch below).
I think there’s something to that, but it’s unclear how important the precise location of the boundary is (at least after we have enough people solidly within the basin). Pulling people deeper in could help to accelerate their useful work (e.g. suppose you need to undertake a certain amount of successful truth-seeking before you can make your best altruistic decisions, as denoted by the purple region in the sketch). And if there is a substantial amount of work from people within the basin aiming to bring others in, then moving people in the blue zone just a bit closer to the boundary could be helpful for their subsequent passage over it, even if they don’t get there now. (It also seems quite possible that people closer to being in the basin will tend to do more useful work than people further from it, although I’m not sure about that.)
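The intuition in the last two paragraphs can be sketched as another toy simulation (again my own illustration, with invented thresholds BOUNDARY and WORK_DEPTH and rates SELF_IMPROVE and OUTREACH, loosely standing in for the red boundary and purple region of the sketch): agents past the boundary keep descending into the well under their own steam, and their outreach nudges everyone else a little closer, so some who start outside cross over later.

```python
import random

random.seed(2)

BOUNDARY = 0.5       # "red boundary": past this, self-improvement kicks in
WORK_DEPTH = 0.8     # "purple region": depth needed before the best work happens
SELF_IMPROVE = 0.05  # how much agents inside the basin deepen per period
OUTREACH = 0.01      # pull exerted on outsiders, scaled by the insider fraction

# Each agent is summarised by a single "depth of truth-seeking/self-awareness".
agents = [random.uniform(0.0, 1.0) for _ in range(100)]

for period in range(30):
    insiders = sum(a > BOUNDARY for a in agents)
    nudge = OUTREACH * insiders / len(agents)   # more insiders => stronger inward pull
    agents = [
        min(1.0, a + SELF_IMPROVE) if a > BOUNDARY  # descending into the well
        else min(1.0, a + nudge)                    # moved a bit closer to the boundary
        for a in agents
    ]

best_work = sum(a >= WORK_DEPTH for a in agents)
print(f"{best_work} of {len(agents)} agents are past the 'best work' depth")
```

On this picture, the exact location of the boundary matters less once there are enough insiders, because their outreach keeps moving the rest of the distribution towards (and eventually over) it.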