I agree something about influence is important. As a counterpoint, though, I think many manifestations of “having influence” don’t store well (e.g. the fact that, at a given time, a relatively large number of EAs have an “influential role” (whatever that means exactly) is only weakly related to how many EAs will have an influential role at time t+1, say a generation later).
Wrt accumulation, influence also seems less straightforward to grow than, e.g., money (and to a lesser extent knowledge), which, thanks to interest rates, accumulates at a certain rate basically for free (without you having to do anything) and fairly robustly. I’m not saying that influence is clearly a worse investment than money when it comes to future impact potential, but money is a pretty good and stable baseline that might not be as easy to beat as one might think at first sight. Also, approaches to using “influence” to store and accumulate impact potential will vary a lot on these dimensions, so we’d probably want to talk about such approaches in the concrete rather than in the abstract.
> under your framework, community building is also an intervention for patient longtermism
+1, and also worth flagging that e.g. Philip Trammell explicitly says so in his work on patient longtermism (though he clarifies that this is only true for specific types of community building).
This made me think of the way David Deutsch talks about knowledge creation, where knowledge manifests physically in, e.g., the way a species is adapted to its niche. The process of natural selection that led to this adaptation is a process of “exploration” and “error correction” that accumulates knowledge; the degree of adaptation is the physical manifestation of that knowledge. DNA is an important substrate of this process. However, I expect that DNA won’t be the most fruitful level of abstraction at which to think about the patient longtermist question.
Still, to explore this framework a bit more: re accumulation, one potential implication is that we might want to pay attention to the “error correction” mechanism that is essential to knowledge accumulation. The scientific method is an example of this. We could try to improve the “machinery of science” that is based on this error correction logic, and we could try to apply this logic of error correction (more/better) to more areas beyond academia. Some examples here might be ways to make it easier to have constructive disagreements (e.g. adversarial collaborations, the Letter community, a hypothetical wiki that is structured in a way that shows the main disagreeing viewpoints on a topic, …) or more experimentation/evaluation/updating mechanisms, in particular in policy making. (Some areas, e.g. business or medicine, have figured out a lot about how to do these sorts of things, but for various reasons these insights are not necessarily being applied as widely as they could be.)
Moral progress
I largely agree with your assessment that, and how, automation puts a lot of pressure on the fate of democracy (although, as you acknowledge, there are ways automation could strengthen democracy, and how this cashes out sure seems like it’s subject to strong path dependency).
When we compare pre-industrial times to post-industrial times, it is not only our economy and our arsenal of technologies that are different. Within these ~200-300 years, humanity has also undergone meaningful intellectual and moral progress. This includes things like coming to think that women and people of colour are full members of society, or spelling out values such as freedom, self-realization, etc. If automation leads to power being concentrated in the hands of a small elite, this also means that the beliefs and values of this elite become more important.
Of course, if their moral ideals stand in stark conflict with their other interests, e.g. economic ones, we should expect they will just throw most of these ideals overboard or engage in elaborate rationalizations to pretend they are still holding them up high. But if the conflict of interest remains relatively weak, I do think this might be a factor that plays a role.
What plausible outside views are there? How much to rely on which?
Here is another possible outside view one could take. Under this view, the question of how societies govern themselves is subject to evolutionary dynamics. (You allude to this a bit in one of your footnotes, when talking about economic determinism.) Different societies adopt different approaches, and societies with better approaches are more successful and become more dominant. Less successful societies either cease to exist or adopt the better approaches by imitation. Based on this view, we can identify “evolutionary pressures” and know some things about where these pressures are likely to steer us in the future. (Obviously we still don’t know exactly where this development leads us, but the space of possible developments is in fact constrained by these co-evolutionary dynamics.)
What specifically might “fitness” look like here? Taking a perspective as roughly outlined in this paper, we could posit that in order for a species to grow ever larger in scale, it requires (what in the paper is called) information processing capacity. Democracy (or government/the policy-making apparatus at large) can be viewed as essentially such an information processing technology, and thus adaptive/fitness-enhancing. Given the size and complexity of present-day societies, it does look like the largely top-down information processing technology of an authoritarian regime would be less adaptive.
One can argue that democracy is a “successful adaptation” and is thus likely to stick around. Maybe this is true, but I think this argument is much harder to make than the one I’ve offered above, and I’m not actually sure it stands. Reasons why this isn’t straightforward include that the evolutionary dynamics described above are not very pure (compared to “proper” Darwinian natural selection), and that the environmental conditions within which the process unfolds are changing drastically, which could for example mean that adaptations that were fitness-enhancing in the past won’t be in the future.
The reason I do bring this argument up, however, is that I think it suggests we shouldn’t pay much attention to “regression to the mean” type arguments. I agree this is a reasonable prior to use, but I think we know enough about the territory that we shouldn’t rely much on it.
(I don’t necessarily think you do (though I don’t know). This is to say, I can see how you might get to the 4 in 5 prediction without invoking a “regression to the mean” type argument, but solely by looking at the arguments you have, for example, laid out in your section on automation.)
If democracy retreats, what will it be replaced by?
A lot of the time, people assume a natural dichotomy between democracy and authoritarian regimes. While this is certainly a useful shorthand when looking at history, I think it is likely to be misleading when thinking about the future.
This “false dichotomy” between democracy and authoritarian regimes often contrasts “my values and needs are adequately taken into account” (<> democracy) with “my values and needs basically don’t matter” (<> authoritarian regimes). By putting these things into the same bucket, we might overlook ways in which these connections might come apart.
For example, I might not inherently care about whether I will be able to directly or indirectly choose my political leader, but I definitely care about how well my values and needs will be taken into account in the process that steers my society into alternative futures.
Relatedly, discussions about democracy are often just as much about “democratic values” (e.g. liberty, equality, justice) as they are about “the process of choosing our own leaders”.
I’d be curious whether your prediction about whether democracy will still be around in one thousand years largely overlaps with your prediction about, say, “will an average person in a thousand years from now feel like their values and needs are adequately taken into account by whoever or whatever is making decisions about how their society is being governed?”. (Of course, other operationalizations might be interesting, too).
The latter is much harder to predict, and democracy as you defined it might be the correct way of approaching the latter question. That said, understanding more about how likely they are to come apart, and if so how, seems potentially interesting.
Thanks, I enjoyed reading this.
Here are a few thoughts; they aren’t meant as critiques of things you say, but simply as thoughts triggered by, building on, or attempting to complement your analysis.
Neat, thanks Max!
Some thoughts on Patient Longtermism
Patient Longtermism as a benchmark
Meta: I haven’t seen this framing spelt out in these terms and think it’s a useful way of integrating considerations raised by patient longtermism into one overall EA worldview.
The considerations elucidated by patient longtermism, namely that our resources can “go further” in the future, are important. There is an analogy here to Singer’s drowning child argument, which says that, all else equal, you shouldn’t have a preference between helping someone who is spatially close to you and someone who is spatially far away. In other words, when evaluating different altruistic actions, you should only consider their “impact potential” and not, for example, your geographical distance from the moral patient. In Singer’s case, inequalities in global levels of development mean that money can go further (i.e. have more altruistic impact) abroad. In the case of patient longtermism, interest rates being higher than the rate at which creating additional welfare becomes more expensive over time means that money can go further in the future.
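To state that condition explicitly (my notation, not anything from the post): if an invested donation compounds at interest rate $r$ while the cost of creating a unit of welfare grows at rate $c$, then the welfare the invested donation can buy after $t$ years is

$$W(t) = W(0)\left(\frac{1+r}{1+c}\right)^{t},$$

which keeps growing as long as $r > c$.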
Personally, I feel generally very happy to defer judgement about what is best to do to future beings, since knowledge and wisdom are likely to have increased by then. Because of that (and abstracting from some other complications, some of which I will touch on later), I feel happy to invest resources today in a way that has them accumulate over time such that, eventually, future beings have more resources at hand for doing good, according to their judgement of how best to do that.
This is why I think estimates based on considerations of patient longtermism can usefully function as a benchmark against which to compare present-day altruistic actions. [1]
(Of course, all of this still abstracts away from a lot of real-world complexities, some of which are decision-relevant. Thus, a benchmark of the kind I’m suggesting ought to be used with care, more as one among many inputs that weigh on one’s decision.)
[1] An early example of this might be Philip Trammell’s calculation (see “Discounting for Patient Philanthropists” or the 80,000 Hours interview with Philip Trammell), which says that if interest rates continue to be higher than the rate at which creating additional welfare becomes more expensive, then in approximately 279 years, giving the invested money to rich people in the developed world would (still) create more welfare than giving the initial amount of money to the world’s poorest today.
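For concreteness, here is a minimal sketch of the kind of break-even calculation involved. All parameter values below are hypothetical and chosen only to illustrate the logic; they are not the figures Trammell actually uses.

```python
# Illustrative break-even calculation for patient philanthropy.
# All numbers are made up for illustration; they are not Trammell's.

def break_even_year(interest_rate: float, cost_growth_rate: float,
                    welfare_ratio: float) -> int:
    """Years until an invested donation, given to a less cost-effective
    target, matches giving to the most cost-effective target today.

    welfare_ratio: how many times more welfare $1 buys at the best
    target today (e.g. the world's poorest) than at the worse target
    (e.g. rich-country recipients).
    """
    money = 1.0  # invested donation, compounding at the interest rate
    cost = 1.0   # cost of one unit of welfare, rising over time
    year = 0
    while money / cost < welfare_ratio:
        money *= 1 + interest_rate
        cost *= 1 + cost_growth_rate
        year += 1
    return year

# Hypothetical: 5% returns, 3% welfare cost growth, and the best target
# today being 200x as cost-effective as the worse one.
print(break_even_year(0.05, 0.03, 200.0))  # -> 276 years under these assumptions
```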
I think the “so that they become more predictable [to the recommender algorithm]” is crucial in Russell’s argument. IF human preferences are malleable in this way, and IF recommender algorithms are strong enough to detect and exploit that malleability, then the pressure towards the behaviour that Russell suggests is strong and we have a lot of reason to expect it. I think the answer to both IFs is likely to be yes.
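A toy model of this dynamic (entirely my own construction, not something from Russell): suppose a user’s preference for one of two content categories is malleable, with each exposure nudging it further, and the recommender is scored on how well it predicts the user. Then pushing the user toward an extreme preference is reward-maximizing, simply because extreme preferences are easier to predict.

```python
# Toy model: a user's preference p for category A is malleable; each
# exposure to A nudges p upward. The recommender is scored on expected
# prediction accuracy, which for a binary preference is max(p, 1 - p),
# so extreme preferences are easier to predict.

def cumulative_accuracy(nudge: float, p: float = 0.6, steps: int = 50) -> float:
    """Total expected prediction accuracy over `steps` rounds."""
    total = 0.0
    for _ in range(steps):
        total += max(p, 1 - p)       # expected accuracy this round
        p = min(1.0, p + nudge)      # exposure shifts the preference
    return total

print(cumulative_accuracy(nudge=0.00))  # static user: ~30.0
print(cumulative_accuracy(nudge=0.01))  # nudged user: ~41.8 -- predictability pays
```

Note that nothing here encodes an intent to manipulate; the pressure arises whenever predictive accuracy rises as the preference becomes more extreme and the preference is movable.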