[80k and others claim that]: “The development and introduction of disruptive new technologies is a more fundamental and important driver of long-term change than socio-political reform or institutional change.”
1) Why do you assume this is ideological (in the sense of not being empirically grounded)?
Anecdotally, I was a protest-happy socialist as a teenager, but changed my mind after reading about the many historical failures, grasping the depth of the calculation problem, and so on. This at least felt like an ideological shift dependent on facts.
2) 80,000 Hours have been pushing congressional staffer and MP as top careers (literally #1 or #2 on the somewhat-deprecated quiz) for years. And improving institutions is on their front page of problem areas.
One great example is the pain gap / access abyss. The term was only coined around 2017; it got some attention at EA Global London 2017 (?), and then OPIS stepped up. I don’t think the OPIS staff were doing a cause-neutral search for this (they were founded in 2016) so much as arriving at it by independent convergence.
Yudkowsky once officiated at a wedding. I find it quite beautiful.
Re: CSR. George Howlett started Effective Workplace Activism a couple of years ago, but it didn’t take off that much. Their handbook is useful.
I tried quite hard to change my large corporation’s charity selection process (maybe 50 hours’ work), but found the stubborn localism and fuzzies-orientation impossible to budge (for someone of my persuasiveness and seniority).
I haven’t read much deep ecology, but I model them as strict anti-interventionists rather than nature maximisers (or satisficers): isn’t it that they value whatever ‘the course of things without us’ would be?
(They certainly don’t mind particular deaths, or particular species extinctions.)
But even if I’m right about that, you’re surely right that some would bite the bullet when universal extinction was threatened. Do you know any people who accept that maintaining a ‘garden world’ is implied by valuing nature in itself?
I was sure that Kurzweil would be one, but actually he’s still on track. (“Proper Turing test passed by 2029”).
I wonder if the dismissive received view on him is because he states specific years (to make himself falsifiable), which people interpret as crankish overconfidence.
I really like this, particularly how the conceptual splits lead to the appropriate mitigations.
The best taxonomy of uncertainty I’ve ever seen is this great paper by some physicists reflecting on the Great Recession. It’s ordinal and gives a bit more granularity to the stats (“Opaque”) branch of your tree, and also has a (half-serious) capstone category for catching events beyond reason:
1. “Complete certainty”. You are in a Newtonian clockwork universe with no residuals, no observer effects, utterly stable parameters. So perfect information yields perfect predictions.
2. “Risk without uncertainty”. You know a probability distribution for an exhaustive set of outcomes. No statistical inference needed. This is life in a hypothetical honest casino, where the rules are transparent and always followed. This situation bears little resemblance to financial markets.
3. “Fully Reducible Uncertainty”. There is one probability distribution over a set of known outcomes, but parameters are unknown. Like an honest casino, but one in which the odds are not posted and must therefore be inferred from experience. In broader terms, fully reducible uncertainty describes a world in which a single model generates all outcomes, and this model is parameterized by a finite number of unknown parameters that do not change over time and which can be estimated with an arbitrary degree of precision given enough data. As sample size increases, classical inference brings this down to level 2.
4. “Partially Reducible Uncertainty”. The distribution generating the data changes too frequently or is too complex to be estimated, or it consists of several nonperiodic regimes. Statistical inference cannot ever reduce this uncertainty to risk. Four sources: (1) stochastic or time-varying parameters that vary too frequently to be estimated accurately; (2) nonlinearities too complex to be captured by existing models, techniques, and datasets; (3) non-stationarities and non-ergodicities that render useless the Law of Large Numbers, Central Limit Theorem, and other methods of statistical inference and approximation; and (4) the dependence on relevant but unknown and unknowable conditioning information...
5. “Irreducible uncertainty”. Ignorance so complete that it cannot be reduced using data: no distribution, so no success in risk management. Such uncertainty is beyond the reach of probabilistic reasoning, statistical inference, and any meaningful quantification. This type of uncertainty is the domain of philosophers and religious leaders, who focus on not only the unknown, but the unknowable.
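To see the gap between levels 3 and 2 concretely, here’s a minimal sketch (mine, not the paper’s; the bias and sample sizes are invented) of a gambler inferring an honest casino’s unposted odds:

```python
import random

# Level 3 ("fully reducible uncertainty"): a fixed but unknown coin bias.
# Enough draws let classical inference pin down the parameter, collapsing
# the situation to level 2 (known risk).
random.seed(0)
TRUE_BIAS = 0.37  # unknown to the gambler; an invented value

for n in [10, 100, 1_000, 10_000, 100_000]:
    heads = sum(random.random() < TRUE_BIAS for _ in range(n))
    estimate = heads / n
    print(f"n={n:>6}  estimate={estimate:.3f}  error={abs(estimate - TRUE_BIAS):.3f}")
```

Levels 4 and 5 are precisely the situations where no amount of this sampling helps: the bias drifts, or there is no bias to find.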
No idea, sorry. I know CSER have held at least one workshop about Trump and populism, so maybe try Julius Weitzdoerfer:
[Trump] will make people aware that they have to think about risks, but, in a world where scientific evidence isn’t taken into account, all the threats we face will increase.
Fair. But without tech there would be much less to fight for. So it’s multiplicative.
Good call. I’d add organised labour if I was doing a personal accounting.
We could probably have had trans rights without Burou’s surgeries and HRT, but they surely had some impact, bringing it forward (?).
No, I don’t have a strong opinion either way. I suspect they’re ‘wickedly’ entangled. Just pushing back against the assumption that historical views, or policy views, can be assumed to be unempirical.
Is your claim (that soc > tech) retrospective only? I can think of plenty of speculated technologies that would swamp all past social effects (e.g. super-longevity, brain emulation, suffering abolitionism) and perhaps all future social effects.
This comment is a wonderful crystallisation of the ‘defensive statistics’ of Andrew Gelman, James Heathers and other great epistemic policemen. Thanks!
Thanks for this. I’m not very familiar with the context, but let me see if I understand. (In a first for me, I’m not sure whether to ask you to cite more scripture or add more formal argument.) Let’s assume a Christian god, and call a rational consequence-counting believer an Optimising Christian.
Your overall point is that there are (or might be) two disjoint ethics, one for us and one for God, and that ours has a smaller scope, falling short of long-termism, for obvious reasons. Is this an orthodox view?
1. “The Bible says not to worry, since you can trust God to make things right. Planning is not worrying though. This puts a cap on the intensity of our longterm concern.”
2. “Humans are obviously not as good at longtermism as God, so we can leave it to Him.”
3. “Classical theism: at least parts of the future are fixed, and God promised us no (more) existential catastrophes. (Via flooding.)”
4. “Optimising Christians don’t need to bring (maximally many) people into existence: it’s supererogatory.” But large parts of Christianity take population increase very seriously as an obligation (based on e.g. Genesis 1:28 or Psalm 127). Do you know of doctrine that Christian universalism stops at present people?
5. “Optimising Christians only need to ‘satisfice’ their fellows, raising them out of subsistence. Positive consequentialism is for God.” This idea has a similar structure to negative utilitarianism, a moral system with an unusual number of philosophical difficulties. Why does bliss or happiness carry no, or insufficient, moral weight? And, theologically: does orthodoxy say we don’t need to make others (very) happy?
If I understand you, in your points (1) through (4) you appeal to a notion of God’s agency outside of human action or natural laws. (So miracles only?) But a better theology of causation wouldn’t rely on miracles, instead viewing the whole causal history of the universe as constituting God’s agency. That interpretation, which at least doesn’t contradict physics, would keep optimising Christians on the hook for x-risk.
Many of your points are appropriately hedged—e.g. “it might also be God’s job”—but this makes it difficult to read off actions from the claims. (You also appeal to a qualitative kind of Bayesian belief updating, e.g. “significant but not conclusive reason”.) Are you familiar with the parliamentary model of ethics? It helps us act even while holding nuanced/confused views—e.g. for the causation question I raised above, each agent could place their own subjective probabilities on occasionalism, fatalism, hands-off theology and so on, and then work out what the decision should be. This kind of analysis could turn your post from food-for-thought into a tool for working through ancient debates and imponderables.
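For instance, a minimal sketch of the calculation (the credences and votes are invented for illustration, not a claim about your actual views):

```python
# Toy parliamentary-model calculation; all numbers are invented.
# Delegates are apportioned by credence in each theology of causation;
# each bloc votes on the action "work on x-risk".
credences = {
    "occasionalism": 0.2,        # God is the sole true cause
    "fatalism": 0.1,             # the future is fixed regardless of us
    "hands-off theology": 0.7,   # ordinary causation, humans included, is God's agency
}
votes = {                        # 1 = that view endorses x-risk work, 0 = it doesn't
    "occasionalism": 0,
    "fatalism": 0,
    "hands-off theology": 1,
}

support = sum(credences[v] * votes[v] for v in credences)
print(f"Weighted support for working on x-risk: {support:.0%}")  # -> 70%
```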
People probably won’t give those examples here, for civility reasons. The SSC post linked above covers some practices Greg probably means, using historical examples.
Example of a practical benefit from taking the intentional stance: this (n=116) study of teaching programming by personalising the editor:
what “confidence level” means
Good question. To be honest, it was just me intuiting the chance that all of the premises and exemptions are true, which maybe cashes out to your first option. I’m happy to use a conventional measure, if there’s a convention on here.
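To illustrate the cashing-out (numbers invented, and assuming the premises are roughly independent):

$$P(\text{conclusion}) \approx \prod_i P(\text{premise}_i) = 0.9 \times 0.8 \times 0.7 \approx 0.5$$

so middling credence in each of a few premises already pulls the headline number down to a coin flip.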
Would also invite people who disagree to comment.
something like “extinction is less than 1% likely, not because...”
Interesting. This neatly sidesteps Ord’s argument (about low extinction probability implying proportionally higher expected value), which I just added above.
Another objection I missed, which I think is the clincher inside EA, is a kind of defensive empiricism, e.g. Jeff Kaufman:
I’m much more skeptical than most people I talk to, even most people in EA, about our ability to make progress without good feedback. This is where I think the argument for x-risk is weakest: how can we know if what we’re doing is helping...?
I take this very seriously; it’s why I focus on the ML branch of AI safety. If there is a response to this (excellent) philosophy, it might be that it’s equivalent to risk aversion (the bad kind) somehow. Not sure.
There’s a small, new literature analysing the subset of nonexistence I think you mean, under the name “impossible worlds”. (The authors have no moral or meta-ethical aims.) It might help to use their typology of impossible situations: Impossible Ways vs Logic Violators vs Classical Logic Violators vs Contradiction-Realizers.
To avoid confusion, consider ‘necessarily-nonexistent’ or ‘impossible moral patients’ or some new coinage like that, instead of just ‘nonexistent beings’; otherwise people will think you’re talking about the old Nonidentity Problem.
I think you’ll struggle to make progress, because the intuition that only possible people can be moral patients is so strong, stronger than the one about electrons or microbial life and so on. In the absence of positive reasons (rather than just speculative caution), the project can be expected to move attention away from moral patients to nonpatients—at least, your attention.
Meta: If you don’t want to edit out the thirteen paragraphs of preamble, maybe add a biggish summary paragraph at the top; the first time I read it (skimming, but still) I couldn’t find the proposition.
Re: 2. Here are a few.
I mean that the end of the world isn’t a bad outcome to someone who only values the absence of suffering, and who is perfectly indifferent between all ‘positive’ states. (This is Ord’s definition of absolute NU, so I don’t think I’m straw-manning that kind.) And if something isn’t bad (and doesn’t prevent any good), a utilitarian ‘doesn’t have to work on it’ in the sense that there’s no moral imperative to.
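To make that precise (my formalisation, not Ord’s notation): write a world $w$’s total suffering as $S(w) \ge 0$, so absolute NU has

$$U_{\text{ANU}}(w) = -S(w), \qquad \max_w U_{\text{ANU}}(w) = 0,$$

and the maximum is attained by any world with zero suffering, the empty world included. Extinction is therefore weakly optimal, not bad.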
(1) That makes sense. But there’s an escalation problem: the worse the risk, the better it looks to ANU (see below).
(2) One dreadful idea is that self-replicators would do the anti-suffering work, obviating the need for sentient guardians, but I see what you’re saying. Again though, this uncertainty about moral patients licenses ANU work on x-risks to humans… but only while moving the degenerate ‘solution’ upward, to valuing risks that destroy more classes of candidate moral patients. At the limit, the end of the entire universe is indisputably optimal to an ANU. So you’re right about Earth x-risks (which is mostly all people talk about) but not about really far-out sci-fi ones, which ANU seems to value.
Actually this degenerate motion might change matters practically: it seems improbable that it’d be harder to remove suffering with biotechnology than to destroy everything. Up to you if you’re willing to bite the bullet on the remaining theoretical repugnance.
(To clarify, I think basically no negative utilitarian wants this, including those who identify with absolute NU. But that suggests that their utility function is more complex than they let on. You hint at this when you mention valuing an ‘infinite game’ of suffering alleviation. This doesn’t make sense on the ANU account, because each iteration can only break even (not increase suffering) or lose (increase suffering).)
Most ethical views have degenerate points in them, but valuing the greatest destruction equal to the greatest hedonic triumph is unusually repugnant, even among repugnant conclusions.
I don’t think instrumentally valuing positive states helps with the x-risk question, because they get trumped by a sufficiently large amount of terminal value, again e.g. the end of all things.
(I’m not making claims about other kinds of NU.)