My assessment is that actually the opposite is true.
The argument you presented appears excellent to me, and I’ve now changed my mind on this particular point.
Thanks. I don’t agree with your interpretation of the survey data. I’ll quote another sentence from the essay that makes my position on this clearer,
The majority of the population of Taiwan simply want to be left alone, as a sovereign nation—which they already are, in every practical sense.
The position “declare independence as soon as possible” is unpopular for an obvious reason that I explained in the post. Namely, if Taiwan made a formal declaration of independence, it would potentially trigger a Chinese invasion.
“Maintaining the status quo” is, for the most part, code for maintaining functional independence, which is popular because, as you said, “It means peace and prosperity, and it has been surprisingly stable over the last 70 years.” This is what I meant by saying the Taiwanese “want to be their own nation instead, indefinitely” in the sentence you quoted: I was talking about what’s true in practice, not just what’s true on paper.
I’ll note that if you add up the percentage of people who want to maintain the status quo indefinitely, and those who want to maintain the status quo but move towards independence, it sums to 52.4%. It goes up to 58.4% if you include people who want to declare independence as soon as possible.
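(To make the arithmetic explicit: the survey’s individual status-quo subtotals aren’t reproduced here, only their combined total, so the 6.0% figure for immediate independence below is simply the difference between the two totals above.)

$$52.4\% \;+\; \underbrace{(58.4\% - 52.4\%)}_{=\,6.0\%,\ \text{declare independence ASAP}} \;=\; 58.4\%$$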
I admit my wording sucked, but I think what I said basically matches the facts on the ground, if not the literal survey data you quoted, in the sense that there is almost no political will right now to reunify with China (at least until some hypothetical conditions are met, which probably won’t happen any time soon).
I like that you admit that your examples are cherry-picked. But I’m actually curious what a non-cherry-picked track record would show. Can people point to Yudkowsky’s successes?
While he’s not single-handedly responsible, he led the movement to take AI risk seriously at a time when approximately no one was talking about it, and that movement has now attracted the interest of top academics. This isn’t a complete track record, but it’s still a very important data point. It’s a bit like if he were the first person to say that we should take nuclear war seriously, and then five years later people are starting to build nuclear bombs and academics realize that nuclear war is very plausible.
What I view as the Standard Model of Longtermism is something like the following:
At some point we will develop advanced AI capable of “running the show” for civilization on a high level
The values in our AI will determine, to a large extent, the shape of our future cosmic civilization
One possibility is that AI values will be alien. From a human perspective, this would cause either extinction or something equally bad.
To avoid that last possibility, we ought to figure out how to instill human-centered values in our machines.
This model doesn’t predict that longtermists will make the future much larger than it otherwise would be. It just predicts that they’ll make it look a bit different than it otherwise would.
Of course, there are other existential risks that longtermists care about. Avoiding those will have the effect of making the future larger in expectation, but most longtermists seem to agree that non-AI x-risks are small by comparison to AI.
I have an issue with your statement that longtermists neglect suffering because they just maximize total (symmetric) welfare. I don’t think this statement is actually true, though I agree if you just mean that, pragmatically, most longtermists aren’t suffering-focused.
Hilary Greaves and William MacAskill loosely define strong longtermism as, “the view that impact on the far future is the most important feature of our actions today.” Longtermism is therefore completely agnostic about whether you’re a suffering-focused altruist or a traditional welfarist in line with Jeremy Bentham. It’s entirely consistent to prefer to minimize suffering over the long-run future and be a longtermist. Or, put another way, there are no major axiological commitments involved in being a longtermist, other than the view that we should treat value in the far future similarly to the way we treat value in the near future.
Of course, in practice, longtermists are more likely to advocate a Benthamite utility function than a negative utilitarian one. But it’s still completely consistent to be a negative utilitarian and a longtermist, and in fact I consider myself one.
There is an estimate of 24.9 million people in slavery, of which 4.8 million are sexually exploited! Very likely these estimates are exaggerated, and the conditions are not as bad as one would think hearing those words, and even if they were the conditions might not be as bad as battery cages, but my broader point is that the world really does seem like it is very broken and there are problems of huge scale even just restricting to human welfare, and you still have to prioritize, which means ignoring some truly massive problems.
I agree, there is already a lot of human suffering that longtermists de-prioritize. More concrete examples include,
The 0.57% of the US population that is imprisoned at any given time this year. (This might even be more analogous to battery cages than slavery).
The 25.78 million people who live under the totalitarian North Korean regime.
The estimated 27.2% of the adult US population who live with more than one of these chronic health conditions: arthritis, cancer, chronic obstructive pulmonary disease, coronary heart disease, current asthma, diabetes, hepatitis, hypertension, stroke, and weak or failing kidneys.
The nearly 10% of the world population who live in extreme poverty, defined as a level of consumption equivalent to less than $2 of spending per day, adjusting for price differences between nations.
The 7 million Americans who are currently having their brain rot away, bit by bit, due to Alzheimer’s and other forms of dementia. Not to mention their loved ones who are forced to witness this.
The 6% of the US population who experienced at least one major depressive episode in the last year.
The estimated half a million people who are homeless in the United States.
The significant fraction of people who have profound difficulties with learning and performing work, and who disproportionately live in poverty and are isolated from friends and family.
I want to understand the main claims of this post better. My understanding is that you have made the following chain of reasoning:
1. OpenPhil has funded think tanks that advocated looser macroeconomic policy since 2014.
2. This had some non-trivial effect on actual macroeconomic policy in 2020-2022.
3. The result of this policy was to contribute to high inflation.
4. High inflation is bad for two reasons: (1) real wages decline, especially among the poor, and (2) inflation causes populism, which may cause Democrats to lose the 2022 midterm elections.
5. Therefore, OpenPhil should not make similar grants in the future.
I’m with you on claims 1, 2, and 3. I’m not sure about 4 and 5. Let me focus on my confusions with claim 4.
In another comment, I pointed out that it wasn’t clear to me that inflation hurts low-wage workers by a substantial margin. Maybe the sources I cited there were poor, but it doesn’t seem like there’s a consensus about this issue to my (untrained) eyes.
The fact that prediction markets currently indicate that Republicans have an edge in the midterm elections is not surprising. FiveThirtyEight says, “One of the most ironclad rules in American politics is that the president’s party loses ground in midterm elections.” The only modern exception to this rule was the 2002 midterm election, in which Republicans gained seats because of 9/11.
If we look at ElectionBettingOdds, it appears that the main shock that pushed the markets in favor of a Republican win was the election last year (see the Senate and House forecasts). It’s harder to see Republicans gaining due to inflation in the data (though I agree they probably did). EDIT: OK, I think it’s clearer to me now that the spike in the House forecast in May 2021 was probably due to inflation concerns.
More voters have seen their real wages go down than up (mostly in the lower income brackets).
What is your source for this claim? By contrast, this article says,
Between roughly 56 and 57 percent of occupations, largely concentrated in the bottom half of the income distribution, are seeing real hourly wage increases.
And they show this chart,
Here’s another article that cites economists saying the same thing.
Here’s a quote from Wei Dai, speaking on February 26th, 2020,
Here’s another example, which has actually happened 3 times to me already:
The truly ignorant don’t wear masks.
Many people wear masks or encourage others to wear masks in part to signal their knowledge and conscientiousness.
“Experts” counter-signal with “masks don’t do much”, “we should be evidence-based” and “WHO says ‘If you are healthy, you only need to wear a mask if you are taking care of a person with suspected 2019-nCoV infection.’”
I respond by citing actual evidence in the form of a meta-analysis: medical procedure masks combined with hand hygiene achieved RR of .73 while hand hygiene alone had a (not statistically significant) RR of .86.
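(For reference, and assuming the standard definition of relative risk: RR compares the outcome rate in the intervention group with that in the control group, so values below 1 favor the intervention.)

$$\mathrm{RR} = \frac{\text{infection rate, intervention group}}{\text{infection rate, control group}}, \qquad 1 - 0.73 = 27\%, \qquad 1 - 0.86 = 14\%$$

On that reading, masks plus hand hygiene corresponded to roughly a 27% relative reduction in infection risk, versus about 14% for hand hygiene alone (the latter not statistically significant).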
After over a month of dragging their feet, and a whole bunch of experts saying misleading things, the CDC finally recommended people wear masks on April 3rd 2020.
Thanks for the continued discussion.
If I’m understanding correctly the main point you’re making is that I probably shouldn’t have said this:
There is little room for improvement here...
I think I’m making two points. The first point was, yeah, I think there is substantial room for improvement here. The second point is that analyzing the situation with Taiwan is crucial if we want to effectively reduce nuclear risk.
I do not think it was wrong to focus on the trade war. It depends on your goals. If you wanted to promote quick, actionable and robust advice, it made sense. If you wanted to stare straight into the abyss, and solve the problem directly, it made a little less sense. Sometimes the first thing is what we need. But, as I’m glad to hear, you seem to agree with me that we also sometimes need to do the second thing.
My reason for focusing on the trade war though is because trade deescalation would have very few downsides and would probably be a substantial positive all on its own before even considering the potential positive effects it could have on relations with China and possibly nuclear risk.
I agree. I think we’re both on the same page about the merits of ending the trade war, as an issue by itself.
The optimal policy here is far from clear to me.
Right. From my perspective, this is what makes focusing on Taiwan precisely the right thing to do in a high-level analysis.
My understanding of your point here is something like, “The US-Taiwan policy is a super complicated issue so I decided not to even touch it.” But, since the US-Taiwan policy is also the most important question regarding US-China relations, not talking about it is basically just avoiding the hard part of the issue. It’s going to be difficult to make any progress if we don’t do the hard work of actually addressing the central problem.
(Maybe this is an unfair analogy, but I find what you’re saying to be a bit similar to, “I have an essay due in 12 hours. It’s on an extremely fraught topic, and I’m unsure whether my thesis is sound, or whether the supporting arguments make any sense. So, rather than deeply reconsider the points I make in my essay, I’ll just focus on making sure the essay has the right formatting instead.” I can sympathize with this sort of procrastination emotionally, but the clock is still ticking.)
I agree it’s a significant issue that should be carefully considered, but it’s also an issue that I’m sure international relations experts have spilled huge amounts of ink over so I’m not sure if there are any clearly superior policy improvements available in this area.
I expect experts to have spilled a huge amount of ink over basically every policy regarding US-China relations, so I don’t see this as a uniquely asymmetric argument against thinking about Taiwan. Maybe your point is merely that these experts have not yet come to a conclusion, so it seems unlikely that you could come to a conclusion in the span of a short essay. That would be a fair reply, but I have two brief heuristic thoughts on that,
Most international relations experts neither understand an EA mindset nor are motivated by one. To the extent that you buy EA philosophy, I think we are well-positioned to have interesting analyses of questions such as, “Is it worth risking nuclear war to save a vibrant democracy?” It’s not clear to me at all that moral philosophers have adequately responded to this question already, in the way EAs would find appealing.
I understand the mindset of “Don’t try to make progress on a topic that experts have thought about for decades and yet have gone nowhere on.” That’s probably true for things like string theory and the Collatz conjecture. But this is “philosophy with a deadline,” to co-opt a phrase from Nick Bostrom. There’s a real chance that World War 3 is coming in the next few decades, so we’d better look that possibility in the face, rather than turning away and caring about something comparatively minor instead.
You mention ending the trade war as the main mechanism by which we could ease US-China tensions. I agree that this policy change seems especially tractable, but it does not appear to me to be an effective means of avoiding a global conflict. As Stefan Schubert pointed out, the tariffs appear to have a very modest effect on either the American or Chinese economy.
The elephant in the room, as you alluded to, is Taiwan. A Chinese invasion of Taiwan, and subsequent intervention by the United States, is plausibly the most likely trigger for World War 3 in the near-term future. You write that,
There is little room for improvement here, as China-Taiwan relations have a long history, and the US must walk a fine line between supporting Taiwan but also not signaling to Taiwan that US support will enable Taiwan to declare full independence, which could raise the likelihood of hostilities from China.
However, we can just as easily say that because the US position on Taiwan is ambiguous, there is much more room for improvement here. More specifically, since it’s unclear how and whether the US will intervene in a Chinese-Taiwan conflict, this indicates that US foreign policy is variable and easily subject to change.
In this situation, we can imagine that a mere change in attitude from the US president could be enough to dramatically influence the plausibility of a global conflict. For instance, suppose in the future, an anti-Taiwan president gets elected in America, and as a result, China decides to invade Taiwan, confident that the US will not respond. This election would then have profound implications for not only Taiwan, but the shape of global politics going forward.
We need to think very seriously about how the US should approach the China-Taiwan situation. Should we attempt to defend a vibrant democracy at the risk of starting a catastrophic nuclear war? This is a real question, with real stakes, and one where public opinion has a real chance of determining what ends up happening. In my opinion, the trade war is much less important.
One question I have is whether this is possible and how difficult it is?
I think it would be very difficult without human assistance. I don’t, for example, think that aliens could hijack the computer hardware we use to process potential signals (though, it would perhaps be wise not to underestimate billion-year-old aliens).
We can imagine the following alternative strategy of attack. Suppose the aliens sent us the code to an AI with the note “This AI will solve all your problems: poverty, disease, world hunger etc.”. We can’t verify that the AI will actually do any of those things, but enough people think that the aliens aren’t lying that we decide to try it.
After running the AI, it immediately begins its plans for world domination. Soon afterwards, humanity is extinct; and in our place, an alien AI begins constructing a world more favorable to alien values than our own.
I don’t find the scenario plausible. I think the grabby aliens model (cited in the post) provides a strong reason to doubt that there will be many so-called “quiet” aliens that hide their existence. Moreover, I think malicious grabby (or loud) aliens would not wait for messages before striking, which the Dark Forest theory relies critically on. See also section 15 in the grabby aliens paper, under the heading “SETI Implications”.
In general, I don’t think there are significant risks associated with messaging aliens (a thesis that other EAs have argued for, along these lines).
I think failing to act can itself be atrocious. For example, the failure of rich nations to intervene in the Rwandan genocide was an atrocity. Further, I expect Peter Singer to agree that this was an atrocity. Therefore, I do not think that deontological commitments are sufficient to prevent oneself from being party to atrocities.
My interpretation of Peter Singer’s thesis is that we should be extremely cautious about acting on a philosophy that claims that an issue is extremely important, since we should be mindful that such philosophies have been used to justify atrocities in the past. But I have two big objections to his thesis.
First, it actually matters whether the philosophy we are talking about is a good one. Singer provides a comparison to communism and Nazism, both of which were used to justify repression and genocide during the 20th century. But are either of these philosophies even theoretically valid, in the sense of being both truth-seeking and based on compassion? I’d argue no. And the fact that these philosophies are invalid was partly why people committed crimes in their name.
Second, this argument proves too much. We could have presented an identical argument to a young Peter Singer in the context of animal farming. “But Peter, if people realize just how many billions of animals are suffering, then this philosophy could be used to justify genocide!” Yet my guess is that Singer would not have been persuaded by that argument at the time, for an obvious reason.
Any moral philosophy which permits ranking issues by importance (and are there any which do not?) can be used to justify atrocities. The important thing is whether the practitioners of the philosophy strongly disavow anti-social or violent actions themselves. And there’s abundant evidence that they do in this case, as I have not seen even a single prominent x-risk researcher publicly recommend that anyone commit violent acts of any kind.
I’m happy with more critiques of total utilitarianism here. :)
For what it’s worth, I think there are a lot of people unsatisfied with total utilitarianism within the EA community. In my anecdotal experience, many longtermists (including myself) are suffering-focused. This often takes the form of negative utilitarianism, but other variants of suffering-focused ethics exist.
I may have missed it, but I didn’t see any part of the paper that explicitly addressed suffering-focused longtermists. (One part mentioned, “Preventing existential risk is not primarily about preventing the suffering and termination of existing humans.”).
I think you might be interested in the arguments made for caring about the long-term future from a suffering-focused perspective. The arguments for avoiding existential risk are translated into arguments for reducing s-risks.
I also think that suffering-focused altruists are not especially vulnerable to your argument about moral pluralism. In particular, what matters to me is not the values of humans who exist now but the values of everyone who will ever exist. A natural generalization of this principle is the idea that we should try to step on as few people’s preferences as possible (with the preferences of animals and sentient AI included), which leads to a sort of negative preference utilitarianism.
Another strange implication is that enough worlds of utopia plus pinprick would be worse than a world of pure torture.
I view this implication as merely the consequence of two facts, (1) utilitarians generally endorse torture in the torture vs. dust specks thought experiment, (2) negative preference utilitarians don’t find value in creating new beings just to satisfy their preferences.
The first fact is shared by all non-lexical varieties of consequentialism, so it doesn’t appear to be a unique critique of negative preference utilitarianism.
The second fact doesn’t seem counterintuitive to me, personally. When I try to visualize why other people find it counterintuitive, I end up imagining that it would be sad/shameful/disappointing if we never created a utopia. But under negative preference utilitarianism, existing preferences to create and live in a utopia are already taken into account. So, it’s not optimal to ignore these people’s wishes.
On the other hand, I find it unintuitive that we should build preferenceonium (homogeneous matter optimized to have very strong preferences that are immediately satisfied). So, this objection doesn’t move me by much.
A final implication is that for a world of Buddhist monks who have rid themselves completely of desires and merely take in the joys of life without having any firm desires for future states of the world, it would be morally neutral to bring their well-being to zero.
I think if someone genuinely rid themselves of all desire then, yes, it would be acceptable to lower their well-being to zero (note that we should also take into account their preferences not to be exploited in such a manner). But this thought experiment seems hollow to me, because of the well-known difficulty of detaching oneself completely from material wants, or of empathizing with those who have truly done so.
The force of the thought experiment seems to rest almost entirely on the intuition that the monks have not actually succeeded—as you say, they “merely take in the joys of life without having desires”. But if they really have no desires, then why are they taking joy in life? Indeed, why would they take any action whatsoever?