This Friday I’m again interviewing William MacAskill, this time just about his upcoming book ‘What We Owe The Future’, for what may become 80,000 Hours’ new audio intro to longtermism.
We’ve got 3 or so hours — what should I ask him?
Previous interviews:
Previous MCE projects like abolitionism, or liberal projects like extending suffrage to non-landowning non-white males, were fighting against the forcible removal of voice from people who had the ability to speak for themselves. Contemporary MCE projects like animals and future people do not share this property; I believe that animals cannot advocate for themselves, and the best proxy for future peoples’ political interests I can think of falls really short. In this light, does it make any sense at all to say that there’s a continuity of MCE activism across domains/problem areas?
I think it makes sense for, say, covid-era vaccine administrators to think of themselves as carrying on the legacy of the groups who put smallpox in the ground, but it may not make the same sense for longtermists to think of themselves as carrying on the legacy of slavery abolition just because both families of projects in some sense look like MCE.
Relatedly, does classifying abolitionism as an MCE project downplay the agency of the slaves and overemphasize the actions of non-enslaved altruists/activists?
In other words, contemporary MCE/liberalism may actually be agents fighting for patients, whereas prior MCE/liberalism was agents who happened to have political recognition fighting alongside agents who happened to lack recognition. Does this distinction hold water with respect to your research?
Given that longtermism seems to have turned out to be a crucial consideration which a priori might have been considered counterintuitive or very absurd, should we be on the lookout for similarly important but wild & out-there options? How far should the EA community be willing to ride the train to crazy town (or, rather, how much variance should there be in the EA community on this? Normal or log-normal)?
For example, one could consider things like multiverse-wide cooperation, acausal trade, and options for creating infinite amounts of value and how to compare those (although I guess this has already been thought about in the area of infinite ethics), and try to actively search for such considerations & figure out their implications (which doesn’t appear to have much prominence in EA at the moment). (Other examples listed here.)
I remember a post by Tomasik (can’t find it right now) where he argues that the expected size of a new crucial consideration should be the average of all past instances of such considerations. If we apply this here, the possible value seems high.
A bit late, but it might be this post:
Thanks! Definitely not too late, I’m often looking for this particular cite.
What are the odds of extinction from nuclear, AI, bio, climate change, etc.?
His thoughts on the threat of “population collapse”?
How work on existential risk compares to work on animal welfare and global poverty in expected value (is it 50% better? 100x better?)
How does work on animal welfare and global poverty affect existential risk and the quality of the long-term future?
Where do Nick Bostrom, Toby Ord, Eliezer Yudkowsky, etc. go wrong that leads them to believe in substantially higher levels of AI risk than you?
What new EA projects would you like to see which haven’t been recommended by OpenPhil, FTX Future Fund, etc.?
Do you believe in the perennialist philosophy (the perspective in philosophy and spirituality that holds that all of the world’s religious traditions share a single metaphysical truth or origin from which all esoteric and exoteric knowledge and doctrine has grown)? What would the discovery of absolute truth mean for the long-term future?
What problems need to be solved before we’ve created the “best possible world”? Or can we just rely on AGI to solve our problems?
Which values (besides MCE) are important for making sure the future goes well?
How can we “improve institutions to promote development”, as recommended as a potentially pressing longtermist issue by 80,000 Hours?
What bad, non-extinction risks does AI pose?
Does E.A. underestimate the importance of becoming a space-faring species for ensuring the survival of humanity?
How can we prevent totalitarianism?
Where do you differ from SBF on E.A. priorities? How would you spend $1 billion?
For some classes of meta-ethical dilemmas, Moral Uncertainty recommends using variance voting, which requires you to know the mean and variance of each theory under consideration.
How is this applied in practice? Say I give 95% weight to Total Utilitarianism and 5% weight to Average Utilitarianism, and I’m evaluating an intervention that’s valued differently by each theory. Do I literally attempt to calculate values for variance? Or am I just reasoning abstractly about possible values?
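For concreteness, here is a rough sketch of what the literal calculation might look like. The option values, the credences, and the choice to compute variance across this small option set are my own illustrative assumptions, not anything taken from the book:

```python
# Toy sketch of variance voting under moral uncertainty.
# All numbers are invented for illustration; variance is taken across the option set.
import statistics

def variance_normalize(values):
    """Rescale a theory's valuations to mean 0 and variance 1 across the options."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical valuations of three interventions under each theory.
total_util = [100.0, 40.0, -20.0]    # Total Utilitarianism
average_util = [-5.0, 10.0, 2.0]     # Average Utilitarianism

norm_total = variance_normalize(total_util)
norm_average = variance_normalize(average_util)

# Credence-weighted sum of the normalized scores (95% / 5%).
scores = [0.95 * t + 0.05 * a for t, a in zip(norm_total, norm_average)]
best = max(range(len(scores)), key=scores.__getitem__)
print(scores, "-> choose option", best)
```

The mechanics are easy once you have numbers; the question is whether, in practice, one is supposed to produce such numbers at all or just reason abstractly about them.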
Can Longtermism succeed without creating a benevolent stable authoritarianism given that it is unlikely that all humans will converge to the same values? Without such a hegemony or convergence of values, doesn’t it seem like conflicting interests among different humans will eventually lead to a catastrophic outcome?
I have an intuition that eliminating the severe suffering of, say, 1 million people might be more important than creating hundreds of trillions of happy people who would otherwise never exist. It’s not that I think there is no value in creating new happy people. It’s just that I think (a) the value of creating new happy people is qualitatively different from that of reducing severe suffering, and (b) sometimes, when two things are of qualitatively different value, no amount of one can add up to a certain amount of the other.
For example, consider two “intelligence machines” with qualitatively different kinds of intelligence. One does complex abstract reasoning and the other counts. I think it would be the case that no matter how much better you made the counting machine at counting, it would never surpass the intelligence of the abstract machine. Even though the counting machine gets more intelligent with each improvement, it never matches the intelligence of the abstract machine, since the latter is of a qualitatively different and superior nature. Similarly, I value both deep romantic love and eating french fries, but I wouldn’t trade in a deep and fulfilling romance for any amount of french fries (even if I never got sick of fries). And I value human happiness and ant happiness, but wouldn’t trade in a million happy humans for any amount of happy ants.
In the same vein, I suspect that the value of reducing the severe suffering of millions is qualitatively different from and superior to the value of creating new happy people such that the latter can never match the former.
Do you think there’s anything to this intuition?
What are his thoughts on person-affecting views and their implications with respect to longtermism, including asymmetric ones, especially Teruji Thomas’s The Asymmetry, Uncertainty, and the Long Term?
How much does longtermism depend on expected value maximization, especially maximizing a utility function that’s additive over moral patients?
What are the best arguments for and against expected value maximization as normatively required?
What does he think about expected value maximization with unbounded (including additive) utility functions being vulnerable to Dutch books and money pumps and violating the sure-thing principle? See, e.g., Paul Christiano’s comment involving St. Petersburg lotteries.
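For readers who haven’t seen these arguments, here is a tiny illustration (my own, not from the linked comment) of why unbounded utilities are the crux: the truncated expected value of a St. Petersburg lottery grows without bound as you include more terms.

```python
# St. Petersburg lottery: win 2**n with probability 1/2**n, for n = 1, 2, ...
# Each term contributes exactly 1, so the partial sums grow without bound.
def truncated_expected_value(n_terms: int) -> float:
    return sum((1 / 2**n) * 2**n for n in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, truncated_expected_value(n))  # prints 10.0, 100.0, 1000.0
```

An agent that assigns this lottery its full expected value ends up preferring it to any finite sure payout, which is where the money-pump and sure-thing-principle trouble starts.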
What does he think about stochastic dominance as an alternative decision theory? Are there any other decision theories he likes?
What are his thoughts about the importance and implications of the possibility of aliens with respect to existential risks, including both extinction risks and s-risks? What about grabby aliens in particular? Should we expect to be replaced (or have our descendants replaced) with aliens eventually anyway? Should we worry about conflicts with aliens leading to s-risks?
If the correct normative view is impartial, is (Bayesian) expected value maximization too agent-centered, like ambiguity aversion with respect to the difference one makes (the latter is discussed in The case for strong longtermism)? Basically, a Bayesian uses their own single joint probability distribution, without good justification for choosing their own over many others. One alternative would be to use something like the maximality rule, where multiple probability distributions are all checked, without committing to a fairly arbitrarily chosen single one.
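To make the alternative concrete, here is a rough sketch of the maximality rule, with hypothetical numbers of my own: an option remains permissible unless some other option does at least as well under every distribution in the set and strictly better under at least one.

```python
# Hypothetical expected values of three options under three candidate
# probability distributions (one row per option).
expected_values = {
    "A": [10.0, 2.0, 5.0],
    "B": [8.0, 3.0, 6.0],
    "C": [1.0, 1.0, 1.0],
}

def dominates(x, y):
    """x is at least as good as y under every distribution, and strictly better under one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

permissible = [
    option for option, vals in expected_values.items()
    if not any(dominates(other_vals, vals)
               for other, other_vals in expected_values.items() if other != option)
]
print(permissible)  # ['A', 'B']: C is dominated and dropped; A and B both remain permissible
```

Unlike Bayesian expected value maximization, this can leave several options permissible rather than singling one out, which is part of what makes it less committed to any single arbitrarily chosen distribution.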
What is his position on EDT vs CDT and other alternatives? What are the main practical implications?
For moral uncertainty, in what (important) cases does he think intertheoretic comparisons are justified (and not arbitrary, i.e. alternative normalizations with vastly different implications aren’t as justifiable)?
What are his meta-ethical views? Is he a moral realist or antirealist? What kind? What are the main practical implications?
Bostrom’s Vulnerable World Hypothesis paper seems to suggest that existential security (xsec) isn’t going to happen, and that we need a dual of the Yudkowsky-Moore law of mad science that raises our vigilance at every timestep to keep up with the drops in the minimal IQ it costs to destroy the world. A lifestyle of such constant vigilance seems leagues away from the goals that futurists tend to get excited about, like long reflections, spacefaring, or a comprehensive assault on suffering itself. Is xsec (in the sense of freedom from extinction being reliable and permanent enough to permit us to pursue common futurist goals) the kind of thing you would actually expect to see if you lived till the year 3000 or 30000, or do you think the world would be in a state of constant vigilance (fear, paranoia) as the bargain for staying alive? What are the most compelling reasons to think that a strong form of xsec, one that doesn’t depend on some positive rate of heightening vigilance in perpetuity, is worth thinking about at all?
My comment on your previous post should have been saved for this one. I copy the questions below:
What do you think is the best approach to achieving existential security and how confident are you on this?
Which chapter/part of “What We Owe The Future” do you think most deviates from the EA mainstream?
In what way(s) would you change the focus of the EA longtermist community if you could?
Do you think more EAs should be choosing careers focused on boosting economic growth/tech progress?
Would you rather see marginal EA resources go towards reducing specific existential risks or boosting economic growth/tech progress?
The Future Fund website highlights immigration reform, slowing down demographic decline, and innovative educational experiments to empower young people with exceptional potential as effective ways to boost economic growth. How confident are you that these are the most effective ways to boost growth?
Where would you donate to most improve the long-term future?
Would you rather give to the Long-Term Future Fund or the Patient Philanthropy Fund?
Do you think you differ from most longtermist EAs on the “most influential century” debate and, if so, why?
How important do you think Moral Circle Expansion (MCE) is and what do you think are the most promising ways to achieve it?
What do you think is the best objection to longtermism/strong longtermism?
Fanaticism? Cluelessness? Arbitrariness?
How do you think most human lives today compare to the zero wellbeing level?
Relatedly, when it comes to judging lives, I have issues with just asking people how good their lives are, and think a hedonic experience sampling approach may be most appropriate.