Hey! Can’t respond to most of your points now unfortunately, but just a few quick things :) (I’m working on a follow-up piece at the moment and will try to respond to some of your criticisms there.) My central point is the ‘inconsequential in the grand scheme of things’ one you highlight here. This is why I end the essay with this quote:

> If among our aims and ends there is anything conceived in terms of human happiness and misery, then we are bound to judge our actions in terms not only of possible contributions to the happiness of man in a distant future, but also of their more immediate effects. We must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next. Besides, we should never attempt to balance anybody’s misery against somebody else’s happiness.
The “undefined” bit also “proves too much”: it basically says we can’t predict anything ever, when in fact empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy.
Just wanted to flag that I responded to the ‘proving too much’ concern here: Proving Too Much
Very balanced assessment! Nicely done :)
Oops sorry haha neither did I! “this” just meant low-engagement, not your excellent advice about title choice. Updated :)
Hehe taking this as a sign I’m overstaying my welcome. Will finish the last post of the series though and move on :)
You’re correct, in practice you wouldn’t—that’s the ‘instrumentalist’ point made in the latter half of the post
Both actually! See section 6 in Making Ado Without Expectations—unmeasurable sets are one kind of expectation gap (6.2.1) and ‘single-hit’ infinities are another (6.1.2)
Worth highlighting the passage that the “mere ripples” in the title refers to for those skimming the comments:
Referring to events like “Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts [sic], World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS” Bostrom writes that these types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.

Mere ripples! That’s what World War II—including the forced sterilizations mentioned above, the Holocaust that killed 6 million Jews, and the death of some 40 million civilians—is on the Bostromian view. This may sound extremely callous, but there are far more egregious claims of the sort. For example, Bostrom argues that the tiniest reductions in existential risk are morally equivalent to the lives of billions and billions of actual human beings. To illustrate the idea, consider the following forced-choice scenario:

> Bostrom’s altruist: Imagine that you’re sitting in front of two red buttons. If you push the first button, 1 billion living, breathing, actual people will not be electrocuted to death. If you push the second button, you will reduce the probability of an existential catastrophe by a teeny-tiny, barely noticeable, almost negligible amount. Which button should you push?

For Bostrom, the answer is absolutely obvious: you should push the second button! The issue isn’t even close to debatable.
As Bostrom writes in 2013, even if there is “a mere 1 per cent chance” that 10^54 conscious beings living in computer simulations come to exist in the future, then “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.” So, take a billion human lives, multiply it by 100 billion, and what you get is the moral equivalent of reducing existential risk by “one billionth of one billionth of one percentage point,” on the assumption that there is a 1 per cent chance that vast simulations in which 10^54 happy people reside come to exist. This means that, on Bostrom’s view, you would be a grotesque moral monster not to push the second button. Sacrifice those people! Think of all the value that would be lost if you don’t!
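For readers who want to check the arithmetic, the expected-value reasoning in the quoted passage can be sketched in a few lines of Python. This is only an illustration of the calculation as described above, not Bostrom's own derivation; to avoid floating-point trouble at these magnitudes it works in powers of ten.

```python
# Sketch of the expected-value arithmetic described above, working in
# powers of ten (all exponents are integers, e.g. 54 means 10^54).

future_beings = 54    # Bostrom's 10^54 simulated conscious beings
credence = -2         # "a mere 1 per cent chance" = 10^-2
risk_reduction = -20  # one billionth of one billionth of one
                      # percentage point = 10^-9 * 10^-9 * 10^-2

# Expected number of future lives covered by the tiny risk reduction:
expected_lives = future_beings + credence + risk_reduction  # 10^32

# The quote's comparison point: a hundred billion (10^11) times a
# billion (10^9) human lives = 10^20 lives.
comparison = 11 + 9

# The expected value swamps even that enormous figure, which is why,
# on this reasoning, the second button wins by an astronomical margin.
print(expected_lives >= comparison)  # prints True
```

On these numbers the expected value (10^32 lives) comes out even larger than the quoted “hundred billion times a billion” (10^20) figure; on either figure, the billion lives behind the first button are dwarfed.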
Nice yeah Ben and I will be there!
> What is your probability distribution across the size of the future population, provided there is not an existential catastrophe?
>
> Do you for example think there is a more than 50% chance that it is greater than 10 billion?
I don’t have a probability distribution across the size of the future population. That said, I’m happy to interpret the question in the colloquial, non-formal sense, and just take >50% to mean “likely”. In that case, sure, I think it’s likely that the population will exceed 10 billion. But such credences shouldn’t be taken any more seriously than that: they’re epistemologically equivalent to survey questions where the respondent is asked to tick a “very unlikely”, “unlikely”, “unsure”, “likely”, or “very likely” box.
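The survey-box analogy can be made concrete with a toy sketch: collapse any numeric credence into one of the five boxes. The cutoffs below are my own arbitrary choices, purely for illustration.

```python
# Toy illustration of treating credences as coarse survey boxes.
# The cutoffs are arbitrary assumptions, not from the discussion above.

def bucket(credence: float) -> str:
    """Map a numeric credence to a five-point survey label."""
    if credence < 0.1:
        return "very unlikely"
    if credence < 0.5:
        return "unlikely"
    if credence == 0.5:
        return "unsure"
    if credence < 0.9:
        return "likely"
    return "very likely"

# ">50%" in the colloquial sense just means ticking the "likely" box:
print(bucket(0.7))   # likely
print(bucket(0.02))  # very unlikely
```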
Granted, any focus on AI work necessarily reduces the amount of attention going towards near-term issues, which I suppose is your point.
I don’t consider human extermination by AI to be a ‘current problem’ - I think that’s where the disagreement lies. (See my blogpost for further comments on this point)
> (as far as I can tell their entire point is that you can always do an expected value calculation and “ignore all the effects contained in the first 100” years)
Yes, exactly. One can always find some expected value calculation that allows one to ignore present-day suffering. And worse, one can keep doing this between now and eternity, to ignore all suffering forever. We can describe this using the language of “falsifiability” or “irrefutability” or whatever—the word choice doesn’t really matter here. What matters is that this is a very dangerous game to be playing.
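The move described above can be sketched in a few lines: for any amount of present-day suffering, one can always posit a far-future payoff large enough that the expected-value comparison ignores it. The numbers below are arbitrary placeholders, chosen only to show the shape of the move.

```python
# Sketch of the "ignore the first 100 years" move described above.
# All numbers are arbitrary placeholders for illustration.

def ev_prefers_future(present_suffering: float) -> bool:
    """Return True if a posited far-future payoff outweighs the given
    amount of present-day suffering in expectation."""
    tiny_probability = 1e-20   # probability of the speculative payoff
    vast_future_value = 1e54   # posited number of future beings
    return tiny_probability * vast_future_value > present_suffering

# Even suffering on the scale of everyone alive today doesn't change
# the verdict: the posited future always wins.
print(ev_prefers_future(8e9))  # True
```

Because the posited payoff can always be inflated, no finite amount of present suffering ever changes the answer, which is exactly the "between now and eternity" worry above.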
Yikes… now I’m even more worried … :|
> Firstly, you and vadmas seem to assume number 2 is the case.
Oops nope the exact opposite! Couldn’t possibly agree more strongly with:

> Working on current problems allows us to create moral and scientific knowledge that will help us make the long-run future go well
Perfect, love it, spot on. I’d be 100% on board with longtermism if this is what it’s about—hopefully conversations like these can move it there. (Ben makes this point near the end of our podcast conversation fwiw)
> Do you in fact think that knowledge creation has strong intrinsic value? I, and I suspect most EAs, only think knowledge creation is instrumentally valuable.
Well, both. I do think it’s intrinsically valuable to learn about reality, and I support research into fundamental physics, biology, history, mathematics, ethics, etc. for that reason. I think it would be intellectually impoverishing to only support research that has immediate and foreseeable practical benefits. But fortunately knowledge creation also has enormous instrumental value. So it’s not a one-or-the-other thing.
> I don’t see how that gets you out of facing the question
Check out chapter 13 in Beginning of Infinity when you can—everything I was saying in that post is much better explained there :)
Hey Mauricio! Two brief comments -
> Some others are focused on making decisions. From this angle, EV maximization and Bayesian epistemology were never supposed to be frameworks for creating knowledge—they’re frameworks for turning knowledge into decisions, and your arguments don’t seem to be enough for refuting them as such.
Yes agreed, but these two things become intertwined when a philosophy makes people decide to stop creating knowledge. In this case, longtermism prevents the creation of moral and scientific knowledge by grinding the process of error correction to a halt, where “error correction” here means continuously re-evaluating philanthropic organizations based on their near- and medium-term consequences, in order to compare the results obtained against the results expected.
> Vaden offers a promising approach to making decisions, but it just passes the buck on this—we’ll still need an answer to my question when we get to his step 2
Both approaches pass the buck—that’s why I defined ‘creativity’ here to mean: ‘whatever unknown software the brain is running to get out of the infinite regress problem.’ And one doesn’t necessarily need to answer your question, because there’s no requirement that the criticism take EV form (although it can).
Yes! Exactly! Hence why I keep bringing him up :)
> I don’t see how we could predict anything in the future at all (like the sun’s existence or the coin flips that were discussed in other comments). Where is the qualitative difference between short- and long-term predictions?
Haha just gonna keep pointing you to places where Popper writes about this stuff b/c it’s far more comprehensive than anything I could write here :) This question (and the questions re: climate change Max asked in another thread) is the focus of Popper’s book The Poverty of Historicism, where “historicism” here means “any philosophy that tries to make long-term predictions about human society” (i.e. Marxism, fascism, Malthusianism, etc.). I’ve attached a screenshot for proof-of-relevance.
(Ben and I discuss historicism here fwiw.) I have a pdf of this one, dm me if you want a copy :)