Personally I think equating strong longtermism with longtermism is not really correct.
Agree! While I do have problems with (weak?) longtermism, this post is a criticism of strong longtermism :)
If you are agnostic about that, then you must also be agnostic about the value of GiveWell-type stuff
Why? GiveWell charities have developed theories about the effects of various interventions. The theories have been tested and, typically, found to be relatively robust. Of course, there is always more to know, and always ways we could improve the theories.
I don’t see how this relates to not being able to develop a statistical estimate of the probability we go extinct tomorrow. (Of course, I can just give you a number and call it “my belief that we’ll go extinct tomorrow,” but this doesn’t get us anywhere. The question is whether it’s accurate—and what accuracy means in this case.) What would be the parameters of such a model? There are uncountably many things—most of them unknowable—which could affect such an outcome.
Agree with almost all of this. This is why it was tricky to argue against, and also why I say (somewhere? podcast maybe?) that I’m not particularly worried about the current instantiation of longtermism, but what this kind of logic could justify.
I totally agree that most of the existential threats currently tackled by the EA community are real problems (nuclear threats, pandemics, climate change, etc).
I would note that the Greaves and MacAskill paper actually has a section putting forward ‘advancing progress’ as a plausible longtermist intervention!
Yeah—but I found this puzzling. You don’t need longtermism to think this is a priority—so why adopt it? If you instead adopt a problem/knowledge focused ethics, then you get to keep all the good aspects of longtermism (promoting progress, etc), but don’t open yourself up to what (in my view) are its drawbacks. I try to say this in the “Antithesis of Moral Progress” section, but obviously did a terrible job haha :)
I think I agree, but there’s a lot smuggled into the phrase “perfect information on expected value”. So much in fact that I’m not sure I can quite follow the thought experiment.
When I think of “perfect information on expected value”, my first thought is something like a game of roulette. There’s no uncertainty (about what can affect the system), only chance. We understand all the parameters of the system and can write down a model. To say something like this about the future means we would be basically omniscient—we would know what sort of future knowledge will be developed, etc. Is this also what you had in mind?
(To complicate matters, the roulette analogy is imperfect. For a typical game of roulette we can write down a pretty robust probabilistic model. But it’s only a model. We could also study the precise physics of that particular roulette board, model the hand spinning the wheel (is that how roulette works? I don’t even know), take into account the initial position, the toss of the white ball, and so on and so forth. If we spent a long time doing this, we could come up with a model which was more accurate than our basic probabilistic model. This is all to say that models are tools suited to a particular purpose. So it’s unclear to me what model of the future would let us write down the precise probabilities that EV calculations implicitly require.)
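For concreteness, here’s a minimal sketch (in Python, assuming the rules of a standard European wheel) of what I mean by the “basic probabilistic model”: a uniform distribution over 37 pockets, with the physics ignored entirely.

```python
import random

# A minimal sketch of the "basic probabilistic model" of European roulette:
# 37 pockets, each assumed equally likely. The physics of the wheel is
# ignored entirely -- the model is a tool built for one purpose (pricing
# bets), not a description of everything that affects the outcome.
POCKETS = list(range(37))  # 0 through 36
REDS = {1, 3, 5, 7, 9, 12, 14, 16, 18, 19, 21, 23, 25, 27, 30, 32, 34, 36}

def spin():
    """One spin under the uniform model."""
    return random.choice(POCKETS)

def ev_of_red_bet(stake=1.0):
    """Expected value of an even-money bet on red under this model."""
    p_red = len(REDS) / len(POCKETS)  # 18/37
    return p_red * stake - (1 - p_red) * stake

print(spin(), ev_of_red_bet())  # EV is roughly -0.027 per unit staked
```

A painstaking physics-based model of one particular wheel could out-predict this, but it would be answering a different question; neither is “the” model of roulette.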
There are non-measurable sets (unless you discard the axiom of choice, but then you’ll run into some significant problems.) Indeed, the existence of non-measurable sets is the reason for so much of the measure-theoretic formalism.
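(For anyone who wants the standard textbook example, here’s a compressed sketch of the Vitali construction, assuming the axiom of choice:)

```latex
% Sketch of a non-measurable set (the Vitali set), assuming the axiom of choice.
On $[0,1)$, define $x \sim y \iff x - y \in \mathbb{Q}$. Using the axiom of
choice, pick one representative from each equivalence class to form
$V \subseteq [0,1)$. The rational translates
$V_q = \{\, v + q \bmod 1 : v \in V \,\}$, for $q \in \mathbb{Q} \cap [0,1)$,
are pairwise disjoint and cover $[0,1)$. If $V$ were Lebesgue measurable,
translation invariance and countable additivity would force
\[
  1 = \mu\bigl([0,1)\bigr) = \sum_{q \in \mathbb{Q} \cap [0,1)} \mu(V_q)
    = \sum_{q \in \mathbb{Q} \cap [0,1)} \mu(V),
\]
which is $0$ if $\mu(V) = 0$ and $\infty$ otherwise, a contradiction; so $V$
cannot be measurable.
```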
If you’re not taking a measure theoretic approach, and instead using propositions (which I guess, it should be assumed that you are, because this approach grounds Bayesianism), then using infinite sets (which clearly one would have to do if reasoning about all possible futures) leads to paradoxes. As E.T. Jaynes writes in Probability Theory and the Logic of Science:
It is very important to note that our consistency theorems have been established only for probabilities assigned on finite sets of propositions … In laying down this rule of conduct, we are only following the policy that mathematicians from Archimedes to Gauss have considered clearly necessary for nonsense avoidance in all of mathematics. (pg. 43-44).
(Vaden makes this point in the podcast.)
What I meant by this was that I think you and Ben both seem to assume that strong longtermists don’t want to work on near-term problems. I don’t think this is a given (although it is of course fair to say that they’re unlikely to only want to work on near-term problems).
Mostly agree here—this was the reason for some of the (perhaps cryptic) paragraphs in the section “The Antithesis of Moral Progress.” Longtermism erodes our ability to make progress to whatever extent it has us not working on real problems. And, to the extent that it does have us working on real problems, I’m not sure what longtermism is actually adding.
Also, just a nitpick on terminology—I dislike the term “near-term” problems, because it seems to imply that there is a well-defined class of “future” problems that we can choose to work on. As if there were a set of problems, and they could be classified as either short-term or long-term. But the fact is that the only problems are near-term problems; everything else is just a guess about what the future might hold. So I see it less as a choice about which kinds of problems to work on, and more as a choice between working on real problems or conjecturing about future ones; I think the latter is actively harmful.
Thanks AGB, this is helpful.
I agree that longtermism is a core part of the movement, and probably commands a larger share of adherents than I imply. However, I’m not sure to what extent strong longtermism is supported. My sense is that while most people agree with the general thrust of the philosophy, many would be uncomfortable with “ignoring the effects” of the near term, and would remain focused on near-term problems. I didn’t want to claim that a majority of EAs supported longtermism broadly defined, but then only criticize a subset of those views.
I hadn’t seen the results of the EA Survey—fascinating.
Thanks for the engagement!
I think you’re mistaking Bayesian epistemology for Bayesian mathematics. Of course, no one denies Bayes’ theorem. The question is: to what should it be applied? Bayesian epistemology holds that rationality consists in updating your beliefs in accordance with Bayes’ theorem. As this LW post puts it:
Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so.
Next, it’s not that “Bayesianism is the right approach in these fields” (I’m not sure what that means); it’s that Bayesian methods are useful for some problems. But Bayesianism falls short when it comes to explaining how we actually create knowledge. (No amount of updating on evidence + Newtonian mechanics gives you relativity.)
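To be concrete about the distinction, here’s the uncontroversial mathematical part, a single Bayes update, in a minimal Python sketch with numbers invented purely for illustration:

```python
# The uncontroversial mathematical part: one Bayes update.
# All numbers are invented purely for illustration.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(H | E) given P(H), P(E | H) and P(E | not-H)."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# A hypothesis with prior 0.01, and evidence ten times likelier if it is true.
print(bayes_update(0.01, 0.5, 0.05))  # ~0.092
```

What I’m disputing is the epistemological claim that all rational belief revision consists of updates like this, not the arithmetic.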
Despite his popularity among scientists who get given one philosophy of science class.
Love the ad hominem attack.
If you deny that observations confirm scientific theories, then you would have no reason to believe scientific theories which are supported by observational evidence, such as that smoking causes lung cancer.
“Smoking causes lung cancer” is a hypothesis; “smoking does not cause lung cancer” is another. We then discriminate between the hypotheses based on evidence (we falsify incorrect hypotheses). We slowly develop more and more sophisticated explanatory theories of how smoking causes lung cancer, always seeking to falsify them. At any time, we are left with the best explanation of a given phenomenon. This is how falsification works. (I can’t comment on your claim about Popper’s beliefs—but I would be surprised if true. His books are filled with examples of scientific progress.)
If you deny the rationality of induction, then you must be sceptical about all scientific theories that purport to be confirmed by observational evidence.
Yes. Theories are not confirmed by evidence (there’s no number of white swans you can see which confirms that all swans are white. “All swans are white” is a hypothesis, which can be refuted by seeing a black swan); they are falsified by it. Evidence plays the role of discrimination, not confirmation.
Inductive sceptics must hold that if you jumped out of a tenth floor balcony, you would be just as likely to float upwards as fall downwards.
No—because we have explanatory theories telling us why we’ll fall downwards (general relativity). These theories are the only ones which have survived scrutiny, which is why we abide by them. Confirmationism, on the other hand, purports to explain phenomena by appealing to previous evidence. “Why do we fall downwards? Because we fell downwards before.” The sun rising tomorrow morning does not confirm the hypothesis that the sun rises every day. We should not increase our confidence in the sun rising tomorrow because it rose yesterday. Instead, we have a theory about why and when the sun rises when it does (heliocentric model + axis-tilt theory).
Observing additional evidence in favour of a theory should not increase our “credence” in it. Finding confirming evidence of a theory is easy, as evidenced by astrology and ghost stories. The amount of confirmatory evidence for these theories is irrelevant; what matters is whether, and by what, they can be falsified. There are more accounts of people seeing UFOs than there are of people witnessing gamma ray bursts. According to confirmationism, we should thus increase our credence in the former, and have almost none in the existence of the latter.
If you haven’t read this piece on the failure of probabilistic induction to favour one generalization over another, I highly encourage you to do so.
Anyway, happy to continue this debate if you’d like, but that was my primer.
I don’t think the question makes sense. I agree with Vaden’s argument that there’s no well-defined measure over all possible futures.
But we also have to make similar (although less strong) assumptions and have generalization error even with RCTs. Doesn’t GiveWell make similar assumptions about the impacts of most of their recommended charities?
Yes, we do! And the strength of those assumptions is key. Our skepticism should rise in proportion to the number/feasibility of the assumptions. So you’re definitely right, we should always be skeptical of social science research—indeed, any empirical research. We should be looking for hasty generalizations, gaps in the analysis, methodological errors etc., and always pushing to do more research. But there’s a massive difference between the assumptions driving GiveWell’s models and the assumptions required in the nuclear threat example.
Why are probabilities prior to action—why are they so fundamental? Could Andrew Wiles “rationally put probabilities” on him solving Fermat’s Last Theorem? Does this mean he shouldn’t have worked on it? Arguments do not have to be in number form.
Sure—nukes exist. They’ve been deployed before, and we know they have incredible destructive power. We know that many countries have them, and have threatened to use them. We know the protocols are in place for their use.
Hi Michael!
It seems like you’re acting as if you’re confident that the number of people in the future is not huge, or that the interventions are otherwise not so impactful (or they do more harm than good), but I’m not sure you actually believe this. Do you?
I have no idea about the number of future people. And I think this is the only defensible position. Which interventions do you mean? My argument is that longtermism enables reasoning that de-prioritizes current problems in favour of possible, highly uncertain, future problems. Focusing on such problems prohibits us from making actual progress.
It sounds like you’re skeptical of AI safety work, but it also seems what you’re proposing is that we should be unwilling to commit to beliefs on some questions (like the number of people in the future), and then deprioritize longtermism as a result, but, again, doing so means acting as if we’re committed to beliefs that would make us pessimistic about longtermism.
I’m not quite sure I’m following this criticism, but I think it can be paraphrased as: You refuse to commit to a belief about x, but commit to one about y and that’s inconsistent. (Happy to revise if this is unfair.) I don’t think I agree—would you commit to a belief about what Genghis Khan was thinking on his 17th birthday? Some things are unknowable, and that’s okay. Ignorance is par for the course. We don’t need to pretend otherwise. Instead, we need a philosophy which is robust to uncertainty which, as I’ve argued, is one that focuses on correcting mistakes and solving the problems in front of us.
I think you do need to entertain arbitrary probabilities
… but they’d be arbitrary, so by definition don’t tell us anything about the world?
how do we decide between human-focused charities and animal charities, given the pretty arbitrary nature of assigning consciousness probabilities to nonhuman animals and the very arbitrary nature of assigning intensities of suffering to nonhuman animals?
This is of course a difficult question. But I don’t think the answer is to assign arbitrary numbers to the consciousness of animals. We can’t pull knowledge out of a hat, even using the most complex maths possible. We have theories of neurophysiology, and while none of them conclusively tells us that animals definitely feel pain, I think that’s the best explanation of our current observations. So, acknowledging this, we are in a situation where billions of animals needlessly suffer every year according to our best theory. And that’s a massive, horrendous tragedy—one that we should be fighting hard to stop. Assigning credences to the consciousness of animals just so we can start comparing this to other cause areas is pretending to knowledge we don’t have.
Oh interesting. Did you read my critique as saying that the philosophy is wrong? (Not sarcastic; serious question.) I don’t really even know what “wrong” would mean here, honestly. I think the reasoning is flawed and if taken seriously leads to bad consequences.
Yeah, I suppose I would still be skeptical of using ranges in the absence of data (you could just apply all my objections to the upper and lower bounds of the range). But I’m definitely all for sensitivity analysis when there are data backing up the estimates!
I have read about (complex) cluelessness. I have a lot of respect for Hilary Greaves, but I don’t think cluelessness is a particularly illuminating concept. I view it as a variant of “we can’t predict the future.” So, naturally, if you ground your ethics in expected value calculations over the long-term future then, well, there are going to be problems.
I would propose to resolve cluelessness as follows: Let’s admit we can’t predict the future. Our focus should instead be on error-correction. Our actions will have consequences—both intended and unintended, good and bad. The best we can do is foster a critical, rational environment where we can discuss the negative consequences, solve them, and repeat. (I know this answer will sound glib, but I’m quite sincere.)
Hey Fin! Nice—lots here. I’ll respond to what I can. If I miss anything crucial just yell at me :) (BTW, also enjoying your podcast. Maybe we should have a podcast battle at some point … you can defend longtermism’s honour).
In any case: declaring that BE “has been refuted” seems unfairly rash.
Yep, this is fair. I’m imagining myself in the position of some random stranger outside of a fancy EA-gala, and trying to get people’s attention. So yes—the language might be a little strong (although I do really think Bayesianism doesn’t stand up to scrutiny if you drill down on it).
On the first point, it feels more accurate to say that these numbers are highly uncertain rather than totally arbitrary.
Sure, guessing that there will be between 1 billion and 1000 quadrillion people in the future is probably a better estimate than 1000 people. But it still leaves open a discomfortingly huge range. Greaves and MacAskill could easily have used half a quadrillion people, or 10 quadrillion people. Instead of trying to wrestle with this uncertainty, which is fruitless, we should just acknowledge that we can’t know and stop trying.
If it turned out that space colonisation was practically impossible, the ceiling would fall down on estimates for the size of humanity’s future. So there’s some information to go on — just very little.
Bit of a nitpick here, but space colonization isn’t prohibited by the laws of physics, so it can only be “practically impossible” based on our current knowledge. It’s just a problem to be solved. So this particular example couldn’t bring down the curtains on our expected value calculations.
Really? If you’re a rationalist (in the broad Popperian sense and the internet-cult sense), and we share common knowledge of each other’s beliefs, then shouldn’t we be able to argue towards closer agreement?
I don’t think so. There’s no data on the problem, so there’s nothing to adjudicate between our disagreements. We can honestly try this if you want. What’s your credence?
Now, even if we could converge on some number, what’s the reason for thinking that number captures any aspect of reality? Most academics were sympathetic to communism before it was tried; most physicists thought Einstein was wrong.
You can use bigger numbers in the sense that you can type extra zeroes on your keyboard, but you can’t use bigger numbers if you care about making sure your numbers fall reasonably in line with the available facts, right?
What are the available facts when it comes to the size of the future? There’s a reason these estimates are wildly different across papers: From 10^15 here, to 10^68 (or something) from Bostrom, and everything in between. I’m gonna add mine in: 10^124 + 3.
The response is presumably: “sure, this guess is hugely uncertain. But better to give some number rather than none, and any number I pick is going to seem too precise to you. Crucially, I’m trying to represent something about my own beliefs — not that I know something precise about the actual world.”
Agree that this is probably the response. But then we need to be clear that these estimates aren’t saying “anything precise about the actual world.” They should be treated completely differently than estimates based on actual data. But they’re not. When Greaves and MacAskill compare how many lives are saved by donating to AI safety versus the AMF, they compare these numbers as if they were equally reliable and equally capable of capturing something about reality.
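To illustrate why this worries me, here’s a toy calculation in Python with numbers I have invented entirely (they are not Greaves and MacAskill’s figures): once an astronomical population estimate gets multiplied by a subjectively chosen probability, the product swamps any estimate that is actually grounded in data.

```python
# Toy numbers, invented for illustration only -- not taken from any paper.
future_people = 1e15            # a "modest" guess at the size of the future
p_catastrophe_averted = 1e-10   # a subjectively chosen, tiny probability
ev_speculative = future_people * p_catastrophe_averted  # 100,000 lives

donation = 1_000_000
cost_per_life_data_backed = 5_000   # assumed rough order of magnitude
ev_data_backed = donation / cost_per_life_data_backed   # 200 lives

print(ev_speculative, ev_data_backed)
# The invented number wins by orders of magnitude, and it can be made to win
# by as many orders of magnitude as you like by adjusting the guesses.
```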
Where there’s lots of empirical evidence, there should be little daylight between your subjective credences and the probabilities that fall straight out of the ‘actual data’.
There should be no daylight. Whatever daylight there is would have to be a result of purely subjective beliefs, and we shouldn’t lend this any credibility. It doesn’t belong alongside an actual statistical estimate.
However, if you agree that subjective credences are applicable to innocuous ‘short-term’ situations with plenty of ‘data’, then you can imagine gradually pushing the time horizon (or some other source of uncertainty) all the way to questions about the very long-run future.
I think the above also answers this? Subjective credences aren’t applicable to short term situations. (Again, when I say “subjective” there’s an implied “and based on no data”).
Isn’t it the case that strong longtermism makes knowledge creation and accelerating progress seem more valuable, if anything? And would the world really generate less knowledge, or progress at a slower rate, if the EA community shifted priorities in a longtermist direction?
I’ve seen arguments to the contrary. Here for instance:
I spoke to one EA who made an argument against slowing down AGI development that I think is basically indefensible: that doing so would slow the development of machine learning-based technology that is likely to lead to massive benefits in the short/medium term. But by the own arguments of the AI-focused EAs, the far future effects of AGI dominate all other considerations by orders of magnitude. If that’s the case, then getting it right should be the absolute top priority, and virtually everyone agrees (I think) that the sooner AGI is developed, the higher the likelihood that we were ill prepared and that something will go horribly wrong. So, it seems clear that if we can take steps to effectively slow down AGI development we should.
There’s also the quote by Toby Ord (I think?) that goes something like: “We’ve grown technologically mature without acquiring the commensurate wisdom.” I take the implication here to be that we should stop developing technology and wait for our wisdom to catch up. But this misses how wisdom is generated in the first place: by solving problems.
When you believe the fate of an untold number of future people is on the line, then you can justify almost anything in the present. This is what I find so disturbing about longtermism. I find many of the responses to my critique say things like: “Look, longtermism doesn’t mean we should throw out concern for the present, or be focused on problem-solving and knowledge creation, or continue improving our ethics”. But you can get those things without appealing to longtermism. What does longtermism buy you that other philosophies don’t, except for headaches when trying to deal with insanely big numbers? I see a lot of downsides, and no benefits that aren’t there in other philosophies. (Okay, harsh words to end, I know—but if anyone is still reading at this point I’m surprised ;) )
I’m tempted to just concede this because we’re very close to agreement here.
For example we need to wrestle with problems we face today to give us good enough feedback loops to make substantial progress, but by taking the long-term perspective we can improve our judgement about which of the nearer-term problems should be highest-priority.
If this turns out to be true (i.e., people end up working on actual problems and not, say, defunding the AMF to worry about “AI controlled police and armies”), then I have much less of a problem with longtermism. People can use whatever method they want to decide which problems they want to work on (I’ll leave the prioritization to 80K :) ).
I actually think that in the longtermist ideal world (where everyone is on board with longtermism) that over 90% of attention—perhaps over 99% -- would go to things that look like problems already.
Just apply my critique to the x% of attention that’s spent worrying about non-problems. (Admittedly, of course, this world is better than the one where 100% of attention is on non-existent possible future problems.)
Hi Linch!
I’d rather not rely on the authority of past performance to gauge whether someone’s arguments are good. I think we should evaluate the arguments directly. If they’re good, they’ll stand on their own regardless of someone’s prior luck/circumstance/personality.
I would actually argue that it’s the opposite of epistemic anarchy. Admitting that we can’t know the unknowable changes our decision calculus: Instead of focusing on making the optimal decision, we recognize that all decisions will have unintended negative consequences which we’ll have to correct. Fostering an environment of criticism and error-correction becomes paramount.
I’d disagree. I think trying to place probabilities on inherently unknowable events lends us a false sense of security.
(All said with a smile of course :) )