Evidence, cluelessness, and the long term—Hilary Greaves
Hilary Greaves is a professor of philosophy at the University of Oxford and the Director of the Global Priorities Institute. This talk was delivered at the Effective Altruism Student Summit in October 2020.
This transcript has been lightly edited for clarity.
Introduction
My talk has three parts. In part one, I’ll talk about three of the basic canons of effective altruism, as I think most people understand them: effectiveness, cost-effectiveness, and the value of evidence.
In part two, I’ll talk about the limits of evidence. It’s really important to pay attention to evidence, if you want to know what works. But a problem we face is that evidence can only go so far. In particular, I argue in the second part of my talk that most of the stuff that we ought to care about is necessarily stuff that we basically have no evidence for. This generates the problem that I call ‘cluelessness’.
And in the third part of my talk, I’ll discuss how we might respond to this fact. I don’t know the answer and this is something that I struggle with a lot myself, but what I will do in the third part of the talk is I’ll lay out five possible responses and I’ll at least tell you what I think about each of those possible responses.
Part one: effectiveness, cost-effectiveness, and the importance of evidence.
Effectiveness
So firstly, then, effectiveness. It’s a familiar point in discussions of effective altruism and elsewhere that even most well-intentioned interventions don’t in fact work at all, or in some cases, they even do more harm than good, on net.
One example (which may be familiar to many of you already) is that of Playpumps. Playpumps were supposed to be a novel way of improving access to clean water across rural Africa. The idea is that instead of the village women laboriously pumping the water by hand themselves, you harness the energy and enthusiasm of youth to get children to play on a roundabout; and the turning of the roundabout is what pumps the water.
This perhaps seemed like a great idea at the time, and millions of dollars were spent rolling out thousands of these pumps across Africa. But we now know that, well intentioned though it was, this intervention does more harm than good. The Playpumps are inferior to the original hand pumps that they replaced.
For another example, one might be concerned to increase school attendance in poor rural areas. To do that, one starts thinking about: “Well, what might be the reasons children aren’t going to school in those areas?” And there are lots of things you might think about: maybe because they’re so poor they’re staying home to work for the family instead, in which case perhaps sponsoring a child so they don’t have to do that would help. Maybe they can’t afford the school uniform. Maybe they’re teenage girls and they’re too embarrassed to go to school if they’ve got their period because they don’t have access to adequate sanitary products. There could be lots of things.
But let’s seize on that last one, which seems like a plausible thing. Maybe their period is what’s keeping many teenage girls away from school. If so, then one might very well think distributing free sanitary products would be a cost-effective way of increasing school attendance. But at least in one study, this too turns out to have zero net effect on the intended outcome. It has zero net effect on child years spent in school. That’s maybe surprising, but that’s what the evidence seems to be telling us. So many well-intentioned interventions turn out not to work.
Cost-effectiveness
Secondly, though, comes cost-effectiveness: even amongst the interventions that do work, there’s an enormous variation in how well they work.
If you have a fixed sized pot of altruistic resources, which all of us do (nobody has infinite resources), then you face the question of how to do the most good that you can per dollar of your resources. And so you need to know about cost-effectiveness. You need to know about which of the possible interventions that you might fund with your altruistic dollars will do the most good, per dollar.
And even within a given cause area, for example, within the arena of global health, we typically see a cost-effectiveness distribution like the one in this graph.
So this is a graph for global health. Most interventions don’t work very well, if at all. They’re bunched down there on the left hand side of the graph. But if you choose carefully, one can find things that are many hundreds of times more cost-effective than the median intervention. So if you want to do the most good with your fixed pot of resources, it’s crucial, then, to focus not only on what works at all, but also on what works best.
The importance of evidence
This then leads us naturally onto the third point: the importance of evidence.
The world is a complicated place. It’s very hard to know a priori which interventions are going to cause which outcomes. We don’t know all the factors that are in play, particularly if we’re going in as foreigners to try and intervene in what’s going on in a different country.
And so if you want to know what actually works, you have to pay close attention to the evidence. Ideally, perhaps, randomised controlled trials. This is analogous to a revolution that has taken place in medicine, to the great benefit of the world, over the past 50 years or so. We have moved away from a paradigm in which treatments were decided mostly on the basis of the experience and intuition of the individual medical practitioner, and much more towards evidence-based medicine, where treatment decisions are backed up by careful attention to randomised controlled trials.
Much more recently, in the past ten or fifteen years or so, we’ve seen an analogous revolution in the altruistic enterprise spearheaded by such organisations as GiveWell, which pay close attention to randomised controlled trials to establish what works in the field of altruistic endeavour.
This is a great achievement and nothing in my talk is supposed to move away from the basic observation that this is a great achievement. Indeed, my own personal journey with effective altruism started when I realised that there were organisations like GiveWell doing this.
(The organisers of this conference asked me to try and find a photo of myself as a student. I’m not sure that digital photography had actually been invented yet when I was a student. So all I have along those lines is some negatives lying up in my loft somewhere. But anyway, here’s a photo of me as a relatively youthful college tutor, perhaps ten or fifteen years ago.)
I was at dinner in my college with one of my students, discussing the usual old chestnut worries about aid not working: culture of dependency, wastage and so forth. And I mentioned that, like the rest of us, I feel my middle-class guilt: as a rich Westerner, I feel I really should be trying to do something with some of my resources to make the world better.
But I was so plagued by these worries about ineffectiveness that I basically wasn’t donating more than 10 or 20 pounds a month at that point. And it was when my student turned round to me and said, basically: GiveWell exists; there are people who have paid serious attention to the evidence, thought it all through, and written up their research; you can actually be pretty confident of what works, if you just read this website. That, for me, was the turning point. That was where I started feeling “OK, I now feel sufficiently confident that I’m willing to sacrifice 10 percent of my salary or whatever it may be”.
And again, that observation is still very important for me. Nothing in this talk is meant to be backing away from that. It’s important to highlight that because it’s going to sound as though I am backing away from that in what follows. What I want to do is share with you some worries that I think we should all face up to.
Part two: the limits of evidence
So here we go. Part two: the limits of evidence.
In what I’ll call a ‘simple cost-effectiveness analysis’, one only measures the immediate intended effect of one’s intervention. So, for example, if one’s talking about water pumps, you might have a cost-effectiveness analysis that tries to calculate how many litres of dirty water consumption are replaced by litres of clean water consumption per dollar spent on the intervention. If we’re talking about distributing insecticide treated bed nets in malarial regions, then we might be looking at data that tells us how many deaths are averted per dollar spent on bed net distribution. If it’s child years spent in school, well, then the question is by how much do we increase child years spent in school, per dollar spent on whatever intervention it might be.
Once you’ve answered that question, then in the usual model, you go about doing two kinds of comparison. You do your intra-cause comparison. That is to say, insofar as our focus is (for example) child years spent in school, which intervention increases that thing the most, per dollar donated?
And of course, since we also want to know whether we should be focusing on child years spent in school or instead on something else like water consumption, we want to do cross-cause comparisons which tell us—on the basis of some admittedly much trickier but reasonable, well thought through theoretical model—how we should trade off additional child years spent in school against improvements in clean water consumption. How many litres increase in clean water consumption is equivalent from the point of view of good done to an increase of, say, one child year spent in school?
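As a minimal sketch of how such a cross-cause comparison might work (the numbers and the moral weight here are purely hypothetical, not drawn from the talk or from GiveWell), suppose intervention A buys one additional child year of schooling per $10 and intervention B buys 1,000 additional litres of clean water consumption per $10, and suppose one’s theoretical model says one child year of schooling is about as good as w litres of clean water. Then, per dollar,

\frac{\text{good done by A}}{\text{good done by B}} = \frac{1 \cdot w}{1000},

so A beats B just in case w > 1000; the hard theoretical work lies in defending any particular value of w.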
Knock-on effects and side effects
Let’s suppose we can do all those things (there are questions about how you do it, particularly in the case of cross-cause comparisons, but those are not the focus of my talk). What I want to focus on here is what’s left out by those simple cost-effectiveness analyses. There are two kinds of effects of our interventions that aren’t counted, if we just do the kind of thing that I described on the previous slide.
There’s what I’ll call ‘knock-on effects’, or perhaps sometimes called ‘flow-through effects’, on the one hand; and then there are side effects. Knock-on effects are effects that are causally downstream of the intended effect. So you have some intervention whose intended effect is, say, an increase in child years spent in school. Increasing child years spent in school itself has further downstream consequences not included in the basic calculation. It has downstream consequences, for example, on future economic prosperity. Perhaps it has downstream consequences on the future political setup in the country.
There are also side effects. These are effects of the intervention that don’t go via the intended effect; they have some other causal route. For example, in the context of things like provision of healthcare services by Western-funded charities, many people have worried that having rich Westerners come in and fund frontline health services via charities might decrease the tendency of the local population to lobby their own governments for adequate health services. And so this well-intentioned provision of healthcare might have adverse political consequences.
Now, in both of these cases, both in the case of the knock-on effects and in the case of the side effects, we have effects rippling on, in principle, down the centuries, even down the millennia.
So in terms of this picture, if you like, the paddleboard in the foreground represents the intended effect. You can have some effect on that part of the river immediately. That’s the bit that we’re measuring in our simple cost-effectiveness analysis. But in principle, in both the cases of knock-on effects and in the cases of side effects, there are also effects further on into the distant parts of that river, and even over there in the distant mountains that we can only dimly see.
Cluelessness
OK, so there are all these unmeasured effects not included in our simple cost-effectiveness analysis. I want to make three observations about those unmeasured effects. Firstly, I’ll claim (and I’ll say more about it in a minute) that the unmeasured effects are almost certainly greater in aggregate than the measured effects. And I don’t just mean that this is likely to be the case ex post; I mean that, according to reasonable credences, even in terms of expected value the unmeasured effects are likely to dominate the calculation, if you’re trying to calculate (even in expected terms) all of the effects of your intervention.
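To put that first observation in symbols (a minimal sketch, not something from the talk’s slides): writing the total value of an intervention as the sum of its measured and unmeasured components,

\mathbb{E}[V_{\text{total}}] = \mathbb{E}[V_{\text{measured}}] + \mathbb{E}[V_{\text{unmeasured}}],

the claim is that on reasonable credences the second term is much larger in absolute value than the first, so an estimate of the first term alone may tell us little about the sign or size of the total.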
The second observation is that these further future (causally downstream or otherwise) events are much harder to estimate. In fact, they’re really hard to estimate; they’re much harder to estimate, anyway, than the near-term effects. That’s because, for example, you can’t do a randomised controlled trial to ascertain what the effect of your intervention is going to be in 100 years. You don’t have that long to wait.
The third observation is that even these further future and relatively unforeseeable effects, in principle, matter from an altruistic point of view just as much as the near-term effects. The mere fact that they’re remote in time shouldn’t mean that we don’t care about them. If you need convincing on that point, here’s a little thought experiment. Suppose you had in front of you right now a red button, and suppose for the sake of argument you knew (never mind how) that the effect of your pressing this button here and now would be a nuclear explosion going off in two thousand years’ time, killing millions of people. I take it you would have overwhelming moral reason, if you knew that were the case, not to press the red button. So what that thought experiment is supposed to show is that the mere fact that these people (the hypothetical victims of your button pressing) are remote from you in time, and that you have no other personal connection to them, doesn’t diminish the moral significance of the effects.
What do we get when we put all those three observations together? Well, what I get is a deep seated worry about the extent to which it really makes sense to be guided by cost-effectiveness analyses of the kinds that are provided by meta-charities like GiveWell. If what we have is a cost-effectiveness analysis that focuses on a tiny part of the thing we care about, and if we basically know that the real calculation—the one we actually care about—is going to be swamped by this further future stuff that hasn’t been included in the cost-effectiveness analysis; how confident should we be really that the cost-effectiveness analysis we’ve got is any decent guide at all to how we should be spending our money? That’s the worry that I call ‘cluelessness’. We might feel clueless about how to spend money even after reading GiveWell’s website.
Five possible responses to cluelessness
So there’s the worry. And now let me sketch five possible responses to that worry. The first one I mention only to set aside. The other four I want to take at least somewhat seriously in each case.
Response one: Make the analysis more sophisticated
So the response I want to set aside is the thought that “maybe all this shows that we need to make the cost-effectiveness analysis a little bit more sophisticated”. If the problem was that our cost-effectiveness analysis of, say, bed net distribution only counted deaths averted, and we also cared about things like effects on economic prosperity in the next generation and political effects and so forth, doesn’t that just show (the thought might run) that we need to make our analysis more complicated so that it includes those things as well?
Well, that’s certainly an improvement, and very much to their credit this is something that GiveWell has done. If you go to their website, you can download their cost-effectiveness analyses back as far as 2012, and for every year since then. In particular, if you look at the analyses for the Against Malaria Foundation (one of the top charities, which distributes insecticide-treated bed nets in malarial regions), you’ll see that the 2012 analysis basically just counts deaths averted in children under five, whereas the 2020 analysis includes a whole host of things beyond that. It includes morbidity effects, that is, the effects of illness from non-fatal cases of malaria. It includes effects on the prevention of stillbirths. It includes prevention of diseases other than malaria. And it includes reductions in treatment costs: if fewer people are getting sick, then there’s less burden on the health service. Those are all things that might increase the cost-effectiveness of bed net distribution relative to the simple cost-effectiveness analysis. The analysis also includes some things that might decrease it, for example, decreases in immunity to malaria resulting from the intervention and increases in insecticide resistance in the mosquitoes.
So that’s definitely progress and GiveWell is very much to be applauded for having done this. But from the point of view of the thing that I’m worrying about in this talk it’s not really a solution. It only relatively slightly shifts the boundary between the things that we know about and the things that we’re clueless about. That is, it’s still going to be the case, even after you’ve done the most complicated, remotely plausible cost-effectiveness analysis, that you’ve said basically nothing about, say, effects on population size down the generations.
It’s perhaps worth pausing a bit on this point. Why do I still feel, even given the 2020 GiveWell analysis for AMF, that most of the things I care about, even in expected value terms, have been left out of the calculation?
Well, an easy way of seeing this is to consider, in particular, the case of population size. Okay, so, I fund some bed nets. Suppose that saves a life in the current generation. I can be pretty sure that one way or another, saving a life in the current generation is going to have an effect on population size in the next generation. Maybe it increases future population because, look, here’s an additional person who’s going to survive to adulthood. Statistically speaking, that person is likely to go on to have children. Maybe it actually decreases future population because there are well known correlations between reductions in child mortality rate and reductions in fertility. But either way, it seems very plausible that once I’ve done my research, then the expected effect on future population size will be non-zero.
But now let’s think about how long the future of humanity hopefully is. It’s not going to be just one further future generation. Nor is it going to be just two. At least, hopefully, if all goes well, there are thousands of future generations. And so it seems extremely unlikely that the mere 60 (or so) life years I can gain in the life of the person whose premature death my bed net distribution has averted are going to add up to more, in value terms overall, than all those effects on population size I have down the millennia.
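To make the rough arithmetic explicit, with entirely hypothetical numbers chosen only to illustrate the orders of magnitude: suppose saving one life now shifts expected population by just 0.1 persons in each of the next 1,000 generations, and value each such life at roughly 60 life years. Then the far-future term is about

0.1 \times 1000 \times 60 = 6{,}000 \text{ life years},

which dwarfs the roughly 60 life years directly gained, even before we ask whether that far-future term is positive or negative.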
Now, I don’t know whether the further future population size effects are good or bad. That’s for two reasons. Firstly, I don’t know whether I’m going to increase or decrease future population. And secondly, even if I did, even if I knew, let’s say for the sake of argument, that I was going to be increasing future population size, I don’t know whether that’s going to be good or bad. There are very complicated questions here. I don’t know what the effect is of increasing population size on economic growth. I don’t know what the effect is on tendencies towards peace and cooperation versus conflict. And crucially, I don’t know what the effect is of increasing population size on the size of existential risks faced by humanity (that is, chances that something might go really catastrophically wrong, either wiping out the human race entirely, or destroying most of the value in the future of human civilisation). So, what I can be pretty sure about is that once I’ve thought things through, there will be a non-zero expected value effect in that further future; and that will dominate the calculation. But at the moment, at least, I feel thoroughly clueless about even the sign, never mind the magnitude, of those further future effects.
Okay, so the take home point from this slide is: sure, you can try and make your cost-effectiveness analysis more sophisticated and that’s a good thing to do—I very much applaud it—but, it’s not going to solve the problem I’m worrying about at the moment.
So, that’s the response I want to set aside. Let me tell you about the other four.
Response two: Give up the effective altruist enterprise
Second response: give up the effective altruist enterprise. This, I think, is a very common reaction indeed. I think, anecdotally, many people refrain from getting engaged in Effective Altruism in the first place because of worries like the ones I’m talking about in this talk—worries about cluelessness.
The line of thought would run something like this: look, when I was that college tutor, having that conversation with that student, when I felt really confident that I could be doing significant amounts of good per dollar donated, that was what motivated me to make big personal sacrifices in material terms to start giving away significant fractions of my salary. But if cluelessness worries have now undermined that, I no longer feel I have that certainty. Why then would I be donating 10 percent, 20 percent, 50 percent, or whatever, on something that I feel really, really clueless about, knowing that I could instead (say) be paying off my mortgage?
Okay, so I want to lay this response on the table, because it’s an important one. It’s an understandable one. It’s a common one. And it shouldn’t be just shamed out of the conversation. My own tentative view, and certainly my hope, is that this isn’t the right response. But for the rest of the talk, I’ll set that aside.
Response three: Make bolder estimates
What other responses might there be? The third response is to make bolder estimates. This picks up on the thread left hanging by that first response. The first response was: make the cost-effectiveness analysis a little bit more sophisticated. In this third response—making bolder estimates—the idea is: let’s do the uber-analysis that really includes everything we care about down to the end of time.
So recall, two sections ago, I was worrying about distant future effects on population size and the value of changes to future population size. I said there were lots of difficult questions here. But in principle, one can build a model that takes account of all of those things. One could input into the model one’s best guesses about the sign of the effects on future population size and about the sign and the magnitude of the value of a given change to future population size. Of course, in doing so, one would have to be making some extremely bold estimates, and have to take a stand on some controversial questions. They’d be questions where there’s relatively little guidance from evidence, and one feels much more that one’s guessing. But if this is what we’ve got to do in order to make well thought through funding decisions, perhaps this is just what we’ve got to do, and we should get on with doing it.
Well, I think there are probably some people in the effective altruist community who are comfortable with doing that. But for my own part, I want to confess to some profound discomfort. To bring out why I feel that discomfort, I think it’s helpful to think about both intra-personal (so, inside my own head) issues that I face when I contemplate doing this analysis and also about inter-personal issues.
The intra-personal issue is this: Okay, so I tried doing this uber-analysis; I come up with my best guess about the sign of the effect on future population and so forth; and I put that into my analysis. Suppose the result is I think funding bed nets is robustly good because it robustly increases future population size, and that in turn is robustly good.
Suppose that’s my personal uber-analysis. I’m not going to be able to shake the feeling that when I wrote down that particular uber-analysis, I had to make some really arbitrary decisions. It was pretty arbitrary, perhaps, that I came down on the side of increasing population size being good rather than bad. I didn’t really have any idea; I just felt like I had to make a guess for the purpose of the analysis. And so here I am, having reached the conclusion that I should be spending, say, 20 percent of my salary on increasing future population size via bed nets or otherwise. But I know at the back of my mind, if I’m honest with myself, that I could equally well have made the opposite arbitrary choice and chosen the estimate that said increasing future population size is bad, in which case I should instead be spending 20 percent of my salary on decreasing future population size. So the cluelessness worry here is: How confident can I feel? How sensible can I feel going all out to increase future population size (perhaps via bed nets or, more plausibly, via some other route) when I know that the thing that led me to choose that conclusion rather than the opposite one was really arbitrary?
The inter-personal point is closely related. Suppose I choose to go all out on increasing future population size, and you choose to go all out on decreasing future population size. So here we both are, giving away such and such proportion of our salary to our chosen, supposedly altruistic, enterprises. But the two of us are just directly working against one another. We’re cancelling one another out. We would have done something much more productive if we got together and had a conversation and perhaps together decided to instead fund some third thing that at least the two of us could agree upon.
Response four: Ignore things that we can’t even estimate
Fourth response: Ignore things that we can’t even estimate. This one, too, I think is a very understandable response (at least psychologically), although to me it doesn’t seem the right one. I’ll say a little bit about that here; I’ve said more in print, for example, in the paper cited on this slide.
So the idea would be this: Okay, let’s consider the most sophisticated, plausible cost-effectiveness analysis. So we have some cost-effectiveness analysis, perhaps like the GiveWell 2020 analysis. It’s not the uber-analysis where we’ve gone crazy and started making guesses for things that we really have no clue about. It stopped at the point where we’re making some educated guesses and we can also do our sensitivity analysis to check that our important conclusions are not too sensitive to reasonable variations in the input parameters for this medium complexity cost-effectiveness model. Then the thought would be: what we should do is base our funding decisions on cost-effectiveness analyses of that type, just because it’s the best that we can do. So, if you like, we should look under the lamppost and ignore the darkness just because we can’t see into the darkness.
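To make concrete what a sensitivity analysis on a medium-complexity model might look like, here is a minimal sketch in Python. The model structure, parameter names and numbers are hypothetical placeholders (this is not GiveWell’s actual model); the point is only the idea of varying inputs one at a time and checking whether the headline conclusion moves much.

```python
# Minimal sketch of a one-at-a-time sensitivity analysis on a toy
# cost-effectiveness model. All parameters and numbers are hypothetical.

def cost_per_death_averted(cost_per_net, nets_per_person,
                           baseline_mortality, mortality_reduction):
    """Toy model: dollars spent per death averted by bed net distribution."""
    cost_per_person = cost_per_net * nets_per_person
    deaths_averted_per_person = baseline_mortality * mortality_reduction
    return cost_per_person / deaths_averted_per_person

# Hypothetical central estimates and plausible ranges for each input.
central = dict(cost_per_net=5.0, nets_per_person=1.8,
               baseline_mortality=0.004, mortality_reduction=0.17)
ranges = dict(cost_per_net=(4.0, 7.0), nets_per_person=(1.5, 2.2),
              baseline_mortality=(0.002, 0.006), mortality_reduction=(0.10, 0.25))

print(f"central estimate: ${cost_per_death_averted(**central):,.0f} per death averted")

# Vary one parameter at a time over its range, holding the others fixed,
# to see which assumptions the conclusion is most sensitive to.
for name, (low, high) in ranges.items():
    for value in (low, high):
        params = dict(central, **{name: value})
        estimate = cost_per_death_averted(**params)
        print(f"  {name} = {value}: ${estimate:,.0f} per death averted")
```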
So, again, perhaps like the second response, this is one that I understand. I don’t think it’s right. I do think it’s very tempting, though. And for the purpose of this talk, I just want to lay it out there as an option.
Response five: “Go longtermist”
Finally, the response that’s probably my favourite one and the one that I’m personally most inclined towards. One might be driven by considerations of cluelessness to “go longtermist”, as it were. Let me say a bit more about what I mean by that. As many of you will probably be aware, there’s something of a division in the effective altruist community on the question of: In what cause area do there exist, in the world as we find it today, the most cost-effective opportunities to do good? In which cause area can you do the most good per dollar spent? Some people think the answer is global poverty, health and development. Some people think the answer is animal welfare. And a third contingent thinks the answer is what I’ll call ‘longtermism’, trying to beneficially influence the course of the very far future of humanity and more generally of the planets in the universe.
Considerations of cluelessness are often taken to be an objection to longtermism because, of course, it’s very hard to know what’s going to beneficially influence the course of the very far future on timescales of centuries and millennia. Again, we still have the point that we can’t do randomised controlled trials on those timescales.
However, what my own journey through thinking about cluelessness has convinced me, tentatively, is that that’s precisely the wrong conclusion. And in fact, considerations of cluelessness favour longtermism rather than undermining it.
Why would that be? Well, what seems to me to emerge from the discussion of interventions like funding bed nets is, firstly, we think the majority of the value of funding things like bed net distribution comes from their further future effects. However, in the case of interventions like that, we find ourselves really clueless about not only the magnitude, but even the sign of the value of those further future effects. This then raises the question of whether we might choose our interventions more carefully if we care in principle about all the effects of our actions until the end of time. But we’re clueless about what most of those effects are for things like bed net distribution.
Perhaps we could find some other interventions for which that’s the case to a much lesser extent. If we deliberately try to beneficially influence the course of the very far future, can we find things where we more robustly have at least some clue that what we’re doing is beneficial and of how beneficial it is? I think the answer is yes.
And if we want to know what kinds of interventions might have that property, we just need to look at what people in the effective altruist community do in fact fund when they’re convinced that longtermism is the best cause area. They’re typically things like reducing the chance of premature human extinction, the thought being that if you can reduce the probability of premature human extinction, even by a tiny little bit, then in expected value terms, given the potential size of the future of humanity, that’s going to be enormously valuable. (This has been argued forcefully by Nick Beckstead and by Nick Bostrom. Will MacAskill and I canvass some of the same arguments in our own recent paper.)
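The expected value arithmetic behind that thought can be sketched with purely hypothetical numbers (the structure of the argument is Beckstead’s and Bostrom’s; the specific figures below are illustrative only): if the expected number of future lives, conditional on humanity surviving, is N = 10^{16}, and an intervention reduces the probability of premature extinction by \Delta p = 10^{-8}, then the expected number of lives saved is

\Delta p \times N = 10^{-8} \times 10^{16} = 10^{8},

which is why, if one takes numbers like these at face value, even tiny reductions in extinction risk look enormously valuable in expectation.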
There are also interventions aimed at improving very long run average future welfare, conditional on the supposition that humanity doesn’t go prematurely extinct, perhaps by improving the content of key, long lasting political institutions.
So these are the kind of things that you can fund if you’re convinced, whether by the arguments that I’ve set out today or otherwise, that longtermism is the way to go. And, in particular, you might choose to donate to Effective Altruism’s Long Term Future Fund, which focuses on precisely these kinds of interventions.
Summary
In summary, then: In part one I talked about effectiveness, cost-effectiveness, and the importance of evidence. The point here was that altruism has to be effective. Most well-intentioned things don’t work. Even among the things that do work, some work hundreds of times better than others. And we have to pay attention to evidence if we want to know which are which.
In part two, though, I talked about the limits of this: the limits of evidence, where evidence gives out, and what it can’t tell us about. Here I worried about the fact that evidence, kind of necessarily, only tracks relatively near-term effects. We can only gather evidence on relatively short timescales. And I’ve argued, or at least suggested, that plausibly the bulk of even the expected value of our interventions comes from their effects on the very far future: that is, from the things that are not measured in even the more complicated, plausible cost-effectiveness analyses.
Then in section three I talked about five possible responses to this fact. I said I think making the cost-effectiveness analyses somewhat more sophisticated only relocates the problem. That left four other responses: Give up effective altruism; do the uber-analysis; adopt a parochial form of morality where you only care about the near-term, predictable effects; or shift away from things like bed net distribution in favour of interventions that are explicitly aimed at improving, as much as we possibly can, the expected course of the very long run future.
I said that I myself am probably most sympathetic to that last response—the longtermist one—but I think there are very hard questions here. So actually, in my own case, the take home message for this is: we need to do a lot more thinking and research about this. And this motivates the enterprise that we call global priorities research, bringing to bear the tools of various academic disciplines—in particular at the moment, in the case of my own institute, economics and philosophy—to think carefully through issues like this and try to get to a point where we do feel less clueless.
My own skepticism of longtermism stems from a few main considerations:
I often can’t tell longtermist interventions apart from Play Pumps or Scared Straight (an intervention that actually backfired). At least for these two interventions, we measured outcomes of interest and found that they didn’t work or were actively harmful. By the nature of many proposed longtermist interventions, we often can’t get good enough feedback to know we’re doing more good than harm, or much of anything at all.
Many specific proposed longtermist interventions don’t look robustly good to me, either (i.e. their expected value is either negative or it’s a case of complex cluelessness, and I don’t know the sign). Some of this may be due to my asymmetric population ethics. If you aren’t sure about your population ethics, check out the conclusion in this paper (although you might need to read some more or watch the talk for definitions), which indicates quite a lot of sensitivity to population ethics.
I’m not convinced that we can ever identify robustly positive longtermist interventions, essentially due to 1, or that what I could do would actually support robustly positive longtermist interventions according to my views (or views I’d endorse upon reflection). GPI’s research is insightful, impressive and has been useful to me, but I don’t know that supporting it further is robustly positive, since I am not the only one who can benefit from it, and others may use it to pursue interventions that aren’t robustly positive to me.
Tentatively, I’m hopeful we can hedge with a portfolio of interventions, shorttermist or longtermist or both. If you’re worried about population effects of AMF, you could pair it with a family planning charity. If you’re worried about economic effects, too, I don’t know what to do for that. I don’t know that it’s always possible to come up with a portfolio that manages side effects and all these different considerations well enough that you should be confident it’s robustly positive. I wrote a post about this here.
A portfolio containing animal advocacy, s-risk work and research on and advocacy for suffering-focused views seems like it would be my best bet.
Also, I think it’s plausible that extinction is good for symmetric views like classical utilitarianism, too. S-risks could end up dominating.
See the comments on this article. Also this.
I feel like you’re probably too sceptical about the possibility of us ever knowing if longtermist interventions are positive. You say we can’t get feedback on longtermist interventions, and that is certainly true, but presumably later generations will be able to evaluate our current long-termist efforts and determine if they were good or not. Or do you doubt this as well?
On a slightly similar note I know that Will MacAskill has argued that we should prevent human extinction on the basis of option value, and that this holds even if we think we would rather humanity go extinct. Granted this argument does depend on global priorities research making progress on key questions. Do you have any thoughts on this argument?
I’ve sometimes wondered about this, but I’m not sure how it gets past the objection to Response 1. In 1000 years’ time, people will (at best!) be able to measure what the 1000-year effects were of our actions today. But aren’t we still completely clueless as to what the long-term effects of those actions are?
Not sure, maybe. The way I think about it is that historians in a few thousand years could study, say, an institution we create now and try to judge if it reduced the probability of some lock-in event, e.g. a great power conflict. If they judge it did, then the institution was a pretty good intervention. Of course they will never be able to know for sure if the institution avoided such a conflict, but I don’t think they would have to; they would just have to determine if the institution had a non-negligible effect on the probability of such a conflict. It doesn’t seem impossible to me that they might have something to say about that.
Of course there are some long-term effects we would remain clueless about, e.g. “did creating the institution delay the conception of a person which led to an evil person being conceived etc. etc.”, but this is the sort of cluelessness that Greaves (2016) argues we can ignore, as these effects are ‘symmetric across acts’, i.e. they were just as likely to happen if we hadn’t created the institution.
Ya, I’m skeptical of this, too. I’m skeptical that we can collect reliable evidence on the necessary scale and analyze it in a rigorous enough way to conclude much. Experimental and quasi-experimental studies on a huge scale (we’re talking astronomical stakes for longtermism, right?) don’t seem possible, but maybe? Something like this might be promising, but it might not help us weigh important considerations against each other.
I think it’s plausible, but at what point can we say it’s outweighed by other considerations? Why isn’t it now? I’d say it’s a case of complex cluelessness for me.
I haven’t actually read the whole essay by Will but I think the gist is we should avert extinction if:
We are unsure about whether extinction is good or bad / how good or how bad it is
We expect to be able to make good progress on this question (or at least that there’s a non-negligible probability that we can)
Given the current state of population ethics I think the first statement is probably true. Credible people have varying views (totalism, person-affecting, suffering-focused etc.) that say different things about the value of human extinction.
Statement 2 is slightly more tricky, but I’m inclined to say that there is a non-negligible chance of us making good progress. In the grand scheme of things, population ethics is a very, very new discipline (I think it basically started with Parfit’s Reasons and Persons?) and we’re still figuring some of the basics out.
So maybe if in a few hundred years we’re still as uncertain about population ethics as we are now, the argument for avoiding human extinction based on option value would disappear. As it stands however I think the argument is fairly compelling.
So my counterargument is just that extinction is plausibly good in expectation on my views, so reducing extinction risk is not necessarily positive in expectation. Therefore it is not robustly positive, and I’d prefer something that is. I actually think world destruction would very likely be good, with concerns about aliens as the only reason to avoid it, which seems extremely speculative; although I suppose this might also be a case of complex cluelessness, since the stakes are high with aliens, but dealing with aliens could also go badly.
I’m a moral antirealist, and I expect I would never endorse a non-asymmetric population ethics. The procreation asymmetry (at least implying good lives can never justify even a single bad life) is among my strongest intuitions, and I’d sooner give up pretty much all others to keep it and remain consistent. Negative utilitarianism specifically is my “fallback” view if I can’t include other moral intuitions I have in a consistent way (and I’m pretty close to NU now, anyway).
Hi Michael,
In What We Owe the Future, Will suggests there are robustly good interventions in the areas of climate change (e.g. clean-tech innovation), biosecurity and pandemic preparedness (e.g. developing extremely reliable personal protective equipment), and general disaster preparedness (e.g. increasing food stockpiles). I agree none of these are completely robustly good, but think they are sufficiently so for one to consider them as positive.
Assuming total hedonic utilitarianism, what do you think are the major arguments against longtermism besides ones related to cluelessness?
I think the main other argument besides cluelessness is that the probability of your individual impact is too low and longtermism is fanatical.
Also, expected utility maximization with an unbounded utility function may be formally irrational, so the typical argument for longtermism could just be unsound. See links in this footnote (footnote 10): https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we#fnwr2zqbfe17
Also this comment and the surrounding thread: https://www.lesswrong.com/posts/gJxHRxnuFudzBFPuu/better-impossibility-result-for-unbounded-utilities?commentId=rE9eCFKrDm5xqpfTx
Thanks for sharing. In that case, if I understand correctly, the case for longtermism is pretty strong assuming:
Expectational total hedonistic utilitarianism (and therefore biting the bullet of the St. Petersburg paradox).
There are robustly positive longtermist interventions.
I assign credences of around 1 and 0.9 to these points, so I guess I have little choice but to accept longtermism.
Why maximize expected value of an unbounded utility function if it’s irrational? What other reasons do you have to do it over alternatives? Biting the bullet of St Petersburg doesn’t just mean accepting the lottery; it also means, in principle, paying to avoid learning information, and choosing options that are strictly dominated by others, so predictably losing. Or you have to think ahead and make commitments you’ll predictably later want to break. Maybe such cases won’t come up in practice, though.
Also, if you’re biting the bullet on expectational total hedonistic utilitarianism, infinities will dominate everything, and you should ignore anything that doesn’t have infinite EV. See also: https://forum.effectivealtruism.org/posts/qcqTJEfhsCDAxXzNf/what-reason-is-there-not-to-accept-pascal-s-wager?commentId=Ydbz56hhEwxg9aPh8
I think the problem is a bit worse than this?
If your decision procedure is “maximize the EV of an unbounded utility function,” you basically cannot make any decisions. After all, for any action you could take, there is an extremely low but still nonzero chance that the action is infinitely good, and a similarly low-but-nonzero chance that it is infinitely bad. Infinity minus infinity is undefined. So all actions have an undefined expected value.
I agree that all actions would have undefined EV (and a chance of positive infinity and a chance of negative infinity) under the standard extended real numbers. However, increasing the probability of positive infinity and decreasing the probability of negative infinity would extend expectationalism in that case, following from extended rationality axioms (without continuity) and still make sense.
You could also consider different ways of doing arithmetic with infinities to avoid things usually being undefined.
I agree the possibility of infinities does not imply actions will have undefined expected values. My comment here illustrates this.
I see now my reply just above misinterpreted what you said, sorry. If I understand correctly, you were referring to what you mentioned here:
The 1st point is not a problem for me. For the reasons described in Ellis 2018, I do not think there are infinities.
As for the 2nd point, the definition of unbounded utilities Paul Christiano uses here and here involves “an infinite sequence of outcomes”. This point is also not a worry for me, as I do not think there are infinite sequences in the real world.
Similarly, I think zeros only exist in the sense of representing arbitrarily small, but non-null values.
Do you just mean that you shouldn’t use 0 as a probability (maybe only for an event in a countable probability space)? I agree with that, which is called Cromwell’s rule.
(Or, are you saying zero can never accurately describe anything? Like the number of apples in my hand, or the number of dollars you have in a Swiss bank account? Or, based on your own claim, the number of infinite sequences that exist? The probability that “the number of things that exist and match definition X is 0” is in fact 0, for any X?)
I argue for infinite sequences in my other reply.
I would say 0 can be used to describe abstract concepts, but I do not think it can be observed in the real world. All measurements have a finite sensitivity, so measuring zero only means the variable of interest is smaller than the sensitivity of the measurement. For example, if a thermometer with a sensitivity of 0.5 K and a range from 0 K to 300 K indicates 0 K, we can only say the temperature is lower than 0.5 K (we cannot say it is 0).
I agree 0 should not be used for real probabilities. Abstractly, we can use 0 to describe something impossible. For example, if X is a uniform distribution ranging from 0 to 1, the probability of X being between −2 and −1 is 0.
If I say I have 0 apples in my hands, I just mean 0 is the integer which most accurately describes the vague concept of the number of apples in my hands. It is not intended to be exactly 0. For example, I may have forgotten to account for my 2 bites, which would imply I only have 0.9 apples in my hands. Or I may only consider that I have 0.5 apples in my hands because I am only holding the apple with one hand (i.e. 50 % of my 2 hands). Or maybe having refers to who bought the apples, and I only contributed 50 % of the cost of the apple. In general, it looks like human language does not translate perfectly to exact numbers.
Thanks for challenging my assumptions!
Why would that be irrational? Intuitively, if one thinks maximising expected value is fine for non-tiny probabilities and non-astronomical values, the reasoning should extend to tiny probabilities and astronomical values.
When I say I have a credence of 1 on expectational total hedonistic utilitarianism (ETHU), I mean I can assume it to be exactly 1 in practice, and therefore consider true everything which follows from it without considering other reasons. I worry this sounds dismissive and overconfident. To be clear, my credences are rarely this close to 1, and I am very uncertain about what actions one should do in the real world. I just think the uncertainty is empirical (including uncertainty about the real-world heuristics which correlate with maximising expected total hedonic utility). Since most people have lower credences than me on ETHU, I guess I am understanding it in a more general way than the one described in the literature.
To clarify, by “biting the bullet of the St. Petersburg paradox”, I meant I am willing to maximise expected value under all and any conditions. I do not know what this implies in terms of accepting or rejecting the St. Petersburg Paradox:
If it involves money instead of utility, the expected value is finite (assuming utility increases with the logarithm of money; see the short sum after this list), and one should not keep gambling forever.
In practice, there are physical limits to how much money/utility one can get (the universe has finite resources), so it only applies in its original form to thought experiments.
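A short worked sum, under the standard assumptions of the thought experiment (a payout of 2^k units of money with probability 2^{-k}, and logarithmic utility of money), shows why the expected utility is then finite:

\sum_{k=1}^{\infty} 2^{-k} \ln(2^k) = \ln 2 \sum_{k=1}^{\infty} k \, 2^{-k} = 2 \ln 2 \approx 1.39.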
I think getting infinite expected value would violate our current understanding of physics. Even if our current understanding is wrong, and it is possible to produce infinite value, actions producing infinite expected value may not be available. For example, when we assume the utility of an action can be modelled as a normal distribution, we are allowing for the possibility of negative and positive infinite utility. However, the expected value of the action is still finite (and equal to the mean of the distribution).
Moreover, if we had actions with infinite expected utility, we may still be able to decide which one is better as long as resources are finite. To illustrate, we can imagine 2 actions A and B with the following expected utilities:
E_A = (E_max - E)^-1.
E_B = (E_max - E)^-2.
E and E_max are the energy used and available to perform the actions. As E tends to E_max:
E_A → +inf.
E_B → +inf.
E_A/E_B = E_max - E → 0.
So, although the expected utility of both actions tends to infinity, we can still say B would be better than A.
In general, I do not understand why infinities are said to be problematic. Intuitively, I would expect indeterminations of the type inf/inf or inf - inf to be resolvable by analysing the generating functions. I may well be missing something.
As I said, I do not think the possibility of infinite value implies there are actions with infinite expected value, and, even if these exist, there would still be ones which are better than others.
The first sentence here is not true. The formula below is the PDF of a normal distribution:
f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right)
The limit of f(x) as x approaches either ∞ or −∞ is zero.
Moreover, if the first sentence I quoted from your comment were true, there would be no way for the second sentence to be true. This is the definition of expected utility:
\sum_{\text{outcomes}} U(\text{outcome}) \cdot P(\text{outcome})
Where U(outcome) is the utility of an outcome and P(outcome) is its probability.
If you have an unbounded utility function, and you have any probability greater than zero (say, 10^{-10^{10^{10}}}) that the outcome of your action has infinitely positive utility, and a similarly nonzero probability (say, 10^{-10^{10^{10^{10}}}}) that it has infinitely negative utility, then the formula for expected utility simplifies to
\infty \cdot 10^{-10^{10^{10}}} - \infty \cdot 10^{-10^{10^{10^{10}}}} = \infty - \infty
which is undefined.
Hi Fermi,
By “possibility of negative and positive infinite utility”, I meant there is a non-null probability of a negative or positive utility with arbitrarily large magnitude. I think infinite is often used as meaning arbitrarily large, but I see now that Michael was not using it that way. Sorry for my confusion, and thanks for clarifying!
I agree. In the 1st sentence, “infinite” was supposed to mean “arbitrarily large” (in which case the 2nd sentence would be true).
I shared some links upthread to arguments that expected utility maximization with an unbounded utility function is irrational. It can make you choose infinitely many options that are definitely worse together, or, without even dealing with infinitely many choices, make you averse to information and make you choose finite sequences of options that are stochastically dominated. All of this seems decision-theoretically irrational, and preventing such behaviour is the basis of some of the main and strongest arguments for expected utility maximization, but with a bounded utility function.
I don’t think you should assume current physics is correct with 100% probability (e.g. we could always be wrong, and we’ve been wrong before), and even if it is, there are ways to get infinities or unbounded expected values, e.g. evidential decision theory and correlations with other agents in a spatially infinite universe, possibly quantum tunneling (or so I’ve heard).
On your specific approach for infinities, note that, in principle, the limits of ratios can be undefined even if the ratios are bounded (and even if they never approach 0). So you need to handle such cases. I think there are definitely some infinite cases you can extend to, but you typically need to pick an order according to which to sum things, which seems especially arbitrary and hard to do if you're handling cases of creation of new universes, especially infinite universes. The results can be sensitive to which basically arbitrary order you choose. Other decision theories and utility functions also need to deal with cases that involve physical infinities, although they can sometimes (and maybe usually) be ignored, while only infinities matter in practice on the natural extensions of the view you're defending.
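To illustrate the point about ratios with a made-up example (not one from the thread): if $E_A/E_B = 2 + \sin\!\left(\frac{1}{E_{max}-E}\right)$, the ratio always stays between 1 and 3, so it is bounded and never approaches 0, yet it oscillates forever and has no limit as $E \to E_{max}$, so the limit-of-the-ratio rule gives no verdict.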
Thanks for taking the time to clarify, Michael!
I said:
I was missing that by "infinity" you literally meant infinity, whereas I interpreted it as arbitrarily large, but finite. I have now checked the links in more detail, and see how infinities in the sense of ∞ can lead to problems. I will have to think more about this...
Ah ok, I was talking about both arbitrarily large but finite (unbounded) values and infinities as two separate issues, but both are related to fanaticism. Unbounded utilities (especially in cases with infinite or undefined expected values) seem irrational, while actual infinite utilities are more just technical problems that are hard to solve non-arbitrarily. The links I shared are mostly about unbounded utilities, but this one discusses infinities: https://forum.effectivealtruism.org/posts/qcqTJEfhsCDAxXzNf/what-reason-is-there-not-to-accept-pascal-s-wager?commentId=Ydbz56hhEwxg9aPh8
The definition of unbounded utilities Paul Christiano uses here and here involves “an infinite sequence of outcomes”. I do not think infinite sequences exist in the real world, so I also think unbounded utilities are irrational.
I don’t think this is a valid inference, since there are other ways to define unbounded utilities, e.g. directly with an unbounded real-valued utility function, and the definitions don’t require infinite sequences to actually exist in the real world. However, I suspect all ways of showing unbounded utilities are irrational require infinite sequences, e.g. even St. Petersburg’s lottery is defined with an infinite sequence.
Also, I don’t think you should assign probability 1 to unbounded sequences not existing. In fact, I think some infinite sequences are more likely than not to actually exist, because the universe is probably unbounded in spatial extent, and there are infinitely many agents and moral patients in the universe in infinitely many different locations (although perhaps they’re all “copies” of finitely many different individuals). And for any proposed time bound for our future, there’s also nonzero chance that there will be moral patients past it.
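For reference, the St. Petersburg lottery mentioned above shows how an infinite sequence of finite outcomes already creates trouble: with probability $2^{-n}$ the payoff is $2^{n}$ (for $n = 1, 2, 3, \dots$), so the expected value is $\sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty$, even though every individual payoff is finite.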
I got that impression too.
According to this article from Toby Ord (see Figure 15), "under the most widely accepted cosmological model (ΛCDM)":
“The part of the universe we can causally affect” (affectable universe) has a radius of 16.5 Gly.
“The part of the universe which can ever have any kind of causal connectedness to our location” has a radius of 125.8 Gly.
There are (abstract) models under which the universe is infinite (see section “What if ΛCDM is wrong?”):
“A useful way of categorising the possibilities concerns the value of an unknown parameter, w. This is the parameter in the ‘equation of state’ for a perfect fluid, and is equal to its pressure divided by its energy density”.
“Relativistic matter has w = 1⁄3. ΛCDM models dark energy as a cosmological constant, which corresponds to w = –1”.
“Our current best estimates of w are consistent with ΛCDM: putting it to within about 10% of –1, but the other models cannot yet be excluded”.
“If dark energy is better modelled by a value of w between –1 and –1/3, then expansion won’t become exponential, but will still continue to accelerate, leading to roughly similar results — in particular that only a finite number of galaxies are ever affectable”.
“If w were below –1, then the scale factor would grow faster than an exponential. (...) Furthermore, the scale factor would reach infinity in a finite time, meaning that by a particular year the proper distance between any pair of particles would become infinite. Presumably this moment would mark the end of time. This scenario is known as the ‘Big Rip’”.
“If w were between –1/3 and 0, then the scale factor would merely grow sub-linearly, making it easier to travel between distant galaxies and removing the finite limit on the number of reachable galaxies”.
Based on the 3rd point, one may naively say w follows a uniform distribution between −1.1 and −0.9. Consequently, there is a 50% chance of w being (see the arithmetic spelled out after this list):
Lower than −1, leading to a Big Rip. I think this only means the size of the universe tends to infinity, not that it actually reaches infinity, as I do not expect physical laws to generalise all the way to infinity (which would also be impossible to test, as infinities are indistinguishable from very large numbers from an experimental point of view, given the limited range of all measurements).
Between −1 and −1/3, being compatible with ΛCDM. This would mean the affectable universe is finite.
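Spelling out the arithmetic behind that naive 50/50 split: under a uniform prior on $[-1.1, -0.9]$, $P(w < -1) = \frac{-1-(-1.1)}{-0.9-(-1.1)} = \frac{0.1}{0.2} = 50\%$, and the remaining 50% of the mass lies in $(-1, -0.9)$, which falls within the second case.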
Ya, I think the part of the universe we can causally affect is very likely bounded/finite, but that could be wrong, e.g. the models could be wrong. Furthermore, the whole universe (including the parts we very probably can’t causally affect) seems fairly likely to be infinite/unbounded, and we can possibly affect parts of the universe acausally, e.g. evidential cooperation or via correlated agents out there, and I actually think this is quite likely (maybe more likely than not). There are also different normative ways of interpreting the many worlds interpretation of QM that could give you infinities.
Someone who bites the bullet on risk-neutral EV maximizing total utilitarianism should wager in favour of acts with infinite impacts, no matter how unlikely, e.g. even if it requires our understanding of physics to be wrong.
The models are certainly wrong to some extent, but that does not mean we should assign a non-null probability to the universe being infinite. I think we can conceive of many impossibilities. For example, I can imagine 1 = 0 being true, or both A > B and A < B being true, but these relations are still false.
It is also impossible to show that 1 = 0 is false. Likewise, it is impossible to show the universe is infinite, because infinities are not measurable (because all measurements have a finite range). So there is a sense in which the universe being finite is similar to the axioms of math.
To clarify, I think the universe is finite, but unbounded, i.e. that it has a finite size, but no edges/boundaries.
How much of this is still relevant if one puts null weight on evidential decision theory (EDT)?
Unless causal expectational total hedonistic utilitarianism in a finite affectable universe is true, which I think is the case.
I don’t think you can (non-dogmatically) justify assigning 0 probability to any of these claims, which you need to do to justifiably prevent possible infinities from dominating. That seems way too overconfident. An infinite universe (temporally or spatially) is not a logical impossibility. Nor is acausal influence.
Some considerations:
The analogy with math isn’t enough, and the argument also cuts both ways: you can never prove with certainty that the universe is finite, either. And you should just be skeptical that a loose analogy with math could justify 100% confidence in the claim that the universe is finite, if that’s what you intended.
You may be able to gather indirect evidence (although not decisive proof) for the universe being infinite, like we do for other phenomena, like black holes, dark matter and dark energy. For example, the flatter the universe seems to be globally, I think the more likely it is to be infinite (although even a flat universe could be finite).
Multiple smart people knowledgeable on this topic have thought much more about the issues than you (or me) and have concluded in favour of infinities. Giving their views any weight means assigning nonzero probability to such infinities. Not giving their views any weight would seem arrogant. (Of course, we should also give “only finite impacts” positive weight, but that gets dominated by the infinite possibilities under your risk neutral expected value maximizing total utilitarianism.) See also https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty
If you could provide a persuasive argument against these infinities that non-dogmatically allows us to dismiss them with 100% certainty, that would be a huge achievement. Since no one seems to have done this so far (or everyone who disagrees after hearing the argument failed to understand it or was so biased they couldn’t agree, which seems unlikely, or the argument hasn’t been read by others), it’s probably very hard to do, so you should be skeptical of any argument claiming to do so, including any you make yourself.
I would say infinity is a logical impossibility. During this thread, I was mostly arguing from intuition. Now that I think about it, my intuition was probably being informed by this episode of the Clearer Thinking Podcast with Joscha Bach, who is also sceptical of infinities.
Meanwhile, I have just found The Case Against Infinity from Kip Sewell. I have read the Introduction, and it really seems to be arguing for something similar to my (quite uninformed) view. Here are the 1st and last paragraphs:
Not sure whether I will understand it, but I will certainly have a go at reading the rest!
This seems to be arguing against standard mathematics. Even if you thought mathematical (not just physical) infinity was probably a logical impossibility, assigning 100% to its impossibility means dismissing the views of the vast majority of mathematicians, which seems epistemically arrogant.
If the author found a formal contradiction in the standard axioms of set theory (due to the axiom of infinity) or another standard use of infinity, that would falsify the foundations of mathematics, they would become famous, and mathematicians would be freaking out. It would be like solving P vs NP. Instead, the paper is 14 years old, not published in any academic journal, and almost no one is talking about it. So, the author very probably hasn’t found anything as strong as a formal contradiction. The notion of ‘absurdity’ they’re using could be informal (possibly like the way we use ‘paradox’, but many paradoxes have resolutions and aren’t genuine contradictions) and could just reflect their own subjective intuitions and possibly biases. Or, they’ve made a deductive error. Or, most charitably, they’ve introduced their own (probably controversial) premises, but to arrive at 100% confidence in the impossibility of infinity, they would need 100% confidence in some of their own premises. I’m not sure the author themself would even go that far, since that would be epistemically arrogant.
EDIT: I may have been uncareful switching between arguments. The main claim I want to defend is that infinities and infinite impacts can’t justifiably be assigned 0% probability. I do think some infinities are pretty likely and that infinity is very probably logically possible/coherent, but those are stronger claims than I need to justify not assigning 0% probability to infinite impact. Pointing out arguments for those positions supports the claim that 0% probability to infinite impacts is too strong, even if those arguments turn out to be wrong.
EDIT2: Maybe I’ve misunderstood and they don’t mean infinity is logically impossible even in mathematics, just only physically. Still, I think they’re probably wrong, and that’s not the main point here anyway: whatever argument they give wouldn’t justify assigning 0 probability to infinities and infinite impacts.
(I don’t think I will engage further with this thread.)
Yes and no:
Kip argues:
However:
I think the crux of the disagreement is described here (emphasis added by me):
In other words:
Regarding:
Kip rejects the existence of infinities in both physics and math. The real world does not allow for contradiction, so infinities have to be rejected in physics. In math, infinity can exist, but Kip argues that it is better to revise it to the extent math is supposed to describe the real world (see quotations above).
Bach makes a basic error or assumption that’s widely rejected in math:
That there is any set of all sets. The notion is contradictory for more basic reasons like Russell’s paradox, so we use the “class of all sets” and define/construct sets so that there is no set of all sets. Proper classes are treated pretty differently from sets in many cases. Classes are collections of sets only. People don’t use the class of all sets to represent anything in the physical world, either, and I’d say that it probably can’t be used to represent anything physical, but that’s not a problem for infinities in general. There’s no class of all classes under standard set theory, since that would need to contain proper classes.
Even if we used the class of all sets to try to fix the argument, the power set operation has no natural extension to it in standard set theory. It would have to be the class of all subclasses of the class of all sets, which doesn’t exist under standard set theory because it would contain proper classes, but even if it did exist, that object would be different from the class of all sets, so there need not be any contradiction with them having different sizes. (I’d guess the class of all subclasses of the class of all sets would be strictly bigger by the same argument that the power set of a set is bigger than the set, under some set theory where that’s defined naturally and extends standard set theory.)
See this page for definitions and some discussion: https://en.wikipedia.org/wiki/Class_(set_theory)
Sewell assumes, without (good) argument, that subtraction with infinite cardinals should be well-defined like it is for finite numbers, but this is widely rejected. Also, there are ways to represent infinities so that the specific operations discussed are well-defined, e.g. representing the objects as sets and using set operations (unions, differences, partitioning) instead of arithmetic operations on numbers (addition, subtraction, division). N − N = ∅ (a set of cardinality 0) this way, and N − N has no other value, where "−" means set difference and N is the set of natural numbers. Subtracting the even numbers (or odd numbers) from the natural numbers would be represented differently on the left-hand side, so that giving a different result isn't a problem. EDIT: I think he quotes some similar arguments, but doesn't really respond to them (or probably doesn't respond well).
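A minimal sketch of that point, representing (possibly infinite) sets by membership predicates; the representation and names here are mine, not anything from the comment:

```python
# Represent a possibly-infinite set by its membership predicate. The set
# difference operation is then perfectly well-defined, even though
# "cardinality subtraction" (inf - inf) is not.
def naturals(n):
    return isinstance(n, int) and n >= 0

def evens(n):
    return naturals(n) and n % 2 == 0

def difference(a, b):
    """Membership predicate for the set difference a minus b."""
    return lambda n: a(n) and not b(n)

n_minus_n = difference(naturals, naturals)   # N \ N, the empty set
n_minus_evens = difference(naturals, evens)  # N \ Evens, the odd naturals

print([k for k in range(10) if n_minus_n(k)])      # []
print([k for k in range(10) if n_minus_evens(k)])  # [1, 3, 5, 7, 9]
```

The two differences give different answers only because they are different set expressions (N \ N versus N \ Evens), so no single value of "∞ − ∞" is ever needed.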
They seem to be arguing against strawmen. They don’t seem to understand the basics of standard axiomatic set theory well enough, and they wouldn’t make such bad arguments if they did. I would recommend you study axiomatic set theory if you’re still tempted to dismiss the logical possibility of infinity, or just accept that it’s likely to be logically possible by deferring to those who understand axiomatic set theory, because probably almost all of them accept its logical possibility.
(Again, I don’t intend to engage further, but I guess I’m bad at keeping that kind of promise.)
Long story short, Sewell:
Understands the notion of infinity does not lead to contradictions in math. As you noted, infinity is one of the axioms of ZFC set theory, which is widely followed in math. So no wonder infinity is true (by definition) for most mathematicians!
Argues that math should be about the real world, so we should not be defining ad hoc rules which have no parallel in physical reality.
As an analogy (adapted from one used by William Craig), we can suppose I have 2 bags, each with infinitely many marbles. One contains marbles numbered with the even numbers, and the other marbles numbered with the odd numbers, so they have the same infinity of marbles. If I:
Give both bags to you, I will keep no bags, and therefore will have zero marbles. So ∞ − ∞ = 0.
Give 1 bag to you, I will keep 1 bag, and therefore will have infinite marbles. So ∞ − ∞ = ∞.
This leads to 0 = ∞, which is contradictory.
I appreciate one can say I have cheated by:
Using the same type of subtraction in both situations (indicated by “-”), whereas I should have used different symbols to describe the different types of subtractions.
Assuming I could perform the operation ∞ − ∞, which is an indeterminate form.
However, as far as I can tell, reality only allows for one type of subtraction. If I have 3 apples in my hands (or x $ in a Swiss bank account ;)), and give you 2 apples, I will keep 1 apple. This is the motivation for 3 − 2 = 1.
In Sewell’s words:
“In classical mathematics the operation of subtraction on natural numbers yields definite answers, and so instances of subtraction can be grounded in real world examples of removal. The act of “removing” a subset of objects from a set of objects is just an instance of applying mathematical subtraction or division to physical collections in the real world”.
“There is nothing in transfinite mathematics implying that mathematical operations on infinite sets cannot be applied to logically possible infinite collections in the real world. So, if we are able to consistently subtract or divide infinite sets in transfinite mathematics, we should then without contradiction be able to carry out the removal of infinite subsets from infinite sets of real objects as well. Subtracting and dividing infinite sets should show what would happen in the real world if we could go about “removing” infinite subsets from infinite sets of physical objects. On the other hand, if we would get mathematical nonsense by performing inverse operations in transfinite mathematics, then we would also get logical nonsense when trying to “remove” an infinite subset of real objects from an infinite set of them. Such a removal would then not be able to be performed in the real world, which does not permit logically contradictory states of affairs to occur. The application of inverse operations in transfinite mathematics to real world instances of removing infinite subsets then, is actually a test of the logical validity of infinite sets. If the math breaks down as we’ve seen, so does the logic of infinite sets in the real world”.
My reply here has some further context.
Sure, I trust your decisions regarding your time. Thanks for the discussion!
“On the other hand, if we would get mathematical nonsense by performing inverse operations in transfinite mathematics, then we would also get logical nonsense when trying to “remove” an infinite subset of real objects from an infinite set of them.”
This doesn’t follow and is false. The set difference operation is well-defined, so the result is not logical nonsense. The corresponding set cardinalities after a specific set difference will also be well-defined, since the cardinality function is also well-defined.
Plenty of apparently real things aren’t well-defined unless you specify them in enough detail, but that doesn’t make them nonsense. For example, the weight of a bag after removing an object whose weight is unknown. Or, the center of mass of two objects, knowing only their respective centers of mass (and distance between them).
There’s also no logical necessity for subtraction with infinite numbers to be well-defined, and it seems conceivable without logical contradiction that it’s not, even in the actual universe (e.g. if we model an infinite universe or the continuum using ZF(C) set theory for the infinities). It’s of course possible our universe has no infinities and arithmetic is always well-defined when representing any real objects in it, but there’s no decisive proof for either, and hence no decisive proof for the impossibility of infinity. It doesn’t follow by necessity from the finite case.
In general, nothing can be proved to be logically true or false without assuming some claims are true. For instance, in order to show that a given mathematical hypothesis is true or false, one has to define some axioms. As an example, transitivity (if A is better than B, and B is better than C, then A is better than C) is usually assumed to be one of the axioms of rationality. Transitivity cannot be proved (without defining any axioms); it is true by definition, and I have no way to convince someone who argues that transitivity is false.
If the concept of infinity could be true, the whole would not always be the sum of its parts (e.g. ∞/2 = ∞). However, the whole always being the sum of its parts is axiomatically true to me, so I consider the concept of infinity to be false. Similarly to transitivity, I have no way to prove my axiom that the whole is always the sum of its parts.
For what it's worth, I see expectational total hedonistic utilitarianism (ETHU) as the axiom of ethics/morality. On the one hand, it is impossible for anyone to prove it is true. For example, although I think the more likely a certain positive outcome is, the better, I have no way to prove one should maximise expected value. On the other hand, ETHU being true feels the same way to me as transitivity being true.
To clarify the contradiction I mentioned above, if n denotes the cardinality operator, O the set of odd numbers, E the set of even numbers, ∅ the empty set, n(∅) = 0, and n(O) = n(E) = ∞:
If I give both bags to you, I will keep no bags, and therefore will have zero marbles:
A1: n((O ∪ E) \ (O ∪ E)) = n(O ∪ E) − n((O ∪ E) ∩ (O ∪ E)) = n(O ∪ E) − n(O ∪ E) = ∞ − ∞.
B1: n((O ∪ E) \ (O ∪ E)) = n(∅) = 0.
C1: A1 ∧ B1 ⇒ ∞ − ∞ = 0.
If I give 1 bag to you, I will keep 1 bag, and therefore will have infinite marbles:
A2: n((O ∪ E) \ O) = n(O ∪ E) − n((O ∪ E) ∩ O) = n(O ∪ E) − n(O) = ∞ − ∞.
B2: n((O ∪ E) \ O) = n(E) = n(O) = ∞.
C2: A2 ∧ B2 ⇒ ∞ − ∞ = ∞.
So there is a contradiction:
D: C1 ∧ C2 ⇒ 0 = ∞.
Since 0 = ∞ is false, one of the following must be false:
The relationship R: n(X \ Y) = n(X) − n(X ∩ Y), which I used above, holds in the real world.
Infinities exist in the real world.
I guess you would be inclined towards putting non-null weight on each of these points being false. However, R essentially means the whole is the sum of its parts, which I cannot see being false in the real world. So I reject the existence of infinities in the real world.
I have now finished reading The Case Against Infinity, and really liked it! I think this paragraph summarises it well:
On October 25th, 2020, Hilary Greaves gave a talk on ‘Cluelessness in effective altruism’ at the EA Student Summit 2020. I found the talk so valuable that I wanted to transcribe it.
I made the transcript with the help of http://trint.com/, an AI speech-to-text platform which I highly recommend. Thank you to Julia Karbing for help with editing.
Thanks for linking trint.com—I hadn’t heard of it before. Have you tried otter.ai though? I think it could be as good as trint, and Otter is cheaper compared to Trint. They even have a free version that works quite well.
Thanks I’ll check it out!
I basically agree with the claims and conclusions here, but I think about this kind of differently.
I don’t know whether donating to AMF makes the world better or worse. But this doesn’t seem very important, because I don’t think that AMF is a particularly plausible candidate for the best way to improve the long term future anyway—it would be a reasonably surprising coincidence if the top recommended way to improve human lives right now was also the most leveraged way to improve the long term future.
So our attitude should be more like “I don’t know if AMF is good or bad, but it’s probably not nearly as impactful as the best things I’ll be able to find, and I have limited time to evaluate giving opportunities, so I should allocate my time elsewhere”, rather than “I can’t tell if AMF is good or bad, so I’ll think about longtermist giving opportunities instead.”
Do you agree with the decision-making frame I offered here, or are you suggesting doing something different from that?
What’s your distribution for the value of donating to AMF?
What do you mean by allocate your time “elsewhere”?
My guess is that Buck means something like: “spend my time to identify and execute ‘longtermist’ interventions, i.e. ones explicitly designed to be best from the perspective of improving the long-term future—rather than spending the time to figure out whether donating to AMF is net good or net bad”.
This is indeed what I meant, thanks.
How does this differ from response 5 in the post?
(My thanks to the post authors, velutvulpes and juliakarbing, for transcribing and adding this talk to the EA Forum; the comments below refer to the contents of the talk).
I gave this a decade review downvote and wanted to set out why.
Reinventing the wheel
I think this is on the whole a decent talk that sets out an individual's personal journey through EA and working out how they can do the most good.
However I think the talk involves some amount of “reinventing the wheel” (ignoring and attempting to duplicate existing research).
In the talk Hilary raises the problem of cluelessness and discusses five possible solutions to this problem. The problem (at least as it is defined in this talk) appears to relate to having confidence in decisions made under situations of uncertainty, where there are hard/impossible-to-measure factors.
Now the rough topic of how to make decisions under uncertainty (uncertainty about options, probabilities, values, unknown unknowns, etc.) is a topic that military planners, risk managers, academics and others have been researching for decades. And they have a host of solutions: anti-fragility, robust decision-making, assumption-based planning, sequence thinking, adaptive planning. And they have views on when to make such decisions, when to do more research, how to respond, and how confident to be, etc.
Hilary does not reference any of that work or flag it to the reader at any point in her talk. I honestly think any thorough analysis of the options for addressing uncertainty/cluelessness really should be drawing on some of that existing literature.
Does this matter?
Normally this should not be a big deal; EA authors reinvent the wheel all the time (this survey suggests it is EA's No. 1 flaw), so avoiding this is a very high bar to hold an author/speaker to. However I think in this specific instance it appears to have sown confusion and been harmful to EA discussions of this topic. It has been my impression that EA readers are very aware of the practical decision-making challenges of cluelessness but very unaware of the research and solutions.
Ultimately this is a very subjective claim. Some additional supporting evidence might be things like:
People I have talked to who work in longtermist research in multiple EA organisations have expressed similar views and concerns.
There are many anecdotal cases of EAs discussing cluelessness but not the solutions. (Even in the comments below Pablo says "In your follow-up comment, you say that the problem 'has reasonable solutions', though I am personally not aware of any such solution".)
Searches of the site show 327 pages on the EA Forum that mention "cluelessness", compared to 21 for "robust decision making", 37 for "sequence thinking", 82 for "Knightian uncertainty", 166 for "deep uncertainty", etc.
Suggested follow-up
One interesting solution might be that whenever referring to practical decision-making challenges, the term "cluelessness" (which appears to be a niche philosophical term) could be replaced with terms more common in the decision-making literature, such as "deep uncertainty" or "Knightian uncertainty"; for example on the EA wiki or in future posts.
NOTE: This review has been edited to reflect comments below. Will post the initial review below as well for posterity. See here.
The term “cluelessness” has been used in the philosophical literature for decades, to refer to the specific and well-defined problem faced by consequentialism and other moral theories which take future consequences into account. Greaves’s talk is a contribution to that literature. She wasn’t even the first to use the term in EA contexts; I believe Amanda Askell and probably other EAs were discussing cluelessness years before this talk.
Yes you are correct. I am not an expert here but my best guess is the story is something like
“Moral cluelessness” was a philosophical term that has been around for a while.
Hilary borrowed the philosophy term and extended it to discuss "complex cluelessness" (which a quick Google makes me think is a term she invented).
"Complex cluelessness" is essentially identical to "deep uncertainty" and such concepts (at least as far as I can tell from reading her work; I think it was this paper I read).
This and other articles then shorthanded “complex cluelessness” to just “cluelessness”.
I am not sure exactly, happy to be corrected. So maybe not an invented term but maybe a borrowed, slightly changed and then rephrased term. Or something like that. It all gets a bit confusing.
And sorry for picking on this talk if Hilary was just borrowing ideas from others, just saw it on the Decade Review list.
– –
Either way I don’t think this changes the point of my review. It is of course totally fine to invent / reinvent / borrow terminology, (in fact in academic philosophy it is almost a requirement as far as I can tell). And it is of course fine for philosophers to talk like philosophers. I just think sometimes adding new jargon to the EA space can cause more confusion than clarity, and this has been one of those times. I think in this case it would have been much better if EA had got into the habit of using the more common widely used terminology that is more applicable to this topic (this specific topic is not, as far as I can tell, a problem where philosophy has done the bulk of the work to date).
And insofar as the decade review is about reviewing what has been useful 1+ years later I would say this is a nice post that has in actuality turned out unfortunately to be dis-useful / net harmful. Not trying to place blame. Maybe there is just a lesson for all of us on being cautious on introducing terminology.
A few thoughts:
I’m open to the possibility that there are terms better than “cluelessness” to refer to the problem Hilary discusses in her talk. Perhaps we could continue this discussion elsewhere, such as on the ‘talk’ page of the cluelessness Wiki entry (note that the entry is currently just a stub)?
As noted, the term has been used in philosophy for quite some time. So if equivalent or related expressions exist in other disciplines, the question is, “Which of these terms should we settle for?” Whereas you make it seem like using “cluelessness” requires a special justification, relative to the other choices.
Since Hilary didn’t introduce the term, either in philosophy or in EA, it seems inappropriate to evaluate her talk negatively, even granting that it would have been desirable if a term other than “cluelessness” had become established.
Separately, I think Hilary’s talk is a valuable contribution to the problem, so I don’t think it warrants a negative evaluation. (But maybe you disagree and your views about the substance of the talk also influenced your assessment? In your follow-up comment, you say that the problem “has reasonable solutions”, though I am personally not aware of any such solution.)
The EA Forum wiki has talk pages!! Wow you learn something new every day :-)
Yes I think that is ultimately the thing we disagree on. And perhaps it is one of those subjective things that we will always disagree on (e.g. maybe different life experiences means you read some content as new and exciting and I read the same thing as old and repetitive).
If I had to condense why I don't think it is a valuable contribution: it looks to me (given my background) like it is reinventing the wheel.
The rough topic of how to make decisions under uncertainty about the impact of those decisions (uncertainty about what the options are, what the probabilities are, how to decide, what is even valuable, etc.), in the face of unknown unknowns, is a topic that military planners, risk managers, academics and others have been researching for decades. And they have a host of solutions: anti-fragility, robust decision-making, assumption-based planning, sequence thinking, adaptive planning. And they have views on when to make such decisions, when to do more research, how to respond, etc.
I think any thorough analysis of the options for addressing uncertainty/cluelessness really should draw on some of that literature (before dismissing options like "make bolder estimates" / "make the analysis more sophisticated"). Otherwise it would be like trying to reinvent the wheel, suggesting it should be square and then concluding it cannot be done and wheels don't work.
Hope that explains where I am coming from.
(PS. To reiterate, in Hilary's defense, EAs reinvent wheels all the time. No. 1 flaw and all that. I just think this specific case has led to lots of confusion. E.g. people thinking there is no good research into uncertainty management.)
Thanks for the reply. Although this doesn’t resolve our disagreement, it helps to clarify it.
Thank you Pablo. Have edited my review. Hopefully it is fairer and more clear now. Thank you for the helpful feedback!!
Just to build on what Pablo has been saying, the term “cluelessness” goes back to at least 2000 where James Lenman used it specifically as an argument against consequentialism. Hilary in her 2016 paper was responding specifically to Lenman’s critique, so it seems fair that she used the term cluelessness there, and in this particular talk. She was indeed the first person to draw a distinction between “simple” and “complex” cluelessness.
By the way, in that paper Hilary has a footnote saying:
So the term may go all the way back to Smart in 1973 but I can’t be certain as I can’t access the specific text cited.
Regarding other terms such as Knightian Uncertainty. I’m far from sure about all this, but Knightian uncertainty seems something that we can work around and account for within a particular ethical framework (say consequentialism) through various tools—as you imply. However, cluelessness is an argument against ethical frameworks themselves, including consequentialism. In this case these seem very different concepts that rightly are referred to differently. (EDIT: although admittedly cluelessness has become something we are trying to work around within a consequentialist framework so you’re not entirely wrong...).
Hi Jack, lovely to get your input.
Sure, “cluelessness” is a long standing philosophical term that is “an argument against ethical frameworks themselves, including consequentialism”. Very happy to accept that.
But that doesn’t seem to be the case here in this talk. Hilary says “how confident should we be really that the cost-effectiveness analysis we’ve got is any decent guide at all to how we should be spending our money? That’s the worry that I call ‘cluelessness’”. This seems to be a practical decision making problem.
Which is why it looks like to me that a term has been borrowed from philosophy, and used in another context. (And even if it was never the intent to do so it seems to me that people in EA took the term to be used as pointing to the practical decision making challenges of making decisions under uncertainty.)
Borrowing terms happens all the time but unfortunately in this case it appears to have caused some confusion along the way. It would have been simpler to keep the philosophy term in the philosophy box to talk about topics such as the limits of knowledge and so on, and to use one of the terms from decision making (like "deep uncertainty") to talk about practical issues like making decisions about where to donate given the things we don't know, keeping everything nice and simple.
But also it is not really a big deal. Kind of confusing / pet peeve level, but no-one uses the right words all the time, I certainly don’t. (If there is a thing this post does badly it is the reinventing the wheel point, see my response to Pablo above, and the word choice is a part of that broader confusion about how to approach uncertainty).
To be a bit more concrete, I spend my time talking to politicians, policy makers, risk managers, climate scientists, military strategists, activists. I think most of these people would understand "deep uncertainty" and "wicked problem" but less so "cluelessness". I think they would mean the same thing by these terms as this post means by "cluelessness". I think the fact that "cluelessness" became the popular term in EA has made things a bit more challenging for me.
I recognise that expecting people to police their language against the possibility that some term they introduce to their audience is suboptimal is a high bar. Philosophers use philosophy language and that is obviously fine. I just wish "cluelessness" hadn't been the term that seemed to stick in EA and that one of these other words had been used (and also I think that the talk could have benefited from recognising that this is an issue that gets attention and has reasonable solutions outside of philosophy).
My understanding is that "complex cluelessness" is not essentially identical to "deep uncertainty", although "deep uncertainty" could mean a few things and I'm not sure exactly what you have in mind.
My understanding is also that the term is not essentially identical to "uncertainty", "Knightian uncertainty", "wicked problems", "extreme model uncertainty" or "fragile credences".
I do however think that EAs often use the term "cluelessness" incorrectly in a way that makes it more similar to these other terms. I think this is because cluelessness is a confusing topic to wrap one's head around correctly.
Hmmm … I am not sure what it means that EAs use the term "cluelessness" incorrectly. I honestly never hear the term used outside of EA. So I have been assuming the way EAs use it is the only way (and hence correct), so maybe I have been using it incorrectly.
Would love some more clarity if you have time to provide it!
As far as I can tell:
"Complex cluelessness" as defined by Hilary here just seems to be one specific form of (or maybe a specific way of rephrasing) deep uncertainty, so a subcategory of "Knightian uncertainty" as defined by Wikipedia or "deep uncertainty" as defined here.
"Cluelessness" as it is most commonly used by EAs seems to be the same as "Knightian uncertainty" as defined by Wikipedia or "deep uncertainty" as defined here.
Is that correct?
My initial review was as follows:
(My thanks to the post authors, velutvulpes and juliakarbing, for transcribing and adding a talk to the EA Forum, comments below refer to the contents of the talk).
I gave this a decade review downvote and wanted to set out why.
I think this is on the whole a decent talk that sets out an individual's personal journey through EA and working out how they can do the most good.
It does however do a little bit of reinventing of the wheel. Now EAs across the board can, I think fairly, be criticised for reinventing the wheel. In fact this survey of 40 EA leaders found that “The biggest concern and a trend that came up again and again was that EAs tend to reinvent the wheel a lot”.
In this talk (and other work) the author defines and introduces the idea of "cluelessness". This serves a purpose but it is done without any mention of the myriad of existing terminologies that essentially mean the same thing, such as "uncertainty", "deep uncertainty", "Knightian uncertainty", "wicked problems", "extreme model uncertainty", "fragile credences", etc. The author then suggests 5 responses to cluelessness without mentioning the decades of research that have gone into the above topics and the existing ways humans deal with these issues.
Ultimately this should not be a big deal. We all invent terminology from time to time, or borrow from domains we are familiar with to explain what is on our mind. It is not a big sin and can normally be shrugged off.
Unfortunately this author has had the bad luck that her new terminology stuck. And it stuck pretty hard. There is a "cluelessness" tag on the EA wiki and over 450 pages on the EA Forum that mention "cluelessness". Reflecting back, and talking to other EAs a year later, I think this [edit: invented] term may have been harmful for EA discourse. I expect it has led to people being unaware of the troves of academic (and other) work done to date on managing high levels of uncertainty and managing risks, to confusion, and to ongoing wheel reinventing.
Suggested follow-up (if any) might be things like replacing the "cluelessness" wiki page with another term and encouraging people to stop using the term as much as possible.
On one hand, basically all the smart EA people I trust seem to be into longtermism; it seems well-argued and I feel a vague obligation to join in too.
On the other, the argument for near-term evidence-based interventions like AMF is what got me (and apparently, the speaker) into EA in the first place. It’s definitely a much easier pitch to friends and family, compared to this really weird meta cause whose impact at the end of the day I still don’t really understand. To me, the ability to explain a concept to a layperson serves as a litmus test to how well I understand the concept myself.
Maybe I’ll stay on this side of the kiddy pool, encouraging spectators to dip their toes in and see what the water is like, while the more epistemologically intrepid go off and navigate the deep oceans...
But if, as this talk suggests, it’s not obvious whether donating to near term interventions is good or bad for the world, why are you interested in whether you can pitch friends and family to donate to them?
My rough framing of “why pitch friends and family on donating” is that donating is a credible commitment towards altruism. It’s really easy to get people to say “yeah, helping people is a good idea” but really hard to turn that into something actionable.
Even granting that the long term and thus actual impact of AMF is uncertain, I feel like the transition from “typical altruistic leaning person” to “EA giver” is much more feasible, and sets up “EA giver” to “Longtermist”. Once someone is already donating 10% of their income to one effective charity, it seems easier to make a case like the one OP outlined here.
I guess one thing that would change my mind: do you know people who did jump straight into longtermism?
I totally understand this motivation and I’m currently doing the same.
I’m a little worried that it’s hard to do this with integrity though. Maybe if you are careful with what you say (e.g. “Cheapest way to save a life” rather than “Most effective way to do good”) you can get away without lying, but if you really believe the arguments in the talk it still starts to feel like dangerous territory to me.
I'm reposting this comment from my own post here, in case anyone finds it relevant.
In a sense, I agree with many of Greaves’ premises but none of her conclusions. I do think we ought to be doing more modeling, because there are some things that are actually possible to model reasonably accurately (and other things not) (a mixture of Response 1 and 3).
Greaves says an argument for longtermism is, “I don’t know what the effect is of increasing population size on economic growth.” But we do! There are times when it increases economic growth, and there are times when it decreases it. There are very well-thought-out macro models of this, but in general, I think we ought to be in favor of increasing population growth.
She also says, “I don’t know what the effect [of population growth] is on tendencies towards peace and cooperation versus conflict.” But a similar thing to say would be, “Don’t invent the plow or modern agriculture, because we don’t know whether they’ll get into a fight once villages have grown big enough.”
Her argument distresses me so much, because it seems that the pivotal point is that we can no longer agree that saving lives is good, but rather only that extinction is bad. If we can no longer agree that saving lives is good, I really don’t know what we can agree upon.
There is also improving lives.
What’s the decision theory here?
Consider a two-action, two-period model: we know the effect of action A1 in t1, but not in t2; but we know the effect of A2 in both periods. Is the suggestion to do A2 (rather than A1) because we have more information on the effect of A2?
Isn’t Response 5 (go longtermist) really a subset of Response 4 (Ignore things that we can’t even estimate)? It proposes to ignore shorttermist interventions, because we can’t estimate their effects.
It’s not ignoring them, it’s selecting interventions which look more robustly good, about which we aren’t so clueless.
Is the idea that once these longtermist interventions are fully funded (diminishing returns), we then start looking at short-term interventions?
I think the claim is that we don’t know that any short-termist interventions are good in expectation, because of cluelessness.
For what it’s worth, I don’t agree with this claim; this depends on your specific beliefs about the long-term effects of interventions.
Also, this seems like a bad decision theory. I can’t estimate the longterm effects of eating an apple, but that doesn’t imply that I should starve due to indecision.
Longtermism wouldn’t say you should die, just that, unless you know more, it wouldn’t say that you shouldn’t die either.
You can’t work on longtermist interventions if you die, though, and doing so might be robustly better than dying.
Is this longtermism?
List all possible actions {A1,..,AK}.
For each action Aj, calculate expected value Vt(Aj) over t=1:∞, using the social welfare function.
If we can’t calculate Vt for some t, due to cluelessness, then skip over that action.
Out of the remaining actions, choose the action with the highest expected value.
Or, (3′): if we can’t calculate Vt for Ai and Aj, then assume that they’re equal, and rank them by using their expected value over periods before t.
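A minimal sketch of the decision procedure proposed in this comment, with hypothetical placeholder names (value, horizon) and None standing in for "cannot calculate Vt"; this is just one reading of steps 1 to 4, not a claim about how longtermists actually decide:

```python
def choose_action(actions, value, horizon):
    """actions: list of candidate actions.
    value(a, t): expected value of action a in period t, or None if it
    cannot be estimated (cluelessness about that period).
    horizon: number of periods considered (a finite stand-in for t = 1..inf)."""
    best_action, best_ev = None, float("-inf")
    for a in actions:
        period_values = [value(a, t) for t in range(1, horizon + 1)]
        if any(v is None for v in period_values):
            continue  # step 3: skip actions with any inestimable period
        ev = sum(period_values)  # step 2: total value across periods
        if ev > best_ev:
            best_action, best_ev = a, ev
    return best_action  # step 4: highest expected value among the rest
```

Variant (3′) would instead compare such actions on the periods that can be estimated, rather than skipping them outright.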
So longtermism is not a general decision theory, and is only meant to be applied narrowly?
Longtermism is the claim (or thesis) that we can do the most good by focusing on effects going into the longterm future:
https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism/
A summary of this talk is available here.
In the same way that an organism tries to extend the envelope of its homeostasis, an organization has a tendency to isolate itself from falsifiability in its core justifying claims. Beware those whose response to failure is to scale up.
What is this referring to?