Lukas Finnveden
Research analyst at Open Philanthropy. All opinions are my own.
However, this trick will increase the total suffering in the multiverse, from the purely utilitarian perspective, by 1000 times, as the number of suffering observer-moments will increase. But here we could add one more moral assumption: “Very short pain should be discounted”, based on the intuition that 0.1 seconds of intense pain is bearable (assuming it does not cause brain damage)—simply because it will pass very quickly.
I’d say pain experienced for 0.1 seconds is about 10 times less bad than pain experienced for 1 second. I don’t see why we should discount it any further than that. Our particular human psychology might be better at dealing with injury if we expect it to end soon, but we can’t change what the observer-moment S(t) expects to happen without changing the state of its mind. And if we change the state of its mind, it’s not a copy of S(t) anymore, and the argument fails.
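To make the arithmetic concrete, here is a toy calculation under linear duration-weighting. I’m assuming, as I read the quote, that the trick replaces a single 1-second pain with roughly 1000 copies each lasting 0.1 seconds; that reading may not match the post’s exact setup.

```python
# Toy arithmetic, not the post's exact model: compare one 1-second pain episode
# with 1000 copies each lasting 0.1 seconds, under linear duration-weighting.
copies = 1000
original_badness = 1.0            # one observer-moment, 1 second of pain
badness_per_copy = 0.1 / 1.0      # 0.1 s is ~10x less bad than 1 s
total_after_trick = copies * badness_per_copy
print(total_after_trick / original_badness)  # 100.0 -> still ~100x more total badness
```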
In general, I can’t see how this plan would work. As you say, you can’t decrease the absolute number of suffering observer-moments, so it won’t do any good from the perspective of total utilitarianism. The closest thing I can imagine is to “dilute” pain by creating similar but somewhat happier copies, if you believe in some sort of average utilitarianism that cares about identity. That seems like a strange moral theory, though.
I remain unconvinced, probably because I mostly care about observer-moments, and don’t really care what happens to individuals independently of this. You could plausibly construct some ethical theory that cares about identity in a particular way such that this works, but I can’t quite see how it would look yet. You might want to make those ethical intuitions as concrete as you can, and put them under ‘Assumptions’.
Seems like there’s still self-selection going on, depending on how much you think ‘a lot’ is, and how good you are at finding everyone who has thought about it that much. You might be missing out on people who thought about it for, say, 20 hours, decided it wasn’t important, and moved on to other cause areas without writing up their thoughts.
On the other hand, it seems like people are worried about and interested in talking about AGI happening in 20 or 30 or 50 years’ time, so it doesn’t seem likely that everyone who thinks 10-year timelines are <10% likely stops talking about it.
Of course, a deep ecologist who sided with extinction would be hoping for a horrendously narrow event, between ‘one which ends all human life’ and ‘one which ends all life’. They’d still have to work against the latter, which covers the artificial x-risks.
I agree that it covers AI, but I’m not sure about the other artificial x-risks. Nuclear winter severe enough to eventually kill all humans would definitely kill all large animals, but some smaller forms of life would survive. And while bio-risk could vary a lot in how many species were susceptible to it, I don’t think anyone could construct a pathogen that affects everything.
I definitely expect that there are people who will lose out on happiness from donating.
Making it a bit more complicated, though, and moving out of the area where it’s easy to do research, there are probably happiness benefits of stuff like ‘being in a community’ and ‘living with purpose’. Giving 10 % per year and adopting the role of ‘earning to give’, for example, might enable you to associate life-saving with every hour you spend on your job, which could be pretty positive (I think feeling that your job is meaningful is associated with happiness). My intuition is that the difference between 10 % and 1 % could matter for being able to adopt this identity, but I might be wrong. And a lot of the gains from high incomes probably come from increased status, which donating money is a way to get.
I’d be surprised if donating lots of money was the optimal thing to do if you wanted to maximise your own happiness. But I don’t think there’s a clear case that it’s worse than the average person’s spending.
To see how these two arguments rest on different conceptions of intelligence, note that considering Intelligence(1), it is not at all clear that there is any general, single way to increase this form of intelligence, as Intelligence(1) incorporates a wide range of disparate skills and abilities that may be quite independent of each other. As such, even a superintelligence that was better than humans at improving AIs would not necessarily be able to engage in rapidly recursive self-improvement of Intelligence(1), because there may well be no such thing as a single variable or quantity called ‘intelligence’ that is directly associated with AI-improving ability.
While I’m not entirely convinced of a fast take-off, this particular argument isn’t obvious to me. If the AI is better than humans at every cognitive task, then for every ability X that we care about, it will be better at the cognitive task of improving X. Additionally, it will be better at the cognitive task of improving its ability to improve X, etc. It will be better than humans at constructing an AI that is good at every cognitive task, and will thus be able to create one better than itself.
This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tiling the universe with paperclips.
This doesn’t seem very unlikely to me. As a proof of concept, consider a paper-clip maximiser able to simulate several clever humans at high speed. If it was posed a moral dilemma (and was motivated to answer it), it could perform above human level by simulating those humans (in a suitable situation where they are likely to produce an honest answer to the question) and directly reporting their output. However, it wouldn’t have to be motivated by the answer it reports.
Reports we’ve heard indicate that extrusion capacity is currently the limiting factor driving up costs for plant-based alternatives in the United States. As a result, we’d only want to pursue this path if we have strong reason to believe that our plant-based alternative was not displacing a better plant-based alternative in the market.
What’s the connection between extrusion capacity and not displacing better alternatives?
Since the post is very long, and since a lot of readers are likely to be familiar with some of the arguments already, I think a table of contents at the beginning would be very valuable. I sure would like one.
I see that it’s already possible to link to individual sections (like https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/#a-note-on-disvalue-focus) so I don’t think this would be too hard to add?
Carl’s comment renders this irrelevant for CEA lotteries, but I think this reasoning is wrong even for the type of lotteries you imagine.
In either one the returns are good in expectation purely based on you getting a 20% chance to 5x your donation (which is good if you think there’s increasing marginal returns to money at this level), but also in the other 80% of worlds you have a preference for your money being allocated by people who are more thoughtful.
What you’re forgetting is that in the 20 % of worlds where you win, you’d rather have been in the pool without thoughtful people. If you were, you would get to regrant 50k smartly, and a thoughtful person in the other pool would get to regrant 40k. However, if you were in the pool with thoughtful people, the thoughtful people wouldn’t get to regrant any money, and the 40k in the thoughtless group would go to some thoughtless cause.
When joining a group (under your assumptions, which aren’t true for CEA), you increase everyone’s winnings while decreasing the probability that they win. In expectation, they all get to regrant the same amount of money. So the only situation where the decision between groups matters is if you have some very specific ideas about marginal utility, e.g. if you want to ensure that there exists at least one thoughtful lottery winner, and don’t care much about the second.
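Here is a quick sketch of that expectation argument, with made-up contribution amounts (just an illustration of the type of lottery discussed above, not CEA’s actual setup):

```python
# Toy donor lottery: each contributor wins the whole pot with probability
# proportional to their contribution, and the winner regrants the entire pot.

def expected_regrant(contributions):
    pot = sum(contributions.values())
    # win probability (c / pot) times winnings (pot) = c, your own contribution
    return {name: (c / pot) * pot for name, c in contributions.items()}

others = {"thoughtful_A": 25_000, "thoughtful_B": 15_000}   # 40k pool before you join
print(expected_regrant(others))
print(expected_regrant({**others, "you": 10_000}))
# Everyone's expected regrant equals their own contribution, whether or not you join:
# you raise the pot to 50k but lower each other person's chance of winning it.
```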
Wealth almost entirely belongs to the old. The median 60-year-old has 45 times (yes, forty-five times) the net worth of the median 30-year-old.
Hm, I think income might be a better measure than wealth. I’m not sure what they count as wealth, since the link is broken, but a pretty large fraction of that gap may be due to the fact that 60-year-olds need to own their house and their retirement savings. If the real reason that 30-year-olds lack wealth is that they don’t need wealth, someone determined to give to charity might be able to gather money comparable to most 60-year-olds.
It’s 221 million neurons. Source: http://reflectivedisequilibrium.blogspot.com/2013/09/how-is-brain-mass-distributed-among.html
You might be thinking of fruit flies; they have 250k.
Given that the risk of nuclear war conditional on climate change seems considerably lower than the unconditional risk of nuclear war
Do you really mean that P(nuclear war | climate change) is less than P(nuclear war)? Or is this supposed to say that P(nuclear war and climate change) is less than the unconditional P(nuclear war)? Or something else?
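To separate the two readings with made-up numbers (purely illustrative, not estimates):

```python
# Made-up numbers, just to distinguish the two readings of the sentence.
p_climate = 0.5                 # P(climate change)
p_war_given_climate = 0.10      # P(nuclear war | climate change)
p_war = 0.12                    # unconditional P(nuclear war)

p_war_and_climate = p_war_given_climate * p_climate   # P(nuclear war and climate change) = 0.05

# Reading 1: P(nuclear war | climate change) < P(nuclear war)   -> 0.10 < 0.12, a substantive claim
# Reading 2: P(nuclear war and climate change) < P(nuclear war) -> 0.05 < 0.12, which always holds
```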
Quantifying anthropic effects on the Fermi paradox
I am not so sure about the specific numerical estimates you give, as opposed to the ballpark being within a few orders of magnitude for SIA and ADT+total views (plus auxiliary assumptions)
I definitely agree about some numbers. Maybe I should have been more explicit about this in the post, but I have low credence in the exact distributions of the quantities involved: they depend far too much on the absolute rate of planet formation and the speed at which civilisations travel.
However, I’m much more willing to believe that the average fraction of space that would be occupied by alien civilisations in our absence is somewhere between 30 % and 95 %, or so. A lot of the arbitrary assumptions that affect it cancel out when running the simulation, and the remaining parameters affect the result surprisingly little. My main (known) uncertainties are:
Whether it’s safe to assume that intergalactic colonisation is possible. From the perspective of total consequentialism, this is largely a pragmatic question about where we can have the most impact (which is affected by a lot of messy empirical questions).
How much the results would change if we allowed for a late increase in life more sudden than the one in Appendix C (either because of a sudden shift in planet formation or because of something like gamma-ray bursts). Anthropics should affect our credence in this, as you point out, and the anthropic update would be quite large in its favour. However, the prior probability of a very sudden increase seems small. That prior is very hard to quantify, and I think my simulation would be less reliable in the more extreme cases, so this possibility is quite hard to analyse.
Do you agree, or do you have other reasons to doubt the 30%-95% number?
This seems overall too pessimistic to me as a pre-anthropic prior for colonization
I agree that the mean is too pessimistic. The distribution is too optimistic about the impossibility of lower numbers, though, which is what matters after the anthropic update. I mostly just wanted a distribution that illustrated the idea about the late filter without having it ruin the rest of the analysis. The distribution after updating is almost exactly the same anyway, as long as the prior assigns negligible probability to sufficiently low numbers.
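As a generic illustration of that last point (a toy example with arbitrary numbers, not the post’s actual model): if two priors only differ in a region where both put negligible mass, any update gives nearly identical posteriors.

```python
# Toy example, not the post's actual model: posteriors are insensitive to the
# prior's shape in a region where it assigns negligible probability.
import numpy as np

x = np.linspace(0, 1, 1000)            # some quantity of interest
update = np.exp(3 * x)                 # stand-in for whatever update gets applied

def posterior(prior):
    post = prior * update
    return post / post.sum()

prior_a = np.where(x < 0.3, 1e-6, 1.0)       # negligible mass below 0.3
prior_b = np.where(x < 0.3, 1e-6 * x, 1.0)   # different shape below 0.3, still negligible

print(np.abs(posterior(prior_a) - posterior(prior_b)).max())  # ~0
```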
Good point, but this one has still received the most upvotes, if we assume that a negligible number of people downvoted it. At the time of writing, it has received 100 votes. According to https://ea.greaterwrong.com/archive, the only previous posts that received more than 100 points have fewer than 50 votes each. As far as I can tell, the second and third most voted-on posts are Empirical data on value drift at 75 and Effective altruism is a question at 68.
This suggests that for solar geoengineering to be feasible, all major global powers would have to agree on the weather, a highly chaotic system.
Hm, I thought one of the main worries was that major global powers wouldn’t have to agree, since any country would be able to launch a geoengineering program on its own, changing the climate for the whole planet.
Do you think that global governance is good enough to disincentivize lone states from launching a program, purely from fear of punishment? Or would it be possible to somehow reverse the effects?
Actually, would you even need to be a state to launch a program like this? I’m not sure how cheap it could become, or if it’d be possible to launch in secret.
As a problem with the ‘big list’, you mention:
2. For every reader, such a list would include many paths that they can’t take.
But it seems like there’s another problem, closely related to this one: for every reader, the paths on such a list could have different orderings. If someone has a comparative advantage for a role, it doesn’t necessarily mean that they can’t aim for other roles, but it might mean that they should prefer the role that they have a comparative advantage for. This is especially true once we consider that most people don’t know exactly what they could do and what they’d be good at—instead, their personal lists contain a bunch of things they could aim for, ordered according to different probabilities of having different amounts of impact.
In particular, I think it’s a bad idea to take a ‘big list’, winnow away all the jobs that look impossible, and then aim for whatever is at the top of the list. Instead, your personal list might overlap with others’, but have a completely different ordering (yet hopefully contain a few items that other people haven’t even considered, given that 80k can’t evaluate all opportunities, like you say).
Fantastic work! Nitpicks:
The last paragraph duplicates the second-to-last paragraph.
However, the beneficial effects of the cash transfer may be much lower in a UCT
Is this supposed to say “lower in a CCT”?
If I remember correctly, 80,000 Hours has stated that they think 15% of people in the EA Community should be pursuing earning to give.
I think this is the article you’re thinking about, where they’re talking about the paths of marginal graduates. Note that it’s from 2015 (though at least Will said he still thought it seemed right in 2016) and explicitly labeled with “Please note that this is just a straw poll used as a way of addressing the misconception stated; it doesn’t represent a definitive answer to this question”.
Neither the link in the text nor Chi’s links work for me. They all give 404s. I can’t find the data when looking directly at Peter’s GitHub either: https://github.com/peterhurford/ea-data/tree/master/data/2018