Do you see any specific examples where reducing other types of existential risks could increase quality risks?
Alexander
The only plausible argument I can imagine for de-prioritizing GCR reduction is if there are other activities out there that can offer permanent expected gains comparable in size to the permanent expected losses from GCRs.
Then I guess you don’t think it’s plausible that we can’t expect to make many permanent gains.
Why?
I hope we don’t get carried away with the art thing-I was just trying to steelman that guy’s response.
My main point was just to solicit ideas about how to help first-world folks. That’s not because I think you can save more first-world folks than developing-world folks: it’s because I accept greater concern with socially nearby people in my definition of altruism. On this site you guys don’t-and I accept that too. But I now wonder if your definition of effectiveness is so different from mine that we can’t even talk.
I felt like this post just said that the person had some idiosyncratic reasons they did not like EA, so they left. Well, great, but I’m not sure how that helps anyone else.
Here’s a thought I think is more useful. For a long time I have been talking anonymously about politics online. Lately I think this is pointless because it’s too disconnected from anything I can accomplish. The tractability of these issues for me is too low. So to encourage myself to think more efficiently, and to think mainly about issues I can do something about, I’m cutting out all anonymous online talk about big social issues. In general, I’m going to keep anonymous communications to a minimum.
Does anyone know if futures markets for crude oil exist on more than a 10-year time frame?
Okay. Do you see any proxies (besides other people’s views) that, if they changed in our lifetime, might shift your estimates one way or the other?
The effect of most disasters decays over time, but that does not mean a disaster big enough to end humanity is impossible. So I don’t see why the fact that most societal changes decay over time bears on whether large trajectory changes could happen. Maybe someday there will be a uniquely huge change.
Also, I don’t understand why Bostrom mentions a “thought” that all sufficiently good civilizations will converge toward an optimal track. This seems like speculation.
Here is a concern I have. It may be that reducing many types of existential risk, such as the risk of nuclear war, could lower economic growth.
How do we know that by avoiding war we are not increasing another sort of existential risk, or the risk of permanent economic stagnation? Depending on how much we want to risk a total nuclear war, human development on Earth might have many permanent equilibria.
I am wondering if someone can explain, or point me to a link on, why they think global poverty charity matters compared with policy. For example, one statistic from GWWC was that the Iraq war cost more than all government foreign aid from the developed world for 50 years, and I would guess that the war’s economic effects on Iraq were comparable to its costs. Also, African exports and imports are worth about $35 billion each but total US international charity (to all countries, not just Africa) was $19 billion in 2012, according to this source. This suggests to me that (from an EA standpoint) policies on trade alone are more important than charity.
Dylan Matthews: You talk about existential risks in your latest book — big threats that have a chance of wiping out all of humanity. Which of those, if you had to pick one or two, concerns you the most? Is there one where the story of how a disaster would unfold is particularly compelling?
Peter Singer: It’s not just that the disaster story is more compelling, but that there is a reasonably compelling story as to how we can reduce that risk. When it comes to collision with an asteroid, there is a reasonable story about how we could reduce that risk. First we need to discover whether asteroids are on a collision path, and NASA is already doing that, and then we would need to think about how we could deflect it from Earth. So that, I can kind of understand.
Some of the others, it’s hard to know exactly what we could do. Bioterrorism, I guess we can develop ways of making things more secure and making it harder for bioterrorists. But it’s not going to be easy to find exactly what the best strategy is. Things like the singularity — the takeover by artificial intelligence, or something like that — it’s very hard to see exactly, at this stage, anyway, what you could do that would reduce that risk. I don’t know.
Edit: I already doubted anyone here wanted to discuss non-cosmopolitan thoughts, so I just gave a link. The downvote suggests that only cosmopolitan ideas are tolerated here.
I am wondering if anyone has suggestions on where to volunteer one’s time (not money).
Has this been discussed much by EA people?
Upvoted. I also thought this might be relevant.
There may be non-scientific advances, or engineering advances (maybe coming from economies of scale), that you can take advantage of. I think an expert can predict that they will be taken advantage of and that, yes, prices will drop.
In fact there seems a decent chance from what I just read online that engineering alone can push the price of solar below current energy prices.
So I have to scrap my claim about prices.
Maybe some folks even claim you can get to 90% renewables usage with engineering alone, although this guy is not an expert. But I don’t have much faith in this guy compared with the expert study he criticizes, so I think that renewables cannot exceed half our energy generation without either new science or big rises in energy costs. (This is not only because of prices but because of intermittency.) And I think the chance of never getting new science is at least 5%.
The problem seems to be that, given the lag until engineering improvements peter out, you are talking about extremely long bets. We might be dead by then. Even so, I think Stuart Armstrong’s claim that we will not be constrained by energy is suspect.
If you save all or most of the resource, I agree the cost will be huge. But if you only save a little, if it could somehow be done securely for ages, I’d say it’s worth it as insurance.
Added 2/25-This last statement assumes there are actually uses for fossil fuels where it would be economical to pay 1/(5%) = 20 times as much as today’s market price. I don’t know if there are, or whether they would still exist in a future world.
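To spell out the arithmetic behind that 20x figure, here is a minimal sketch. The 5% probability comes from my estimate above; everything else is illustrative, not a real cost model:

```python
# Break-even reasoning for stockpiling as insurance: if the stockpile only
# pays off with probability p, then its use in that scenario must be worth
# at least 1/p times today's price for the expected value to break even.
# The 5% figure is the estimate from the comment above; it is an assumption.

def breakeven_multiplier(p_needed: float) -> float:
    """Return how many times today's price the future use must be worth."""
    return 1.0 / p_needed

print(breakeven_multiplier(0.05))  # 20.0 -- matches the 1/(5%) = 20 in the text
```

So the insurance argument stands or falls on whether any future use of fossil fuels would plausibly be worth 20 times the current market price.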
No useful model tries to analyze growth hundreds or thousands of years into the future. It is not really important for this topic, but you can read the link in my last message if you’re interested.
On the definition of scientific stagnation-a decent measure would be that productivity growth stops. (Maybe better engineering can also improve productivity without better science, though I don’t know if this makes a difference.)
Why is there a need for a disaster? Maybe cheap renewable energy turns out to be too hard a problem. Maybe making more big scientific discoveries in general is too hard. That would be a departure from trends, but trends have not continued long enough for us to have absolute confidence in them.
But I might not actually differ with your 1% figure by too much-I’d guess a 5% chance that renewable energy prices now are roughly as low as they will ever be.
Even if you say 1%, however, it’s not clear to me that enough is being done on this topic, since if that 1% materializes, the waste could be huge.
[deleted]
How low? And, the issue is not so much that stagnation will definitely happen but that other assumptions are random noise and we cannot do any better than to assume stagnation.
If we assume a high chance of stagnation, and also that all generations are morally equal, then I think we should cut back fossil fuel use, and employ it only when greatly needed, in order to maximize the total usefulness to future generations. That is, it’s better that 50 generations each use 1/50 of the fuel for vital, truly essential purposes than that one generation burns everything.
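To make the intuition concrete, here is a minimal sketch assuming diminishing marginal utility of fuel use. The square-root utility function is purely an illustrative assumption (any concave utility gives the same ranking), not an estimate of anything:

```python
import math

# With diminishing marginal utility, spreading a fixed stock of fuel over
# 50 equally weighted generations yields far more total utility than one
# generation burning it all. sqrt() is an arbitrary concave stand-in.

STOCK = 1.0        # total exhaustible resource, normalized to 1
GENERATIONS = 50

def utility(consumption: float) -> float:
    return math.sqrt(consumption)

burn_now = utility(STOCK)                              # one generation uses everything
spread = GENERATIONS * utility(STOCK / GENERATIONS)    # each generation uses 1/50

print(burn_now)  # 1.0
print(spread)    # about 7.07, i.e. sqrt(50)
```

Under these assumptions the equal split is roughly seven times better in total, which is the sense in which burning everything now wastes most of the resource’s value.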
Yes, you can increase the capital stock and devote more people to extraction. But under these assumptions, maybe more resource production, or extraction, is actually neutral or even bad.
One consideration is how much society would be harmed if we ran out of various resources. That’s the limiting case. Here’s a Quora question on fossil fuels that might be a start.
There seems to be a literature on intergenerational equity and exhaustible resources-probably there are answers there, if a person dug enough. :)
I happen to be quite skeptical of predicting science. Do you know what sort of conclusion you would reach about this post’s topic if we assume that, for the foreseeable future, science will not advance much? That is, a case of scientific stagnation.
Nice Vox article on climate change-I felt that the argument was robust. Climate change may not end civilization but if humans lose 5% of their vitality over the next 10,000 years, that is terrible.
Stuart Armstrong relies on the price of solar continuing to drop, right? Maybe it will, but I think we would be wise to plan for the case where it does not. Plus, what about storing the energy? Overall, I would just note that fossil fuels have been proved to be very useful but (without new tech) they will eventually become scarce. So I do think stockpiling them would be good if it could be done securely. But the storage time needed might be extremely long, maybe hundreds of years. On whether any sort of coordination or planning would be effective over that long a period, I am not too optimistic.
Even if growth were bad or neutral, there would have to be specific activities that were bad, and other activities that remained good. So how does this differ from just telling folks to look for ways that their society might hurt itself, or ways that they might be contributing to this antisocial behavior? There is a lot of disagreement about which behaviors, exactly, are antisocial.
I do worry that given enough time, industrialized countries will, um, self-destruct by using nuclear weapons. But in that case the remedy would probably not be giving up industrialization. That seems like too high a cost.
It’s also possible that growth may not be that important because growth is becoming much harder or impossible. But is it?
One point you make is that during the last 200 years growth has helped. Without strong evidence against it, it seems hard to make any assumption but that trends continue. So I think growth is good; growing societies will either be looked to and emulated by other groups that want the same rewards, or else powerful growing societies will just conquer other weaker ones. Either way, growth seems like the winning strategy.