This is still under very active development, but the GitHub repository is here, and a toy version of what we’d like to produce (with better estimates) is here, as an R Shiny app.
This is really fantastic, and it seems like there is a project here that could be done as a larger collaboration, building off of this post.
It would be a significant amount of additional work, but it seems very valuable to list resources relevant to each question, especially since some seem important but have already been partly addressed. (For example, re: estimates of natural pandemic risks, see my paper, and then Andrew Snyder-Beattie’s paper.)
Given that, would you be interested in having this put into a Google Doc and inviting people to collaborate on a more comprehensive overall long-termist research agenda document?
This sounds somewhat related to something 1DaySooner is just starting: a risk model for an HCT (human challenge trial), which will look at the risk of death, and hopefully also of long-term disability. Ideally, it would also consider the probability conditional on rescue therapies being available or becoming available. To do that, we’re focusing on a population subset, but the model is based on data that includes multiple age groups, so extending it should be straightforward.
This model could likely be plugged into models for the other components of the risk (isolation, etc.), so it might be useful to collaborate. It’s also an important project in its own right, so if people are interested in working with us on it, I’d be happy to have more volunteers familiar with R and data analysis.
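To make the shape of this concrete, here’s a minimal sketch of the kind of age-stratified model I mean. Everything in it (the numbers, the log-linear form, and the rescue-therapy adjustment) is an illustrative assumption, not our actual model or data:

```r
# Illustrative sketch only: made-up IFR numbers, not real estimates.
ifr_data <- data.frame(
  age_mid = c(22, 30, 40),          # midpoints of hypothetical age bands
  ifr     = c(3e-5, 1e-4, 3e-4)     # assumed infection fatality risks
)

# Assume (for illustration) that log-IFR is roughly linear in age,
# so the model can be extended beyond the focal population subset.
fit <- lm(log(ifr) ~ age_mid, data = ifr_data)

risk_at_age <- function(age, rescue_rr = 0.5) {
  # rescue_rr is a placeholder relative risk, conditional on rescue
  # therapies being available; the real value would need to be estimated.
  base <- exp(predict(fit, newdata = data.frame(age_mid = age)))
  c(without_rescue = unname(base),
    with_rescue    = unname(base) * rescue_rr)
}

risk_at_age(25)  # illustrative risk estimates for a 25-year-old participant
```

The real model would obviously need uncertainty intervals and better-sourced inputs, but this plug-in structure is what would make it composable with models of the other risk components.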
I’ll speak for the consensus when I say that there’s no clear way to decide whether this is correct without actually doing it, and the outcome would depend a lot on how much engagement the superforecasters had already had with these ideas. (If I got to pick the 5 superforecasters, even excluding myself, I could guarantee the result came out closer to either FHI’s viewpoints or Will’s.) Even picking from a “fair” reference class, if I could have them spend 2 weeks at FHI talking to people there, I think a reasonable proportion would be convinced, though perhaps this is less a function of updating neutrally towards correct ideas than of the way consensus emerges in groups.
Lastly, I have tremendous respect for Will, but I don’t know that he’s particularly well calibrated for a prediction like this. (Not that I know he isn’t; I just don’t have any reason to think he’s spent much time building that skillset.)
Yes, but it is hard, and they don’t work well. They can, however, be done at least slightly better.
Good Judgment was asked to forecast the risk of a nuclear war in the next year, which helps somewhat with the time-frame question. Unfortunately, the Brier-score incentives are still really weak.
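For reference, the Brier score is just the mean squared error of probability forecasts:

$$\text{BS} = \frac{1}{N}\sum_{i=1}^{N} (f_i - o_i)^2,$$

where $f_i$ is the forecast probability and $o_i \in \{0, 1\}$ is the outcome. To see why the incentives are weak for rare events, suppose the true annual probability of nuclear war is around 1%: always forecasting 0% has an expected score of about 0.01, while an honest 1% forecast scores about 0.0099. The gap ($p^2$, here 0.0001) is tiny, so the score barely rewards careful work on the question.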
Ozzie Gooen and others have talked a lot about how to make forecasting better, and some of the ideas he has suggested relate to forecasting longer-term questions. I can’t find a link to a public document, but here’s one example (which may have been someone else’s suggestion):
You ask people to forecast what probability people will assign in 5 years to the question “Will there be a nuclear war by 2100?” (You might also ask whether there will be a nuclear war in the next 5 years, of course.) Using this trick, the question(s) resolve in 5 years, and you get an approximate answer based on iterated expectation. Extending this, you can also have them predict what probability people will assign in 5 years to the probability they will assign in another 5 years to the question “Will there be a nuclear war by 2100?”, and by chaining predictions like this, you can transform very long-term questions into a series of shorter-term questions.
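To spell out why the chaining works, at least in an idealized model where forecasters report honest expectations: if $p_s$ is the probability the forecasting community assigns at time $s$, then by the law of iterated expectations,

$$p_t = \mathbb{E}_t\!\left[p_{t+5}\right] = \mathbb{E}_t\!\left[\mathbb{E}_{t+5}\!\left[p_{t+10}\right]\right] = \cdots = \mathbb{E}_t\!\left[\mathbf{1}\{\text{nuclear war by } 2100\}\right].$$

Each link resolves within 5 years, but the chain as a whole still tracks the 2100 question in expectation. The obvious caveat is that any systematic bias in how forecasters predict future forecasts compounds along the chain.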
There is other work in this vein, but to simplify, all of it takes the form “can we do something clever to slightly reduce the issues with the fundamentally hard problem of getting short-term answers to long-term questions?” As far as I can see, there are no simple answers.
I disagree somewhat on a few things, though I’m not strongly skeptical of any of these points. A few considerations about these issues:
Re: stable long-term despotism, you might look into the idea of “hydraulic empires” and their stability. I think that short of a similar monopoly, or a global singleton, other systems are unstable enough that they should evolve towards whatever is optimal. However, nuclear weapons, if developed early enough by one state, could also have created a quasi-singleton. And I think the Soviet Union was actually less stable than it appears in retrospect, except for its nuclear monopoly.
I do worry that some forms of central control would be more effective at creating robust technological growth given clear tech ladders than uncontrolled competition in market economies, since markets are better at the explore side of the explore-exploit spectrum, and dictatorships are arguably better at exploitation. (In more than one sense.)
Re: China, the current level of technology is stabilizing the regime’s otherwise fragile control of the country. I would be surprised if similar stability were possible longer term without either a hydraulic empire, per above, or similarly invasive advanced technologies, meaning that such stability would only arrive fairly late. Faster technology development might make this more likely.
In retrospect, 1984 seems far less worrying than a Brave New World-style anti-utopia. (But it’s unclear that lots of happy people guided centrally is actually as negative as it is portrayed, at least according to some versions of utilitarianism.)
“The right question” has two components: first, the thing you’re asking about should be closely related to what you actually want to know; second, it should be a clear and unambiguously resolvable target. These are often in tension with each other.
One clear example is COVID-19 cases: you probably care about total cases much more than confirmed cases, but confirmed cases are much easier to use as a resolution criterion. You can write more complex questions to try to deal with this, but that makes them harder to forecast. Forecasting excess deaths, for example, gets into whether people are more or less likely to die in car accidents during COVID-19, and whether COVID reduction measures also blunt the spread of influenza. And forecasting retrospective antibody-positive population percentages runs into issues with sampling, test accuracy, and the timeline for when such estimates are made, not to mention relying on data that might not have been gathered by the time you want to resolve the question.
I think that as you forecast across different domains, common themes start to emerge. And I certainly find that my calibration is off when I feel personally invested in the answer.
How does the distribution of skill versus hours of effort look in forecasting, for you?
I would say there’s a sharp cutoff: you need a minimal level of understanding (which seems to be fairly high, though it certainly doesn’t require being in, say, the top decile). After that, it’s mostly effort, plus skill that is gained via feedback.
I already said I’d stop messing with him now.
I’m very uncertain about the details and have low confidence in all of these claims we agree on, but I agree with your overall assessment.
I’ve assumed that while the speed changes, the technology tree is fairly unalterable: you need good metals and the like to make many things through 1800s-level technology, you need large-scale industry to make good metals, and so on. But that’s low confidence, and I’d want to think about it more. (This paper looks interesting: http://gamestudies.org/1201/articles/tuur_ghys.)
Regarding political systems, I think that market economies with some level of distributed control, and political systems that allow feedback in somewhat democratic ways, are social technologies we have no clearly superior alternatives to, despite centuries of thought. I’d argue that Fukuyama was right in The End of History about the triumph of democracy and capitalism; it’s just that the end state seems to take longer to arrive than he assumed.
And finally, yes, the details of how these technologies and social systems play out in terms of cosmopolitan attitudes and the societal goals they reflect are much less clear. In general, I think humans are far more culturally plastic than people assume, and very different values are possible and compatible with flourishing in the general sense. But (if it were possible to know the answer) I wouldn’t be too surprised to find that nearly fixed tech trees plus nearly fixed social-technology trees make cosmopolitan attitudes a very strong default, rather than an accidental, contingent reality.
I was focusing on “how much similarity we should expect between a civilization that has recovered and one that never collapsed in the first place,” and I was saying that the degree of similarity in terms of likely progress is low, conditional on any level of societal memory of the idea that progress is possible, and on knowing (or seeing artifacts of the fact) that there were once billions of people who had flying machines and instant communication.
I think there’s a clear counterargument, which is that the central ingredient missing when technologies were first developed was awareness that progress in a given area was possible. Unless almost literally all knowledge is destroyed, a recovering civilization doesn’t have this problem.
(Note: this seems to be a consensus view among people I talk to who have thought about collapse scenarios, but I can claim that only very loosely, based on a few conversations.)
You still seem confused. You say your views are controversial, as if this community doesn’t allow for and value controversial opinions, and you assume the problem was the claims you made. That is not the case. Hopefully this comment is clear enough to explain.
1. This was a low-effort post. It was full of half-formed ideas, had neither a title nor an introduction that related to the rest of the post, and lacked a clear conclusion. The sentences were not complete, and it clearly wasn’t checked for grammar.
2. Look at successful posts on the forum. They contain full sentences, have a clear topic with clearly explained thoughts, and engage with past discussion. It’s important to notice the standards in a given forum before participating. In this case, you didn’t bother looking at other posts or understanding the community norms.
3. You have not engaged with other posts, and may not have even read them. Your first attempt to post or comment reflects that lack of broader engagement. You have no post history to make people think you have given this any thought whatsoever.
4. Your unrelated comments link to your other irrelevant work, which seems crass.
I think 30 years is an overstatement, though it’s hard to quantify. However, I can think of a few things that make me think this gap is likely to exist and to be significant in cryptography, and even more specifically in cryptanalysis. For hacking, the gap is clearly smaller, but still nontrivial; perhaps 2 years.
Maybe this wasn’t your intent, but the title is a bit ambiguous about the word “inspire”—it seems as though you might be advocating for actions that inspire disasters, as opposed to making the case for allowing disasters that are themselves inspiring.
Regarding 3: no, it’s unclear, and depends on the specific animal, what we think their qualia are like, and the specific classes of experience you think are valuable.
It’s a bit more complex than that. If you think animals can’t anticipate pain, or can anticipate it but cannot understand the passage of time, or understand that pain might continue, you could see an argument for animal suffering being less important than human suffering.
So yes, this could go either way—but it’s still a reason one might value animals less.
When sources of judgment are highly correlated, an outside source with independent variance is usually more valuable, even if its mean quality is slightly lower. We should actively look for additional sources of high-value variance. And we often see that smart people outside of EA have valuable criticisms, once we get past the instinctive “we’re being attacked” response.
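One way to make the correlation point concrete, using a standard statistical identity rather than anything EA-specific: for $n$ equally weighted sources with common variance $\sigma^2$ and pairwise correlation $\rho$, the variance of their average is

$$\operatorname{Var}(\bar{X}_n) = \frac{\sigma^2}{n}\big(1 + (n-1)\rho\big) \xrightarrow{\;n \to \infty\;} \rho\,\sigma^2.$$

No number of highly correlated sources pushes the error below $\rho\sigma^2$, while even one genuinely independent source breaks that floor. That is the sense in which an uncorrelated outside view can be worth more than a marginally better inside one.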
1) Differences in, or uncertainty about, the moral relevance of different qualia.
It’s unclear that physical pain is the same experience for humans, cats, fish, and worms.
Even if it is the same mental experience, the moral value may differ due to the lack of memory or higher brain function. For example, I think there’s a good argument that pain that isn’t remembered, for instance because of scopolamine, is (still morally relevant but) less bad than pain that is remembered. Beings incapable of remembering or anticipating pain would have intrinsically less morally relevant experiences, perhaps far less.
2) Higher brain function as a relevant factor in assessing the moral badness of negative experiences.
I think that physical pain is bad, but considered in isolation, it’s not the worst thing that can happen. Suffering includes the anticipation of something bad, the memory of it occurring, the appreciation of time and lack of hope, and so on. People would far prefer an hour of pain knowing it will end then, over an hour of pain without knowing when it will end. They’d also prefer to know when the pain will occur, rather than have it be unexpected. These factors seem to significantly change the moral importance of pain, perhaps even by orders of magnitude.
3) Different value due to potential for positive emotion.
If joy and elation are only possible for humans, humans may have a higher potential for moral value than animals. This would be true even if their negative potential were the same. In such a case, we might think that the loss of potential was morally important, and say that the death of a human, with the potential for far more positive experience, is more morally important than the death of an animal.
I strongly feel this is incorrect. Coordination is incredibly expensive, is already a major pain point and source of duplication and wasted effort, and having lots of self-directed go-getters will make that worse.