P(simulation | seems like HoH) >> P(not-simulation | seems like HoH)
Disagree: as a software engineer, my prior for the simulation hypothesis is extraordinarily low because common sense and the laws of physics indicate convincingly that we don’t live in a simulation. (The only plausible exception is if I am the only person in the simulation.)
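To illustrate why a sufficiently low prior can dominate even strong evidence, here is a toy Bayes-rule calculation. Every number below is a made-up assumption for illustration, not an estimate from this discussion:

```python
# Toy Bayes-rule sketch: even if "seems like HoH" is far more likely
# under the simulation hypothesis than not, an extraordinarily low
# prior on simulation can keep the posterior low.
# All numbers are hypothetical placeholders.

prior_sim = 1e-9          # hypothetical: extraordinarily low prior on simulation
prior_not = 1 - prior_sim

lik_hoh_given_sim = 0.9   # hypothetical: simulations usually look hingey
lik_hoh_given_not = 1e-3  # hypothetical: genuine HoH moments are rare

# Bayes' rule: P(simulation | seems like HoH)
posterior_sim = (lik_hoh_given_sim * prior_sim) / (
    lik_hoh_given_sim * prior_sim + lik_hoh_given_not * prior_not
)

print(f"P(simulation | seems like HoH) = {posterior_sim:.2e}")
```

With these placeholder numbers, the evidence raises the probability of simulation by roughly three orders of magnitude, yet the posterior stays far below 1, which is the shape of the argument: the inequality in the quoted claim only settles the matter if the prior is not too small.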
I like Toby’s point: it seems the prior on “one person’s influence over the future” should decrease over time, and the point that a significant fraction of all cognitively modern humans who have ever lived are alive today is well taken.
Meanwhile, on the topic of “having the prerequisite knowledge necessary to positively impact the long-term future”: that quantity has been increasing over time, particularly in the last century, given developments in science, philosophy, rationality, etc., and it will almost certainly continue to increase in the coming centuries, provided civilization survives that long. Therefore, given how much society has neglected X-risks and civilization-destroying risks, this point in time seems very hingey, in the sense that we can probably already take actions that predictably and non-negligibly affect cataclysmic risk levels, and these actions may determine whether society survives long enough to reach a future in which our cluelessness is reduced and our knowledge and values are improved.
Something I didn’t see mentioned in the discussion above is that hingeyness may be unclear even in hindsight. Before the 19th century, there is an argument that one could have little impact on the future unless one was, say, Isaac Newton, and even then one’s impact was perhaps just to bring science to people a little earlier than it would have arrived otherwise. But which is more hingey, the 19th century or the 20th? When it comes to X-risks, there was no atomic bomb until after modern physics was discovered in the early 20th century, and therefore no MAD cold war; no risk of superbugs until modern medicine; and so on. When it comes to risks to civilization, the 20th century seems more hingey than the 19th, but on other questions (such as when the best time to be a scientist or engineer was) it is less obvious.
Certain early choices had a lot of impact. A classic example is the QWERTY keyboard; on the other hand, that layout was the choice of just one or two people, a choice no one else could have influenced. This points to a general problem with the 19th century: opportunities to have an impact were rare, because there was, for example, no government funding for science. A successor layout like Dvorak, by contrast, could have been designed by vastly more people, so I wonder whether things could have gone differently: what if someone had gone with the flow, as I did with my own keyboard design; would it have sold better? What if it had been sold in the 1920s instead of the 1930s? Or consider Esperanto: almost anyone could design a language. I’ve heard that Esperanto was largely forgotten when WWI broke out, but what if an Allied commander had known about it and observed that troops could communicate better with a common language? If we had a common language today, the world would surely be different. It’s hard to be sure it would be better, but today many people must spend vast amounts of time learning English before they can meaningfully affect the course of history.
So I’d say that overall the 20th century was much more hingey, though it’s hard to see how to assign credit. Do we credit scientists for what they discovered, politicians for the policies that created funding for science, public servants for how they ran new institutions, lawyers for the important cases they argued, activists for helping influence the elections that led to policy, engineers for what they built, or the companies that funded the engineers? And what if communist China ultimately has the greatest impact, either by precipitating another world war or by overturning democracy and free speech in favor of an authoritarian global regime in which the definition of truth is chosen by the leadership?
So generally I think the knowledge we gather in the future will be crucial for our long-term future, but the things we do today will lay the foundation for that future, and perhaps this is the best thing to focus on: laying down a good foundation.
Each of us can contribute in our own way. As a software engineering veteran, I hope to contribute by designing foundational software, which could act as an accelerator that brings benefits of the future to the present more quickly (my impact is no doubt eclipsed, however, by Steve Krouse of Future of Coding, who succeeded where I failed in building a community, or by Bret Victor, who has inspired countless people). If you work in medicine, you might work on containing the risk of superbugs; if in politics, there are any number of causes that might help build a stable and prosperous world. We may be clueless now, but there are things we know, like: stability and prosperity good, war and catastrophe bad. And while rationalism is in its infancy, I think we have enough epistemological tools to point us in the right directions. (My life might have gone quite differently if I had discovered rationalism and EA, and left my religion, fifteen years earlier!)
In any case, I’m not sure why we should be so concerned with how hingey this century is; it’s probably more hingey than the last century, and either way we have to play the hand we’re dealt. We are clueless about a great many things, but not about everything, which suggests a two-pronged course of action: first, work on reducing cluelessness (and on figuring out how to act in the face of cluelessness); second, help the future in ways we can already understand, such as by reducing catastrophic risks.