The last chapter of Global Catastrophic Risks (Bostrom and Ćirković) covers global totalitarianism. Among other things, it mentions how improved lie-detection technology, anti-aging research (to mitigate the risk of regime change at a dictator's death), and drugs that increase docility in the population could plausibly make a totalitarian system permanent and stable. Obviously an unfriendly AGI could easily do this as well.
That’s an interesting point about prediction markets. We individuals tend to invest in the stock market even when we know the market as a whole is wiser than we are individually, because on the whole the market goes up, and anyway there are ways to track overall market performance. For prediction markets, I suppose there would need to be similar incentives; otherwise every individual without special information would do best by simply predicting whatever the overall market predicts, which adds no new information.
I’m guessing I just don’t understand how prediction markets work. Hoping someone will correct me.
For people who know how politics works: are petitions ever effective? Or writing letters to people-who-matter? Or something else?
Nitpick: On the “How” tab of the site, it should be “Humanity’s autonomy”, not “Humanities autonomy”.
And you’re right, I want to apologize for my partisan framing of an issue which really need not be partisan. Paper ballots should be required regardless of the current situation, at least from what I understand.
This is an excellent post. I’ve been struggling myself to understand to what extent deontological values and the inherent irrationality of humans need to be factored into consequentialist decision making. I’ve become more and more convinced that values and social norms matter much more than I had previously thought.
If we pull the camera back far enough, my guess is that in a generation or two America will be on a good track again, so long as Trump doesn’t start a war or use our nuclear warheads. As the White House Press Secretary said, the institutions of the U.S. have survived a civil war, two world wars, and the Great Depression. This will be a bad four years with adverse consequences for the rest of the world: Putin will be on the offensive, both on the ground and in cyberspace, and U.S. carbon emissions will increase. But I still believe AI risk is the most dangerous threat to humanity.
Hey! College student here, studying math and Russian. What’s the best way I can help get EA to catch on in Russia? VK posts?