One interesting implication of this theory is that the spread of strict utilitarian philosophies would be a contributing factor to existential risk. The more people are willing to bite utilitarian bullets, the more likely it is that one will bite the “kill everyone” bullet.
This would make the EA movement potentially existentially dangerous. Even if we don’t agree with the human extinction radicals, people might split off from the movement and end up supporting extinction. One interpretation of the FTX affair was that it was a case of seemingly EA-aligned people splitting off to do unethical things justified by utilitarian math.
One interesting implication of this theory is that the spread of strict utilitarian philosophies would be a contributing factor to existential risk. The more people are willing to bite utilitarian bullets, the more likely it is that one will bite the “kill everyone” bullet.
Can you go into more detail about this? Utilitarians and other people with logically/intellectually precise worldviews seem to be pretty consistently against human extinction; whereas average people with foggy worldviews tend to randomly flip in various directions depending on what hot takes they’ve recently read.
Even if we don’t agree with the human extinction radicals, people might split off from the movement and end up supporting extinction.
Most human extinction radicals seem to emerge completely separate from the EA movement and never intersect with it, e.g. AI scientists who believe in human extinction. If people like Tomasik or hÉigeartaigh ever end up pro-extinction, it’s probably because they recently did a calculation that flipped them to prioritizing s-risk over x-risk, but sign uncertainty and error bars remain more than wide enough to keep them in their network with their EV-focused friends (at minimum, because of the obvious possibility that another calculation flips them right back).
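To illustrate the sign-uncertainty point with a toy model (the numbers below are invented for illustration, not anyone’s actual estimates): even if an update flips the point estimate of the future’s net value from slightly positive to slightly negative, wide error bars mean the probability that the true sign is positive barely moves, so the calculation doesn’t license drastic action, and the next update can flip it right back.

```python
# Toy model of sign uncertainty (all numbers invented for illustration):
# a point estimate of the net value of the long-term future flips from
# slightly positive to slightly negative, but with wide error bars the
# probability that the true value is positive barely changes.
from statistics import NormalDist

def prob_positive(mean: float, sd: float) -> float:
    """P(true net value > 0) under a normal model of our uncertainty."""
    return 1 - NormalDist(mean, sd).cdf(0)

before = prob_positive(mean=+2.0, sd=10.0)  # mildly optimistic point estimate
after = prob_positive(mean=-2.0, sd=10.0)   # point estimate flipped negative

print(round(before, 2), round(after, 2))    # ~0.58 vs ~0.42: still close to a coin flip
```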
One interpretation of the FTX affair was that it was a case of seemingly EA-aligned people splitting off to do unethical things justified by utilitarian math.
Wasn’t the default explanation that SBF/FTX was a purity spiral with no checks and balances, and that this, combined with the high uncertainty of crypto trading, left SBF psychologically predisposed to betting all of EA on his career instead of betting his career on all of EA? Powerful people tend to become power-seeking, and that’s a pretty solid prior in most cases.
Can you go into more detail about this? Utilitarians and other people with logically/intellectually precise worldviews seem to be pretty consistently against human extinction; whereas average people with foggy worldviews tend to randomly flip in various directions depending on what hot takes they’ve recently read.
Foggy worldviews tend to flip people around based on raw emotions, tribalism, nationalism, etc. None of these are likely to get you to the position “I should implement a long-term Machiavellian scheme to kill every human being on the planet”. The obvious point is that “every human on the planet” includes one’s family, friends, and country, so almost anyone operating on emotions will not pursue such a goal.
On the other hand, utilitarian math can get to “kill all humans” in several ways, just by messing around with different assumptions and factual beliefs. Of course, I don’t agree with those calculations, but someone else might. If we convince everyone on earth that the correct thing to do is “follow the math”, or “shut up and calculate”, then some subset of them will have the wrong assumptions, or incorrect beliefs, or just mess up the math, and conclude that they have a moral obligation to kill everyone.
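As a toy way of putting numbers on the worry (purely illustrative, not an estimate): if each person who fully commits to “shut up and calculate” independently has some tiny probability p of arriving at the omnicidal conclusion through bad assumptions or botched math, the chance that at least one of n such people does grows as 1 − (1 − p)^n, which climbs quickly with n.

```python
# Toy calculation (purely illustrative numbers): probability that at least one
# of n independent "strict calculators" reaches the extreme conclusion, given
# each has a small per-person probability p of doing so.

def p_at_least_one(p: float, n: int) -> float:
    """P(at least one of n independent calculators bites the bullet)."""
    return 1 - (1 - p) ** n

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9}: {p_at_least_one(1e-4, n):.4f}")
# -> 100: 0.0100, 10_000: 0.6321, 1_000_000: 1.0000
```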