Each individual could come packed with an emergency survival strategy. Just as our reptilian brain kicks in when we sense danger, our in-group favouritism and parochialism would kick in when we sense existential catastrophe, like an ant colony that produces more queens whenever it senses a threat. Society would become a metamorphic superorganism with local fluctuations: it would rapidly split into divergent groups when existential risk increases, only to merge again whenever the risk lowers.
Wait, I thought the entire point of enforcing global altruism was to guard against existential risks? Maybe, depending on the nature of the risk, we either do or don't want global altruism?
Anyway, I had a hard time understanding much of this post… simpler language and more concrete examples might have been helpful. I was reminded of this recent article by Steven Pinker on problems with academic writing in general. I recently took the time to gather some thoughts on clear writing if you’re interested. Pinker also recently wrote a book about how to write well which might be good. I suspect that writing clearly and simply could be an easy way to get noticed in academia where most writing is terrible.