Last nontrivial update: 2024-02-01.
Send me anonymous feedback: https://docs.google.com/forms/d/1qDWHI0ARJAJMGqhxc9FHgzHyEFp-1xneyl9hxSMzJP0/viewform
Any type of feedback is welcome, including arguments that a post/comment I wrote is net negative.
I’m interested in ways to increase the expected value (EV) of the EA community by mitigating downside risks from EA-related activities. Without claiming originality, I think that:
Complex cluelessness is a common phenomenon in the domains of anthropogenic x-risks and meta-EA (due to an abundance of crucial considerations). It is often very hard to judge whether a given intervention is net-positive or net-negative.
The EA community is made up of humans, and human judgement tends to be influenced by biases and self-deception. Considering the previous point, that is a serious source of risk.
Some potential mitigations involve improving aspects of how EA funding works, e.g. with respect to conflicts of interest. Please don’t interpret my interest in such mitigations as an accusation of corruption, etc.
Feel free to reach out by sending me a PM. I’ve turned off email notifications for private messages, so if you send me a time-sensitive PM, consider also pinging me about it via the anonymous feedback link above.
This comment was written quickly and can easily contain errors and inaccuracies.
I haven’t read the post, but here’s a model that may be useful:
Nationalism is not a naturally occurring phenomenon. It is a goal optimized for by NatSec elites (the people whom C. Wright Mills called “warlords”). In “democracies” that have a powerful NatSec community, nationalism can help NatSec elites gain more power by legitimizing conflict, and conflicts themselves can be extremely useful to those elites for the same purpose.
(Perhaps some researchers/leaders in AGI labs should be considered “NatSec elites” for the purposes of this comment.)