This post was incredibly well done. The fact that no similarly detailed comparison of AI risk charities had been done before you published this makes your work many times more valuable. Good job!
At the risk of distracting from the main point of this article, I’d like to highlight this quote:
> Xrisk organisations should consider having policies in place to prevent senior employees from espousing controversial political opinions on facebook or otherwise publishing materials that might bring their organisation into disrepute.
This seems entirely right, given society’s take on these sorts of things. I’d suggest that such policies should apply to EA-aligned organizations more widely, since a PR incident caused by one EA-related organization can generate fallout that affects both other EA-related organizations and the EA brand in general.