Ah sorry, it’s a bit of a linguistic shortcut. I’ll try my best to explain more clearly:
As David says, it’s an idea from Chinese history. Rulers used the concept to legitimise their hold on power: Tian (Heaven) would bestow a ‘right to rule’ on the virtuous ruler. Conversely, rebels and usurpers often invoked the same concept to justify their actions, typically by claiming the current rulers had lost Heaven’s mandate.
Roughly, I’m using this to analogise to the state of EA, where AI Safety and AI x-risk have become an increasingly large/well-funded/high-status[1] part of the movement, especially (at least apparently) amongst EA leadership and the organisations that control most of the funding and community decisions.
My impression is that this was a genuine shift in consensus and ideology amongst EA leadership (as opposed to an already-held belief they concealed and then revealed in a bait-and-switch), but that many ‘rank-and-file’ EAs simply deferred to these people rather than considering the arguments deeply.
I think many of the scandals/bad outcomes/bad decisions/bad vibes around EA in recent years, and at the moment, can be linked to this turn towards the overwhelming importance of AI Safety. As EffectiveAdvocate says below, I would like that part of EA to reduce its relative influence and power over the rest of the movement, and for rank-and-file EAs to stop deferring on this issue especially, but also in general.
[1] I don’t like this term, but again, I think people know what I mean when I say this.