We expect e/acc to come across as “scary” to many EAs, although that’s not the goal. We think EA lacks focus and is missing a willingness to accept the terms of the deal in front of humanity — i.e. to be good stewards of a consciousness-friendly technocapital singularity or die trying.
Unlike EA, e/acc:
Doesn’t advocate for modernist technocratic solutions to problems
Isn’t passively risk-averse in the same way as EAs that “wish everything would just slow down”
Isn’t human-centric — consciousness is good as long as it’s flourishing
Isn’t in denial about how fast the future is coming
Rejects the desire for a panopticon implied by longtermist EA beliefs
Like EA, e/acc:
Is prescriptive
Values more positive valence consciousness as good
Values zero recognizable consciousness in the universe as the absolute worst outcome
I agree with some of these allegedly not-EA ideas and disagree with some of the allegedly EA ones (“more positive valence consciousness = good”). But I’m not sure the actual manifesto has anything to do with any of these.
It’s something that was recently invented on Twitter; here is the manifesto its proponents wrote: https://swarthy.substack.com/p/effective-accelerationism-eacc?s=w
It’s only believed by a couple of people afaict, and unironically maybe by no one (although this doesn’t make it unimportant!)