I worry that the pro-AI/slow-AI/stop-AI divide has the salient characteristics of a tribal dividing line that could tear EA apart:
“I want to accelerate AI” vs “I want to decelerate AI” is a big, clear line in the sand that allows for much clearer signaling of one’s tribal identity than something more universally agreeable like “malaria is bad”
Up to the point where AI either kills us or doesn’t, there is essentially no way, even in principle, to verify that one side or the other is “right”, which means everyone can keep arguing about it forever
The discourse around it is more hostile/less-trust-presuming than the typical EA discussion, which tends to be collegial (to a fault, some might argue)
You might think it’s worth having this civil war to clarify what EA is about. I don’t. I would like for us to get on a different track.
This thought prompted by discussion around one of Matthew_Barnett’s quick takes.
For what it’s worth, I really don’t think many EAs are in the AI accelerationist camp at least. Matthew Barnett seems fairly unusual to me here.
*Barnett
Sorry, fixed. Mistyped.