One thing that bothers me about epistemics discourse is this terrible effect where critics pick weak, low-status EAs as opponents for claims about AI risk and then play credentialism games. I wish there were parity matching of claims in these discussions so they wouldn’t collapse.