you are threatening not to care about a problem in the world because I made you uncomfortable
Is this directed at me? Because I didn’t want to do this, and I don’t see why you think I did this (like, I clearly never threatened not to care about a problem?).
If I take the way that you’ve used “you” in your post and in the comments here seriously, you’ve said a bunch of things that I believe are clearly not true:
you want me to beg you to please consider it as a favor [I don’t want to do this]
I know your arguments in and out. [we’ve never talked about this together]
you don’t care about finding out what is right [I actually do]
Now it’s about working at an AI lab or wishing you could work at an AI lab. [I don’t wish to do that]
I’m already beating you and you just define the game so that the conclusion of moving toward advocacy can’t win. [we’ve never played any games]
you’re tedious to deal with [this one is true, but this is incidental, not sure why you know this]
I’m very sorry to hear about your dad. I hope those who would have voted for PauseAI in the donation election will consider donating to you directly.
On the points you raise, one thing stands out to me: you mention how hard it is to convince EAs that your arguments are right. But the way you’ve written this post (generalising about all EAs, making broad claims about their career goals, saying you’re already beating them in arguments) suggests to me you’re not very open to being convinced by them either. I find this sad, because I think that PauseAI is sitting in an important space (grassroots AI activism), and I’d hope the EA community & the PauseAI community could productively exchange ideas.
You’re right, I’m not.
What I feel upset about is that EA is no longer the kind of group that wanted to do grassroots advocacy for AI Safety. Early EA would have been all over it. Now EAs want to be part of building AI. I’m not after a trade where you listen to me in exchange for me listening to you. I know your arguments inside and out. You are just wrong, and you don’t care about finding out what is right: you’re protecting your conclusions. That’s a betrayal of yourselves.
There are many different people in EA with different takes.
By claiming “you are just wrong” in the second person plural, you are making it harder for people who are not in the “want to build AI” camp to engage with your object-level arguments.
Why don’t you defend your point?
I imagine people who are not already part of the AI safety memeplex could find your arguments convincing. Why not engage with them?
Btw I’m undecided on what the right marginal actions are wrt AI and am trying to form my inside view.
Maybe reconsider whether EA is the right community for you if you don’t agree with the agenda of the people at the top. They shape your ability to think critically in many ways: through who they fund, who is treated as cool, who is respected as an expert, etc.
You’re right, part of the problem is that you feel lumped in with them even though you have no decision-making power over what they do. Don’t fight their battles for them if you don’t even agree; let go of the baggage and think for yourself.
I feel lumped in with them because you use the second person plural. It’s not a glitch; it’s a direct consequence of how you write.
What I’m saying is: maybe you’re right about the pause agenda, I don’t know.
But if you come to a group of people saying “you are just wrong”, that is not engaging, and I end up irritated instead of considering your case.
You feel lumped in with them because you identify as an EA.
Sometimes the truth irritates.
I don’t identify as EA. You can check my post history. I try to form my own views and not defer to leadership or celebrities.
I agree with you that there’s a problem with safetywashing, conflicts of interest, and bad epistemic practices in mainstream EA AI safety discourse.
My problem with this post is that it presents its arguments as “wake up, I’m right and you are wrong”, directed at a group that includes people who have never thought about what you’re talking about as well as people who already agree with you.
I also agree that the truth sometimes irritates, but that doesn’t mean I should trust something more just because it irritates me.
No actually I posted that response under the wrong comment— sorry!