I think all your specific points are correct, and I also think you totally miss the point of the post.
You say you've thought about these things a lot. Maybe lots of core EAs have thought about these things a lot. But what core EAs have or haven't considered is completely opaque to us. Not so much because of secrecy, but because opaqueness is the natural state of things. So lots of non-core EAs are frustrated about lots of things. We don't know how our community is run, or why.

On top of that, there are actual consequences for complaining or disagreeing too much. Funders like to give money to people who think like them. Again, this requires no special explanation; it's just the natural state of things.

So for non-core EAs: we notice things that seem wrong, we're afraid to speak up against them, and it sucks. That's what this post is about.

And of course it's naive and shallow and doesn't add much for anyone who has already thought about this for years. For the authors, this is the start of the conversation, because they, and most of the rest of us, were not invited to all the previous conversations like this.

I don't agree with everything in the post. Lots of the suggestions seem nonsensical in the ways you point out. But I agree with the notion of "can we please talk about this", even if only to acknowledge that some of these problems do exist.

It's much easier to build support for a solution if there is common knowledge that the problem exists. When I started organising in the field of AI Safety, I was focusing on solving problems that weren't on the map for most people. This caused lots of misunderstandings, which made it harder to get funded.
> I'll generically note that if you want to make a project happen, coming up with the idea for the project is usually a tiny fraction of the effort required. Also, very few projects have been made to happen by someone having the idea for them and writing it up, and then some other person stumbling across that writeup and deciding to do the project.
This is correct. But projects have happened because there was widespread knowledge of a specific problem, and someone else then decided to design their own project to solve that problem.

This is why it is valuable to have an open conversation to create a shared understanding of which problems EA should currently focus on. This includes discussions about cause prioritisation, but also discussions about meta/community issues.

In that spirit, I want to point out that it seems to me that core EAs have no understanding of what things look like (what information is available, etc.) from a non-core EA perspective, and vice versa.