I largely agree; AI safety isn’t going to remain neglected, nor is it obvious that it’s particularly tractable. There is a lot of room for people to be working on other things, and there’s also a lot of room for people who don’t embrace cause neutrality or even effectiveness to work on AI safety.
In fact, in my experience many “EA” AI safety people aren’t buying into cause neutrality or moral circle expansion; they are simply focused on a near-term existential threat. And I’m happy they are doing so—just like I’m happy that global health works on pandemic response. That doesn’t make it an EA priority. (Of course, it doesn’t make it not an EA priority either!)
Edit to add: I’m really interested in hearing what the people voting “disagree” actually disagree with. (Especially since there are several claims here, some of which are normative and some of which are positive.)
Thanks David, I agree it’s not obvious that it’s tractable. Can you explain the argument for it not being neglected a bit, or point me to a link about that? I thought the number of people working on it was only in the high hundreds to low thousands?
I think that six months ago, “high hundreds” was a reasonable estimate. But there are now major governments that have explicitly said they are interested, and lots of people from various disciplines are contributing. (The UK’s taskforce and related AI groups alone must have hired a hundred people. Now add in think tanks around the world, etc.) It’s still early stages, but it’s incredibly hard to argue that it would remain neglected if EA orgs stopped telling people to work on it.
Thanks, interesting answer, appreciate it.