I want to add a couple important points from the Vox article that weren’t explicit in your comment.
- This proposal was discarded.
- The professional field scores were not necessarily supposed to be measuring intelligence. PELTIV was intended to measure many different things. To me, professional field fits more into the “value aligned” category, although I respect that other interpretations, based on other personal experiences with high-status EAs, could be just as valid.
I agree that work on AI safety is a higher priority for much of EA leadership than other cause areas now.
In my experience it is not condescending at all. They definitely do not consider people who work in other cause areas less smart or quantitative. They are passionate about their cause, so these conversations come up, but the conversations are very respectful and display deep open-mindedness.
Absolutely true that it was ultimately not used and that AI safety is a higher priority for leadership. But proposals like this, especially by organizers at CEA, are definitely condescending and disrespectful, and they are not an appropriate way to treat fellow EAs working on climate change, poverty, animal welfare, or other important cause areas.
The recent fixation of certain EAs on AI/longtermism renders everything else less valuable in comparison, and treating EAs not working on AI safety as “NPCs” (people who don’t ultimately matter) is completely unacceptable.
Thanks for sharing this important information!
Yes, as I shared below, I intended my statement to be anecdotal evidence that leaves a lot of room for other perspectives.
Although the only “NPCs” quote I noticed was referring to something else. Can you share the “NPCs” quote that referred to what we are discussing?