Edit: This statement is about my personal experience in the biggest EA AI safety hub. It’s not intended to be anything more than anecdotal evidence, and leaves plenty of room for other experiences. Thanks to others for pointing out this wasn’t clear.
I’m part of the AI-oriented community this part is referring to, and I have felt a lot of pressure to abandon work on other cause areas to work on AI safety (pressure I have rejected). In my experience it is not condescending at all. They definitely do not consider people who work in other cause areas less smart or less quantitative. They are passionate about their cause, so these conversations come up, but the conversations are very respectful and display deep open-mindedness. A lot of the pressure is also not intentional; it simply comes from the fact that everyone around you is working on AI.
I think this is an empirical question, and likely varies between communities, so “definitely do not...” seems too strong. For example, here’s Gregory Lewis, a fairly senior and well-respected EA, commenting on different cause areas (emphasis added):
Yet I think I’d be surprised if it wasn’t the case that among those working ‘in’ EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that the most involved in the EA community strongly skew towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.
I wouldn’t be surprised if other people shared this view.
Thanks for sharing! Yeah I meant that only to refer to the people I know well enough to know their opinions and the general vibe I’ve gotten in the biggest EA AI safety hub. Mine is just anecdotal evidence and leaves a lot of room for other perspectives. Sorry I didn’t say that well enough.
Oh I see! My mistake, I misunderstood what you were referring to, thanks for clarifying!
Hi Sonia,
You may not have the whole picture.
In 2019, I was leaked a document circulating at the Centre for Effective Altruism, the central coordinating body of the EA movement. Some people in leadership positions were testing a new measure of value to apply to people: a metric called PELTIV, which stood for “Potential Expected Long-Term Instrumental Value.” It was to be used by CEA staff to score attendees of EA conferences, to generate a “database for tracking leads” and identify individuals who were likely to develop high “dedication” to EA — a list that was to be shared across CEA and the career consultancy 80,000 Hours. There were two separate tables, one to assess people who might donate money and one for people who might directly work for EA.
What I saw was clearly a draft. Under a table titled “crappy uncalibrated talent table,” someone had tried to assign relative scores to these dimensions. For example, a candidate with a normal IQ of 100 would be subtracted PELTIV points, because points could only be earned above an IQ of 120. Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.
Source: https://www.vox.com/future-perfect/23569519/effective-altrusim-sam-bankman-fried-will-macaskill-ea-risk-decentralization-philanthropy
Thanks for sharing this important information!
I want to add a couple of important points from the Vox article that weren’t explicit in your comment.
- This proposal was discarded.
- The professional field scores were not necessarily supposed to measure intelligence. PELTIV was intended to measure many different things. To me, professional field fits more into the “value aligned” category, although I respect that other interpretations, based on other personal experiences with high-status EAs, could be just as valid.
I agree that work on AI safety is a higher priority for much of EA leadership than other cause areas now.
Absolutely true that it was ultimately not used and that AI safety is a higher priority for leadership. But proposals like this, especially coming from organizers at CEA, are definitely condescending and disrespectful, and are not an appropriate way to treat fellow EAs working on climate change, poverty, animal welfare, or other important cause areas.
The recent fixation of certain EAs on AI/longtermism renders everything else less valuable in comparison, and treating EAs who are not working on AI safety as “NPCs” (people who don’t ultimately matter) is completely unacceptable.
Yes, as I shared in my earlier reply, I intended my statement to be anecdotal evidence that leaves a lot of room for other perspectives.
That said, the only “NPCs” quote I noticed was referring to something else. Can you share the NPCs quote that referred to what we are discussing?