I work for Open Phil, which is discussed in the article. We spoke with Nitasha for this story, and we appreciate that she gave us the chance to engage on a number of points before it was published.
A few related thoughts we wanted to share:
The figure "nearly half a billion dollars" accurately describes our total spending in AI over the last eight years, if you think of EA and existential risk work as being somewhat but not entirely focused on AI, which seems fair. However, only a small fraction of that funding (under 5%) went toward student-oriented activities like groups and courses.
The article cites the figure "as much as $80,000 a year" for what student leaders might receive. That is a prorated, full-time maximum: a student who takes a gap year to work full-time on organizing, in an expensive city, might get up to $80,000, but most of our grants to organizers are for much lower amounts.
Likewise, while the figure "up to $100,000" is mentioned for group expenses, nearly all of the groups we fund have much lower budgets.
This rundown, shared with us by one American organizer, is a good example of a typical budget: ~$2,800 for food at events, ~$1,500 for books and other reading material, ~$200 for digital services (e.g. Zoom), and ~$3,000 for the group's annual retreat (roughly $7,500 in total).
Regarding the idea, mentioned in the story, that AI safety is a distraction from present-day harms like algorithmic bias:
As Mike said in the piece, we think present-day harms deserve a robust response.
But just as concerns about catastrophic climate change scenarios like large-scale sea level rise are not seen as distractions from present-day climate harms like adverse weather events, we don't think concerns about catastrophic AI harms distract from concerns about present-day harms.
In fact, they can be mutually reinforcing. Harms like algorithmic bias are caused in part by the difficulty of getting AI systems to behave as their designers intend, which is the same underlying problem that could lead to more extreme harms. Some of the same guardrails may work for everything on that continuum. In that sense, researchers and advocates working on AI safety and AI ethics are pulling in the same direction: toward policies and guardrails that protect society from these novel and growing threats.
We also want to express that we are very excited by the work of groups and organizers we've funded. We think that AI and other emerging technologies could threaten the lives of billions of people, and it's encouraging to see students at universities around the world seriously engaging with ideas about AI safety (as well as other global catastrophic risks, such as a future pandemic). These are sorely neglected areas, and we hope that today's undergraduates and graduate students will become tomorrow's researchers, governance experts, and advocates for safer systems.
For a few examples of what students and academics in the article are working on, we recommend:
The websites of the Stanford Existential Risks Initiative and Stanford AI Alignment group.
The "Intro to AI Alignment" syllabus put together by SAIA.
The "AI Safety Fundamentals" curriculum mentioned in the article (which isn't exclusive to students, and is available for anyone to read!).
Talks from the 2022 Stanford Existential Risks Conference, run by a coalition of groups (including SERI).