In conversations I've had with some early-career AI safety enthusiasts, I found a lack of understanding of the concrete paths to x-risk and of the criticisms of key assumptions. I'm wondering whether the early-stage AI field-building funnel might create an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.
I don't think newcomers to EA knowing little about the specific cause areas they're excited about is any more true for AI x-risk than for other cause areas. For example, I suspect that if you asked animal welfare or global health enthusiasts who are as new as the AI safety folks you talked to about the key assumptions behind different animal welfare or global health interventions, you'd get similar results. It just matters more for AI x-risk, since having an impact there depends more strongly on having good models.
This is absolutely the case for global health and development. Development is really complicated, and I think EAs tend to vastly overrate just how certain we are about what works best.
When I began working full time in the space, I spent about the first six months getting continuously smacked in the face by just how much there is to know, and how little of it I knew.
I think introductory EA courses can do better at getting people to dig deep. For example, I don't think it's unreasonable to have attendees actually work through a cost-effectiveness analysis (CEA) by GiveWell and discuss the many key assumptions it makes. For a workshop I ran recently for a Danish high school talent programme, we created simplified versions which they had no trouble engaging with.
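To give a sense of what such a simplified exercise could look like, here is a minimal, hypothetical sketch of a back-of-the-envelope CEA for a bednet programme. Every number and parameter name below is an illustrative assumption for discussion, not a figure from GiveWell's actual analysis; the point is that each line is an assumption attendees can question.

```python
# Hypothetical, heavily simplified cost-effectiveness sketch in the spirit of a
# GiveWell-style bednet analysis. All figures are illustrative placeholders.

cost_per_net = 5.00            # assumed cost to deliver one net (USD)
people_covered_per_net = 1.8   # assumed average number of people sleeping under one net
baseline_mortality = 0.005     # assumed annual malaria mortality risk for those covered
relative_risk_reduction = 0.2  # assumed fraction of that risk a net removes
usage_rate = 0.7               # assumed share of delivered nets actually used

# Expected deaths averted by a single net over one year, given the assumptions above.
deaths_averted_per_net = (
    people_covered_per_net * baseline_mortality * relative_risk_reduction * usage_rate
)
cost_per_death_averted = cost_per_net / deaths_averted_per_net

print(f"Deaths averted per net: {deaths_averted_per_net:.5f}")
print(f"Cost per death averted: ${cost_per_death_averted:,.0f}")
```

Each parameter (coverage, baseline risk, risk reduction, usage) is exactly the kind of key assumption a real CEA has to defend, which makes for a natural discussion prompt.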