Sorry I’m a bit late to the party on this, but thanks for the well-researched and well thought-out post.
My two cents, as this line caught my eye:
> Notably, working on these issues can often improve the lives of people living today (e.g. working towards safe advanced AI includes addressing already present issues, like racial or gender bias in today’s systems).
This line of reasoning concerns me. If working on racial/gender bias in AI were one of the most cost-effective ways to make people happier or save lives, then I would endorse it, but I doubt that it is.
Rather, if the arguments for working on AI as an x-risk aren’t convincing on their own, that seems like reason enough to reconsider whether we want to work on AI at all.
Alternatively, the racial/gender bias angle could be used more for optics than as the true rationale for working on AI. While this might bring more people on board, there are risks to hiding what you really think (see the section “Longtermism vs X-risk” of this podcast for discussion of the issue; Will MacAskill notes, “I think it’s really important to convey what you believe and why”).