Thanks for writing this Arden! I strong upvoted.
I do my work at Open Phil — funding both AIS and EA capacity-building — because I’m motivated by EA. I started working on this in 2020, a time when there were way fewer concrete proposals for what to do about averting catastrophic AI risks & way fewer active workstreams. It felt like EA was necessary just to get people thinking about these issues. Now the catastrophic AI risks field is much larger and somewhat more developed, as you point out. And so much the better for the world!
But it seems far from the case that EA-style thinking is “done” with regard to TAI. That would mean we’d already uncovered every new consideration & workstream that could/should be worked on in the years before we are obsoleted by AIs. That seems very unlikely given how huge and confusing the TAI transition will be.
EAs are distinctive in their moral focus combined with their flexibility in what they work on. I like your phrasing here of “constantly up for re-negotiation,” which imo names a distinctively EA trait. To add to your list, I think EA-style thought is also distinctive in its ambition and its focus on truth (even in very confusing/contentious domains). I think EAs in the AI safety field are still outperforming person-for-person, e.g. in founding helpful new AI safety research agendas. And I think that success is in large part due to the characteristics I mention above. This dynamic seems pretty robust, so I expect it to continue at least in the medium term.
(And overall, my guess is that distinctively EA characteristics will become more important as the project of TAI preparation becomes more multifaceted and confusing.)