Currently working with CEA through their CBG program, coordinating AI Alignment field building in India.
You can find more here: https://docs.google.com/document/d/1ZvDyk_XzDPMxiPdL_Lxu4egP1Ztvaa_WzHBwvgPLgMg/edit?tab=t.0
Aditya Arpitha Prasad
Karma: 3
Shaping effective altruism: Lightning talks on community and field building
Now that Jan Leike has left and the superalignment team has been disbanded, OpenAI has lost most of its safety-focused researchers.
I think this is a very important point:
> if aligning AI with human values has historically resulted in catastrophic outcomes, how can we ensure that AI alignment will not amplify the very harms we aim to prevent?
We keep putting our money, time, and trust behind the idea that our current human values are the way out, even as we see empirically how power-seeking our actions have been: we have driven a mass extinction, killed off our cousin species, and we keep doubling down on technology and on extracting ever more value from nature.
It is an s-risk to imagine AI systems instilled with these “human” values.