Even if most examples are unrelated to EA, if it’s true that the Silicon Valley AI community has zero accountability for bad behavior, that seems like it should concern us?
EDIT: I discuss a [high uncertainty] alternative hypothesis in this comment.
Where I think this relates to EA is our concern for the long-term future of complex life. If transformative superintelligence is developed in a morally bankrupt environment, should we expect the resulting AI to be value-aligned?