Thanks for writing this!!
This risk seems to me equal to or greater than AI takeover risk. Historically the EA & AIS communities have focused more on misalignment, but I'm not sure that choice has held up.
Come 2027, I'd love to see an order of magnitude more people usefully working on this risk. I think it will be rough going for the first 50 people in this area; I expect there's a bunch more clarificatory and scoping work to do; this is virgin territory. We need some pioneers.
People with plans in this area should feel free to apply for career transition funding from my team at Coefficient (fka Open Phil) if they think that would be helpful to them.