Great comment. However, lots of people concerned with safety have quit OpenAI. Wouldn’t you expect them to continue working at OpenAI if they thought your argument was correct?
BTW, there might be ways to achieve slowdown through methods other than traditional activism. This could buy valuable time for alignment work without hurting the “AI alignment” brand. For example:
- Commoditize LLMs to reduce the incentive for large training runs
- Point out to Nvidia that AI is on track to kill their company
(I encourage people to signal boost these ideas if they seem good. No attribution necessary.)