Thank you for this. I love Star Trek Discovery too.
Something I’m a bit puzzled by: what would we do to make benevolent AI over and above what we already do? We already train LLMs to do things we (for a given value of “we”) find useful rather than useless. But a plan for a benevolent AI must require more than that. If you had to sketch out how to teach an AI benevolence, how would you do it?