Thanks, I’m looking forward to this! Some questions that seem worth considering to me are:
1. Is AGI likely to lock in values? (if so it’s probably bad for animals)
2. Is the answer to this question even knowable? (a lot of what I’ve heard on the topic has been like “AI could mean X but also not X”)
3. Whichever way AGI turns out for animals, how steerable is the outcome? (e.g. maybe making sure that AGI goes well for humans is actually much easier)