Yeah, the example above of choosing not to get promoted or not to receive funding is a more realistic scenario.
I agree these situations are somewhat rare in practice.
Re: AI Safety, my point was that these situations are especially rare there, at least among people who already agree it's a problem — and that agreement is about states of knowledge anyway, not about goals.
Thanks for this post, I think it’s a good discussion.