Given 3, a key question is what can we do to increase P(optimonium | ¬ AI doom)?
For example:
Averting AI-enabled human power grabs might increase P(optimonium | ¬ AI doom).
Averting premature lock-in and ensuring the von Neumann probes are launched deliberately would increase P(optimonium | ¬ AI doom), but what can we do about that?
Some people seem to think that having norms of being nice to LLMs is valuable for increasing P(optimonium | ¬ AI doom), but I'm skeptical, and I haven't seen this argument written up.
(More precisely, we should talk about the expected fraction of resources that are optimonium rather than the probability of optimonium, but probability might be a fine approximation.)
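To spell out that parenthetical, one rough way to write the quantity is the following (a sketch of mine; the symbols w and f are not from the original discussion):

$$
\mathbb{E}[\text{optimonium fraction} \mid \neg\,\text{AI doom}] \;=\; \sum_{w} P(w \mid \neg\,\text{AI doom}) \cdot f(w),
$$

where w ranges over possible futures and f(w) is the fraction of resources in future w that end up as optimonium. If f(w) is close to 0 or 1 in almost every future, this expectation is approximately P(optimonium | ¬ AI doom), which is why treating it as a probability may be a fine approximation.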