“A 10% chance of donating $100K should be roughly as motivating to a risk-neutral EA as a 100% chance of donating $10K (not taking into account arguments that the risk-neutral utility of money may be nonlinear).”—that’s not how human psychology works.
How easy is it for an EA to overcome that?
Also, if there is a trade-off between motivation and impact, how can we navigate it?