That would defeat the purpose of the project. I think the purpose is to spur research, and the money is there as extra encouragement.
I don’t think that’s true for two reasons:
(1) A 10% chance of donating $100K should be roughly as motivating to a risk-neutral EA as a 100% chance of donating $10K (not taking into account arguments that the risk-neutral utility of money may be nonlinear).
(2) Research around whether to donate $100K or $10K (or how to donate $100K conditional on winning the lottery) would be useful.
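The expected-value arithmetic behind point (1) can be made concrete with a minimal sketch (the function name here is mine, purely for illustration): for a risk-neutral donor, only the expected dollars donated matter, so the two options come out equal.

```python
def expected_donation(probability: float, amount: float) -> float:
    """Expected dollars donated for a gamble with the given win probability."""
    return probability * amount

# 10% chance of directing $100K vs. a guaranteed $10K
lottery = expected_donation(0.10, 100_000)
certain = expected_donation(1.00, 10_000)

assert lottery == certain == 10_000.0
```

Under linear utility of money these are interchangeable; the objection below is that actual motivation does not track expected value this way.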
“A 10% chance of donating $100K should be roughly as motivating to a risk-neutral EA as a 100% chance of donating $10K (not taking into account arguments that the risk-neutral utility of money may be nonlinear).”—that’s not how human psychology works.
How easy is it for an EA to overcome that?
Also, if there is a trade-off between motivation and impact here, how can we navigate it?