This is insufficiently meta. Consider that this very simple and vague payout scheme is probably not optimal for encouraging good bounty suggestions. I suggest going one level up and putting out a bounty for the optimal incentive structure of bounty bounties. A bounty bounty bounty, if you will.
(This is mostly a joke, but I’m not averse to getting paid if you actually decide to do it.)
Edit: now that I’ve thought about it more, something in this space is probably worthwhile. A “bounty bounty bounty” is, funnily enough, both too specific and too abstract. However, a general “bounty on optimal bounty schemes” could be very valuable. The optimal bounty payouts for different goals, how best to split bounties among multiple participants, and how best to score proposals are all important questions for bounty construction. A bounty to answer such questions makes sense.
I don’t actually think outcome 3 is achievable or particularly desirable. You’re basically asking for an AI that relentlessly cuts any non-optimal resource expenditure in favor of optimizing ever more strongly for the “good”. I think the default result of such a process is that it finds some configuration of matter that scores highest on “happiness” / “meaning” / whatever its conception of “good” is, and sacrifices everything that isn’t part of that conception.
I also don’t think our values are shaped like that. I think a single human’s values derive from a multi-agent negotiation among a continuous distribution over possible internal sub-agents. This means they’re inherently dynamic, constantly changing in response to one’s changing cognitive environment. It also means that we essentially have a limitless variety of internal values, whose external expression is limited by our finite resources and capabilities. Restricting the future’s values to a single, limited snapshot of that process just seems… not good.