Micheal, I like your blog and enjoyed the post.
I agree there are no good charities for hedonistic utilitarians at the moment: they are either not well aligned with hedonistic utilitarian goals, or their cost-effectiveness is intractable to estimate. (You can still donate if you have so much money that your alternative spending would be “bigger car/yacht”; otherwise it doesn’t make much sense.)
Your ideas are all interesting, but values spreading and promoting universal eudaimonia are non-starters. If you get downvoted on an EA forum, you are not going to find a more open-minded, amicable target group than this.
Happy animals are problematic because their feedback is limited: you don’t know when they are suffering unless you monitor them with unreasonable effort, and their minds are not optimized for high pleasure and low suffering. Perhaps future technology will make this sort of thing trivial, but that is not certain, and investing in the necessary research would give too much harmful knowledge to non-value-aligned people. Even if funding such research were net good, it will probably be done for other reasons anyway (commercial applications, publicly funded neuroscience, etc.), so again it’s something you should only fund if you have too much money.
I don’t know enough about insect biology to judge humane insecticides; the idea is certainly not unrealistic. But remember that real people would have to choose to use it, so even if such a charity existed, there’s no guarantee anyone would adopt its product instead of laughing you out of the room.
Lila, the future may not be controlled by a singleton but by a plurality of people implementing diverse values. And even if it is, the singleton may not maximize a single value but rather a mix of the different values different people care about: a compromise “value handshake,” as Scott Alexander called it.
Thus, it is best to emphasize that you are not paperclip minimizers. The same goes for hedonium, unaided scientific insight, longevity, and art, to name just a few things some transhumanists value and others don’t.
There are two kinds of value conflicts: ones where the values are merely orthogonal, and ones where they are diametrically opposed, or at least strongly negatively correlated in practice. The orthogonal ones still conflict where limited resources are concerned, but not otherwise, and it is much easier to find a compromise between them than between the opposed or practically negatively correlated ones.
There is no reason why a singleton could not spend some resources on paperclips, some on hedonium, some on bigger happy minds, some on Fun, some on art, some on biodiversity, and so on, if this increases the probability that people will compromise on letting the singleton come into being and be functional.