Good question! I share the intuition that preventing harm is a really good thing to do, and I find it difficult to strike the right balance between self-sacrifice and pursuing my own interests.
I think if you argue that that leads to anything close to a normal life, you are being disingenuous.
I think this is probably wrong for most people. Forcing yourself to make sacrifices you don't want to make tends to make you unhappy, and most people are much less productive when they're unhappy. And I think most people actually need a fairly normal social life, etc., to avoid that. I believe this because I've seen and heard stories of people burning out from trying to work too hard, and I've come close to doing so myself.
I think the best way to have a large impact probably looks like working as hard as you sustainably can (for most people, I think that means working hard within a normal 9-5 work week or less), and spending enough time thinking seriously about the best strategy for you to make the biggest difference. It might also involve donating money, but again, I think it's worth spending some money on what makes you happy, to prevent resentment and burnout.
Why does “lock-in” seem so unlikely to you?
One story:
1. Assume AI welfare matters.
2. Aligned AI concentrates power in a small group of humans.
3. AI technology allows them to dictate aspects of the future / cause some "lock-in" if they want. That's because:
   - These humans control the AI systems that have all the hard power in the world.
   - Those AI systems will retain all the hard power indefinitely; their wishes cannot be subverted.
   - Those AI systems will continue to obey whatever instructions they are given indefinitely.
4. Those humans decide to dictate some or all of what the future looks like, and lots of AIs end up suffering in this future because their welfare isn't considered by the decision makers.
(Also, the decision makers could pick a future which isn’t very good in other ways.)
You could imagine AI welfare work now improving things by putting the issue on the radar of those people, so that they're more likely to take AI welfare into account when making decisions.
I'd be interested in which step of this story seems implausible to you. Is it the idea that AI technology makes "lock-in" possible?