I received a one-time gift of appreciated crypto, not through MIRI. Part of its purpose, as I understood it, was to give me enough of a savings backstop (having in previous years been paid very little) that I would feel freer to speak my mind, or change my mind, should the need arise.
I have, of course, already changed MIRI's public mission sharply on two occasions. The first was when I realized in 2001 that alignment might need to be a thing, and said so to the primary financial supporter who had previously backed MIRI (then SIAI) on the premise of charging straight ahead on AI capabilities. The second was in the early 2020s, when I declared publicly that I did not think the technical work of alignment would be completed in time, and that MIRI was mostly shifting over to warning the world of that rather than continuing to run workshops. Should I need to pivot a third time, history suggests I would not be out of a job.
All of the difficulty here is in having the sign of your impact come out positive. It's very hard to end up neutral; if your work is just nonsense, for example, it's negative, because it's a distraction and an attention sink. And it's quite easy to end up negative in other ways, for example by exaggerating the impact of your work and feeding more hopium into an ecosystem that desperately touts any sign of progress.
When you mess with AI, whatever you do will of course outweigh any other impact of your life. It's having the sign end up positive that is the hard part.