A donor wanted to spend their money this way; it would not be fair to the donor for Eliezer to turn around and give the money to someone else. There is a particular theory of change according to which this is the best marginal use of ~$1 million: it gives Eliezer a strong defense against accusations like
If they suddenly said that the risk of human extinction from AGI or superintelligence is extremely low, in all likelihood that money would dry up and Yudkowsky and Soares would be out of a job.
I kinda don’t think this was the best use of a million dollars, but I can see the argument for how it might be.
I received a one-time gift of appreciated crypto, not through MIRI; part of its purpose, as I understood it, was to give me enough of a savings backstop (having in previous years been paid very little) that I would feel freer to speak my mind, or change my mind, should the need arise.
I have of course already changed MIRI’s public mission sharply on two occasions, the first being when I realized in 2001 that alignment might need to be a thing, and said so to the primary financial supporter who’d previously supported MIRI (then SIAI) on the premise of charging straight ahead on AI capabilities; the second being in the early 2020s when I declared publicly that I did not think alignment technical work was going to complete in time and MIRI was mostly shifting over to warning the world of that rather than continuing to run workshops. Should I need to pivot a third time, history suggests that I would not be out of a job.
If I had Eliezer’s views about AI risk, I would simply be transparent with the donor upfront and say that I would donate the additional earnings. I think this would ensure fairness. If the donor insisted that I spend the money on personal consumption, I would turn down the offer if I thought doing so would result in the donor supporting projects that would decrease AI risk more cost-effectively than my personal consumption. I believe this would very likely be the case.
100 percent agree. I was going to write something similar, but this is better.