I believe this is because a donor specifically requested it. The express purpose of the donation was to make Eliezer rich enough that he could afford to say “actually AI risk isn’t a big deal” and shut down MIRI without putting himself in a difficult financial situation.
Thanks for sharing, Michael. If I were as concerned about AI risk as @EliezerYudkowsky, I would use practically all the additional earnings (e.g. above Nate’s $235k/year; in reality I would keep much less) to support efforts to decrease it. I would expect spending more money on personal consumption or investments to increase AI risk relative to supporting the most cost-effective efforts to decrease it.
A donor wanted to spend their money this way; it would not be fair to the donor for Eliezer to turn around and give the money to someone else. There is a particular theory of change according to which this is the best marginal use of ~$1 million: it gives Eliezer a strong defense against accusations like:
“If they suddenly said that the risk of human extinction from AGI or superintelligence is extremely low, in all likelihood that money would dry up and Yudkowsky and Soares would be out of a job.”
I kinda don’t think this was the best use of a million dollars, but I can see the argument for how it might be.
If I had Eliezer’s views about AI risk, I would simply be transparent with the donor upfront and say I would donate the additional earnings; I think this would ensure fairness. If the donor insisted that I spend the money on personal consumption, I would turn down the offer if I thought the donor would otherwise support projects that decrease AI risk more cost-effectively than my personal consumption would, which I believe would very likely be the case.
100 percent agree. I was going to write something similar, but this is better.