I am not trying to “win” anything. I am explaining why MIRI is not transparent and does not work on scalable problems. For an individual who is Earning to Give, it does not follow that one should fund such things under the banner of Effective Altruism. Existential risk is important to think about and discuss as individuals. However, funding CS grad students does not make sense by the standards of Effective Altruism.
Funding does not increase “thinking.” The whole point of EA is not to give blindly. For example, food aid, although well-intentioned, can have a very negative effect (e.g., crowding out the local market). Nonmaleficence should be one’s initial position with regard to funding.
Lastly, no, I rarely accept a claim as true at the outset; my default position is the null hypothesis. “But there’s a whole load of arguments about why it is a tractable field”: what are they? Again, none of the actual arguments were examined. How is MIRI going about tractable, solvable problems? Who at MIRI is receiving the funds? And why is time travel safety not as relevant as AI safety?
Thanks for this discussion, which I find quite interesting. I think the effectiveness and efficiency of funding research projects on AI risk is a largely neglected topic. I’ve posted some concerns about this under an older thread on MIRI (http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/dce), the primary problem being the lack of transparency on Open Phil’s part concerning the evaluative criteria used in their decision to award MIRI such a large grant.