Great stuff, Eric. Your input seems as valuable to consider as the OP itself. I agree.
Helping direct animal charities is important, but I believe it is far more important to continue working on research instead.
If you replace [animal] with [GCR reduction] or [x-risk reduction] in this sentence, I suspect the same is true for that cause, though I’m not personally enmeshed in the field enough to provide as good examples as you have for animal charity. This is why I currently favor increased research output from the Global Catastrophic Risks Institute, the Open Philanthropy Project, and/or the Future of Life Institute rather than, say, the Machine Intelligence Research Institute. I really need to look into this more, though, as you have for animal charities.
MIRI primarily does research too though. Do you mean you prefer to support cause prioritization research?
Sort of. I mean that I support efforts to prioritize between catastrophic or existential risks, or to search more for alternative or multilateral approaches to A.I. risk, relative to just supporting MIRI’s research agenda.