Thanks for writing this up, it’s an interesting frame.
Is “question versus answer based” just the same as “does cause prioritization or not”? It seems to me that AI X-Risk and animal welfare have a bunch of questions, and effective giving has a bunch of answers; the major difference I feel like you are pointing to is just that the former are (definitionally) not prioritizing between causes and the latter is. (Whereas, conversely, the former are e.g. prioritizing between paths to impact and the latter isn’t.)