Hi John, I don’t have any concrete links, but I’d start by distinguishing different kinds of far-future causes: on the one hand, those supported by a scientific consensus, and on the other, those that are a matter of scientific controversy. An example of the former would be global warming (which isn’t even that far in the future for some parts of the world), while an example of the latter would be the risks related to the development of AI.
Now in contrast to that, we have existing problems in the world: from poverty and hunger, to animal suffering across the board, to present-day problems related to climate change, etc. While I wouldn’t necessarily prioritize these causes over future-oriented charities (say, climate-related research), it is worth keeping in mind that investing in the reduction of existing suffering may also reduce future suffering (e.g. by increasing the number of vegans we may influence the ethics of the human diet in the future). The impact of such changes is much easier to assess than the impact of research in an area concerned with risks that are extremely hard to predict. Hence, I don’t think research on AI risks is futile, not at all; I just find it important to have clear assessment criteria, just as in any other domain of science: what counts as an effective and efficient research strategy, how future assessments of currently funded projects will proceed (in order to determine how much has been achieved within these projects and whether a different approach would be better), whether the given cause is already sufficiently funded in comparison to other causes, etc.