Concerning x-risks, my personal point of disagreement with the community is that I am more skeptical than seems to be the norm about our chances of optimizing our influence on the long-term future “in the dark”. By “in the dark”, I mean in the absence of concrete short-term feedback loops. For instance, when I see the sort of things that MIRI is doing, my instinctive reaction is to want to roll my eyes (I’m not an AI specialist, but I work as a researcher in an academic field that is not too distant). The funny thing is that I can totally see my younger self from 10 years ago siding with “the optimists”, but with time I came to appreciate the difficulty of making anything really happen. Because of this, I feel more sympathetic to causes where incremental progress can be measured, such as (but not restricted to) climate change.
Oftentimes climate change is dismissed on the grounds that there is already a lot of money going into it. But it’s not clear to me that this settles the question. It may well be that these large resources are poorly directed, in which case some effort to reallocate them could have a tremendously large effect. (E.g. supporting the Clean Air Task Force, as suggested by Founders Pledge, may be very high impact, especially in these times of heavy state intervention and of coming elections in the US.) We should apply the “Importance-Neglectedness-Tractability” framework with caution. In the last analysis, what matters is the impact of our best possible action, which need not be small just because “there is already a lot of money going into this”. (And, for the record, I would personally rate AI safety technical research as having very low tractability, though I think it’s good that some people are working on it.)