These are both good points worth addressing! My understanding on (2) is that any proposed method of slowing down AGI research would likely antagonize the majority of AI researchers while producing relatively little actual slowdown. It seems more valuable to build alliances with current AI researchers and get them to care about safety, in order to increase the share of research that is safety-concerned rather than safety-agnostic.
Exactly. If someone were trying to slow down AI research, they definitely wouldn’t want to make it publicly known that they were doing so, and they wouldn’t write articles on a public forum about how they believe we should try to slow down AI research.
AI researchers don’t like it when you try to slow down AI research. AI researchers are a lot more powerful than AI safety supporters. Right now AI researchers’ opinions on AI safety range from “this is stupid but I don’t care” to “this is really important, let’s keep doing AI research though.” If it becomes widely known that you’re trying to slow down AI research in the name of AI safety, AI researchers’ opinions will shift to “this is stupid and I care a lot because these stupid idiots are trying to stop me from doing research” and “I used to think this was important but clearly these people are out to get me so I’m not going to support them anymore.”
Maybe not a great analogy, but suppose you’re living under an oppressive totalitarian regime. You think it would be super effective to topple the regime. So you go around telling people, “Hey guys I think we should try to topple this regime, I think things would be a lot better. It’s weird that I don’t see people going around talking about toppling this regime, people should talk about it more.” Then they arrest you and throw you into a gulag. Now you know why people don’t go around talking about it.
Indeed.
1) It’s not obvious how the speed of AI progress affects global risk and sustainability. E.g. getting to powerful AI faster through more AI research would reduce the time spent exposed to various state risks. It would also reduce the amount of computing hardware available at the time, which could make for a less disruptive transition. If you think the odds are 60:40 that one direction is better than the other (with equal magnitudes), then you get only a fifth of the impact you would get under certainty (the short calculation below spells out this arithmetic).
2) AI research overall is huge relative to work focused particularly on AI safety, by orders of magnitude, so the marginal impact of a change in research effort is much greater for the latter. Combined with the first point, it looks at least hundreds of times more effective to address safety than to speed up or slow down software progress with given resources, and not at all worthwhile to risk the former for the latter.
3) AI researchers aren’t ogres or tyrants: they are smart scientists and engineers, well aware of past episodes of uninformed and destructive technophobia (GMO crops, opposition to using gene drives to wipe out malaria, opposition to the industrial revolution, panics about books/film/cars/video games, anti-vaccine movements, anti-nuclear movements), and very aware of the large benefits their own work could produce. There actually is a very foolish technophobic response to AI that ignores its immense benefits, and it is important not to be confused with it (though it is understandable that people might conflate it with someone like Bostrom, who has written a lot about the great benefits of AI and argues that its expected value is positive).
4) If you’re that worried about the dangers of offending people (some of whose families may have fled the Soviet Union, and other places with gulags), don’t make needlessly offensive analogies about them. It is AI researchers who will solve the problems of AI safety.
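For concreteness, here is the arithmetic behind the last sentence of point 1 as a minimal sketch; the 60:40 split and the equal-magnitude assumption come from the comment above, and everything else is illustrative.

```python
# Expected impact of pushing AI progress in one direction when you are only
# 60% confident that direction is actually the better one, with equal
# magnitudes either way (illustrative numbers only).
p_right = 0.6     # probability the chosen direction is the better one
magnitude = 1.0   # size of the effect in either direction, normalized to 1

expected_impact = p_right * magnitude - (1 - p_right) * magnitude
print(expected_impact)  # ~0.2, i.e. a fifth of the impact you would get under certainty
```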
Regarding your point (2), couldn’t this count as an argument for trying to slow down AI research? I.e., given that the amount of general AI research is so enormous, couldn’t even a small shift in community norms toward safety dramatically narrow the gap between the rates of general AI research and AI safety research?
I don’t think I’m following your argument. Are you saying that we should care about the absolute size of the difference in effort in the two areas rather than proportions?
Research has diminishing returns because of low-hanging fruit. Going from $1MM to $10MM makes a much bigger difference than going from $10,001MM to $10,010MM.
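One toy way to see the size of that gap is to assume logarithmic returns to research spending, a common stand-in for low-hanging fruit; the functional form is an assumption here, not something established in the thread.

```python
import math

# Toy model: cumulative research output grows with the log of total spending,
# so each additional dollar buys less than the previous one.
def output(spend_in_millions):
    return math.log(spend_in_millions)

print(output(10) - output(1))          # ~2.303: the $1MM -> $10MM jump
print(output(10_010) - output(10_001)) # ~0.0009: the $10,001MM -> $10,010MM jump
```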
I guess the argument is that, if it takes (say) the same amount of effort/resources to speed up AI safety research tenfold as it does to slow down general AI research by 1% via spreading norms of safety/caution, then the latter is plausibly more valuable due to the sheer volume of general AI research being done (assuming that slowing down general AI research is a good thing, which, as you pointed out in your original point (1), may not be the case). The tradeoff might be more like going from $1 million to $10 million in safety research vs. going from $10 billion to $9.9 billion in general research.
This does seem to assume that the absolute size of the difference matters more than the proportions. I’m not sure how to think about whether or not that is the case.
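To put the two framings side by side, here is a small sketch comparing the $1M-to-$10M safety boost with the $10B-to-$9.9B slowdown under both an absolute (linear) view and a proportional (log-returns) view; the dollar figures are the illustrative ones from the comment above, and both models are toy assumptions.

```python
import math

safety_before, safety_after = 1e6, 1e7        # $1M -> $10M in safety research
general_before, general_after = 1e10, 9.9e9   # $10B -> $9.9B in general research

# Absolute view: raw dollars moved. The slowdown is much larger in this sense.
print(safety_after - safety_before)    # 9.0e6 added to safety
print(general_before - general_after)  # 1.0e8 removed from general research

# Proportional (log-returns) view: the safety boost is a tenfold increase,
# while the slowdown is only a one-percent change.
print(math.log(safety_after / safety_before))    # ~2.303
print(math.log(general_after / general_before))  # ~-0.010
```

Which of the two views better tracks actual risk reduction is exactly the open question here.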
This is a tacit claim about the shape of the search space, though a reasonable one, since most search spaces show decreasing marginal utility. Some search spaces have threshold effects or other features that give them increasing marginal utility per unit of resources spent, at least in some localized regions. AI is weird enough that this seems worth thinking about.
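As a toy illustration of a threshold effect, here is a returns curve where marginal gains first rise and then fall; the logistic shape and all of its parameters are purely hypothetical.

```python
import math

# Toy returns curve with a threshold around $50MM of spending: below it, each
# extra dollar buys more than the last (increasing marginal returns); past it,
# returns diminish in the usual way.
def progress(spend_in_millions, threshold=50.0, steepness=0.1):
    return 1.0 / (1.0 + math.exp(-steepness * (spend_in_millions - threshold)))

for low, high in [(0, 10), (40, 50), (90, 100)]:
    print(f"${low}MM -> ${high}MM: marginal gain {progress(high) - progress(low):.3f}")
# Marginal gains rise as spending approaches the threshold, then fall off past it.
```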