Regarding your point (2), couldn’t this count as an argument for trying to slow down AI research? I.e., given that the amount of general AI research done is so enormous, even changing community norms around safety a little bit could result in dramatically narrowing the gap between the rates of general AI research and AI safety research?
I don’t think I’m following your argument. Are you saying that we should care about the absolute size of the difference in effort in the two areas rather than proportions?
Research has diminishing returns because of low-hanging fruit. Going from $1MM to $10MM makes a much bigger difference than going from $10,001MM to $10,010MM.
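As a toy illustration (assuming logarithmic returns to spending, which is just one plausible model of "low-hanging fruit", not anything established):

```python
import math

def research_output(spend_mm: float) -> float:
    """Toy model: output grows with the log of spending (in $MM),
    so each additional dollar buys less than the one before it."""
    return math.log(spend_mm)

# Going from $1MM to $10MM vs. from $10,001MM to $10,010MM
small_budget_gain = research_output(10) - research_output(1)           # ~2.30
large_budget_gain = research_output(10_010) - research_output(10_001)  # ~0.0009

print(small_budget_gain / large_budget_gain)  # roughly a 2500x bigger difference
```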
I guess the argument is this: suppose it takes (say) the same amount of effort/resources to speed up AI safety research by 1000% as it does to slow down general AI research by 1% via spreading norms of safety/caution. Then plausibly the latter is more valuable because of the sheer volume of general AI research being done (assuming that slowing down general AI research is a good thing, which, as you pointed out in your original point (1), may not be the case). The tradeoff might be more like going from $1 million to $10 million in safety research vs. going from $10 billion to $9.9 billion in general research.
This does seem to assume that the absolute size of the difference matters more than the proportions. I'm not sure how to think about whether that's the case; a rough comparison is sketched below.
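To make the tradeoff concrete, here's a quick sketch (again assuming logarithmic returns, my assumption rather than anything established) comparing the two interventions on both the proportional view and the absolute-gap view:

```python
import math

log_returns = math.log  # toy assumption: output ~ log(spend in $MM)

# Intervention A: speed up safety research, $1MM -> $10MM (+1000%)
safety_gain = log_returns(10) - log_returns(1)               # ~2.30

# Intervention B: slow general research, $10,000MM -> $9,900MM (-1%)
general_slowdown = log_returns(10_000) - log_returns(9_900)  # ~0.01

# Under this model the proportional view strongly favors A,
# even though B shifts 100x more absolute dollars than A does.
print(safety_gain, general_slowdown)

# The absolute gap in spending, by contrast, barely moves under A
# and shrinks far more under B:
gap_before = 10_000 - 1
gap_after_A = 10_000 - 10   # closes the gap by $9MM
gap_after_B = 9_900 - 1     # closes the gap by $100MM
print(gap_before, gap_after_A, gap_after_B)
```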
This is a tacit claim about the shape of the search space, albeit a reasonable one, given that most search spaces show decreasing marginal utility. Some search spaces have threshold effects or other features that give them increasing marginal utility per resource spent, at least in some localized regions. AI is weird enough that this seems worth thinking about.
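A minimal sketch of what a threshold-style search space could look like, using a logistic curve purely as a stand-in (an illustration, not a claim about actual AI research):

```python
import math

def threshold_returns(spend: float, threshold: float = 100.0, steepness: float = 0.05) -> float:
    """Toy logistic returns curve: marginal returns *increase* as spending
    approaches the threshold, then diminish again past it."""
    return 1.0 / (1.0 + math.exp(-steepness * (spend - threshold)))

# Below the threshold, each extra $10 buys more than the previous $10 did:
print(threshold_returns(50) - threshold_returns(40))    # ~0.03, smaller gain
print(threshold_returns(90) - threshold_returns(80))    # ~0.11, larger gain for the same extra spend
# Past the threshold, returns diminish once more:
print(threshold_returns(160) - threshold_returns(150))  # ~0.03, smaller gain again
```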