I take the point. This is a potential outcome, and I see the apprehension, but I think it’s probably a low risk that users will grow to mistake robotics and hardware accidents for AI accidents (and the work that mitigates each) - sufficiently low that I’d argue expected value favours the accident frame. Of course, I recognize that I’m probably invested in that direction.
I would do some research into how well sciences that have suffered brand dilution fare.
As far as I understand it, research institutions have strong incentives to
Find funding
Pump out tractable, digestible papers
See this kind of article for other worries along these lines.
You have to frame things with that in mind: create incentives so that people do the hard stuff and are recognized for doing it.
Nanotech is a classic case of a diluted research path. If you have contacts, maybe try to talk to Eric Drexler; he is interested in AI safety, so he might be interested in how AI safety research is framed.
I think this steers close to an older debate on AI “safety” vs “control” vs “alignment”. I wasn’t a member of that discussion so am hesitant to reenact concluded debates (I’ve found it difficult to find resources on that topic other than what I’ve linked—I’d be grateful to be directed to more). I personally disfavour ‘motivation’ on grounds of risk of anthropomorphism.
Fair enough, I’m not wedded to “motivation” (I see animals as having motivation as well, so it’s not strictly human). But it doesn’t seem to cover phototaxis, which seems like the simplest thing we want to worry about, so that is an argument against “motivation”. I’m worded out at the moment; I’ll see if my brain thinks of anything better in a bit.