My experience from a big tech company: ML people are so deep in the technical and practical everyday issues that they don’t have the capacity (or incentive) to form their own ideas about the further future.
I’ve heard people say that it’s so hard to make ML do anything meaningful that they just can’t imagine it doing something like recursive self-improvement. AI safety, in these terms, means making sure the ML model performs as well in deployment as in development.
Another trend I’ve noticed, though I don’t have much data for it, is that the somewhat older generation (35+) is mostly interested in the technical problems and doesn’t feel much responsibility for how the results are used, whereas the 25–35 generation cares much more about the future. I’m noticing similarities with climate change awareness, although the generational boundaries might differ.