I teach math to mostly Computer Science students at a Chinese university. From my casual conversations with them, I’ve noticed that many seem to be technology optimists, reflecting what I perceive as the general attitude of society here.
Once, I brought up the topic of AI risk (half-jokingly, in class) and referred to a study (possibly this one: AI Existential Risk Survey) suggesting that a significant portion of AI experts are concerned about potential existential risks. The students’ immediate reaction was to challenge the study’s methodology.
This response might stem from the optimism fostered by decades of rapid technological development in China, where people have become accustomed to technology making things “better.”
I also get the sense that there are many other pressing problems in society (jobs, inequality, etc.) that make AI risk feel very distant to them. That said, I do know a friend who is trying to work on AI governance and raise more awareness.