Executive summary: A survey of elite Chinese university students found they are generally optimistic about AI’s benefits, strongly support government regulation, and view AI as less of an existential threat compared to other risks, though they believe US-China cooperation is necessary for safe AI development.
Key points:
80% of students believe AI will do more good than harm for society, a higher share than in Western countries.
85% support government regulation of AI, despite high optimism about its benefits.
Students ranked AI lowest among potential existential threats to humanity.
61% believe US-China cooperation is necessary for safe AI development.
Surveillance was rated as the top AI-related concern, followed by misinformation and existential risk.
50% agree AI will eventually be more intelligent than humans, lower than estimates from other surveys.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
I teach math to mostly Computer Science students at a Chinese university. From my casual conversations with them, I’ve noticed that many seem to be technology optimists, reflecting what I perceive as the general attitude of society here.
Once, I brought up the topic of AI risk (half-jokingly, in class) and referred to a study (possibly this one: AI Existential Risk Survey) suggesting that a significant portion of AI experts are concerned about potential existential risks. The students’ immediate reaction was to challenge the study’s methodology.
This response might stem from the optimism fostered by decades of rapid technological development in China, where people have become accustomed to technology making things “better.”
I also feel that there are many other pressing problems in society (jobs, equality, etc.), which may make AI risk seem very distant to them. That said, I know a friend who is trying to work on AI governance and raise awareness of the issue.