I recently found a Swiss AI survey that indicates that many people do care about AI. [This is only very weak evidence against your thesis, but might still interest you 🙂.]
Sample sizes:
Population – 1245 people
Opinion leaders – 327 people [from business, public administration, science and education]
The question: “Do you fear the emergence of an ‘artificial super-intelligence’, and that robots will take power over humans?”
From the general population, 11% responded “Yes, very” and 37% responded “Yes, a bit”. So roughly half of respondents (48%) were at least somewhat worried.
The ‘opinion leaders’, however, are much less concerned: only 2% report a lot of fear and 23% a bit of fear.
But the same study also found that only 41% of general-population respondents ranked AI becoming more intelligent than humans among their top 3 risks of concern, out of a choice of 5 risks. For only 12% of respondents was it the biggest concern. ‘Opinion leaders’ were again more optimistic: only 5% of them ranked AI surpassing human intelligence as their biggest concern.
Question: “Which of the potential risks of the development of artificial intelligence concerns you the most? And the second most? And the third most?”
Option 1: The risks related to personal security and data protection.
Option 2: The risk of misinterpretation by machines.
Option 3: Loss of jobs.
Option 4: Artificial intelligence that surpasses human intelligence.
Option 5: Others.
These are interesting findings! It would be interesting to see whether similar results hold elsewhere.