Do you think the wording “Have you heard about the concept of existential risk from Advanced AI? Do you think the risk is small or negligible, and that advanced AI safety concerns are overblown?” might have biased your sample in some way?
E.g. I can imagine people who are very worried about alignment but don’t think current approaches are tractable.
In case “I can imagine” was meant literally, let me serve as a proof of concept: I think the risk is high, but that there’s nothing we can do about it short of a major upheaval of the culture of the entire developed world.
The sample is biased in many ways: because of the places where I recruited, because of interviews that didn’t work out due to timezone differences, because some people responded too late, and so on. I also started recruiting on Reddit and then dropped that in favour of Facebook.
So this should not be treated as a representative sample; rather, it’s an attempt to collect a wide variety of arguments.
I did interview some people who are worried about alignment but don’t think current approaches are tractable, and quite a few who are worried about alignment but don’t think it should get more resources.
Referring to my two basic questions listed at the top of the post: a lot of people said “yes” to (1), so they are worried about alignment. I originally planned to provide statistics on agreement/disagreement with questions (1) and (2), but it turned out that it’s not possible to draw a clear line between the two questions; most people, when discussing (2) in detail, kept referring back to (1) in complex ways.