On the other hand, the community focuses heavily on existential risk, which is a real concern, but one that is not necessary to make the case for the importance of AI Safety.
I guess my worry is that if we drop the focus on existential risk, academia may end up not helping us solve the problem we actually need to solve. (For reference, when I first started EA movement-building I was wary of talking too much about AI Safety because it seemed weird, but in retrospect I consider this to have been a mistake, since the winds were already shifting.)
Perhaps we should be thinking about this from the opposite perspective. How can we extend the range of what can be published in academia? We can already point to works such as Superintelligence, Stuart Russell's book, and Concrete Problems in AI Safety that have helped build credibility.
I think it is easier to convince someone to work on topic X by arguing that it would be very positive than by warning them that everyone could literally die if they don't. If someone comes to me with that kind of argument, I get defensive very quickly, and they have to spend a lot of effort just to convince me there is a slight chance they are right. Even if I have the time to hear them out and give them the benefit of the doubt, I come away with an uneasy feeling, not exactly the kind that makes me want to put effort into their topic.
Perhaps we should be thinking about this from the opposite perspective. How can we extend the range of what can be published in academia?
I don't think this is a good idea. There are a couple of reasons why academic publishing is so stringent: to avoid producing blatantly useless articles, and to make progress measurable. I would argue we want to play by the rules here, both because otherwise we risk being seen as crazy people and because we want to publish sound work.
Well, if you have a low risk preference, it is possible to push the boundaries out incrementally.