"What we want from progress in our airplanes is, first and foremost, safety."
I dispute that this is what I want from airplanes. First and foremost, what I want from an airplane is for it to take me from point A to point B at high speed. Other factors matter too, including safety, comfort, and reliability. But there are non-trivial tradeoffs among these factors: for example, if we could make planes 20% safer at the cost of flights taking twice as long, I would not personally take that trade.
You might think this is a trivial objection to your analogy, but I don't think it is. In general, humans have a variety of values and are not single-mindedly focused on safety at the cost of everything else. We put value on safety, but we also put value on capabilities and urgency, along with numerous other variables. As another analogy: if we had delayed the adoption of the COVID-19 vaccine by a decade to perform more safety testing, that cost would have been substantial, even if it were incurred in the name of safety.
In my view, the main reason not to delay AI comes from a similar line of reasoning. By delaying AI, we are delaying all the technological progress and economic value that could be hastened by AI, and this cost is not small. If you think that accelerated technological progress could save your life, cure aging, and eliminate global poverty, then from the perspective of existing humans, delaying AI can start to sound like it mainly prolongs the ongoing catastrophes in the world, rather than preventing new ones.
It might be valuable to delay AI even at the price of letting billions of humans die of aging, prolonging the misery of poverty, and so on. Whether the delay is worth it depends, of course, on what we are getting in exchange. However, unless you think this price is negligible, or you're simply very skeptical that accelerated technological progress will have these effects, this is not an easy dilemma.