Imagine: would you board an airplane if 50% of the engineers who built it said there was a 10% chance that everybody on board dies?
In the context of the OP, the thought experiment would need to be extended.
“Would you risk a 10% chance of a deadly crash to go to [random country]” → ~100% of people reply no.
“Would you risk a 10% chance of a deadly crash to go to a Utopia without material scarcity, conflict, or disease?” → One would expect a much more mixed response.
The main ethical problem is that in the scenario of global AI progress, everyone is forced to board the plane, irrespective of their preferences.
I agree with you more than with Akash/Tristan Harris here, but note that death and Utopia are not the only possible outcomes! It’s more like “Would you risk a 10% chance of a deadly crash for a chance to go to a Utopia without material scarcity, conflict, or disease?”