"or other such nonsense that advocates never taking on risks even when the benefits clearly dominate"
An important point to note here: the people who suffer the risks and the people who reap the benefits are very rarely the same group. Deciding whether to use an unsafe AI system (now or in the far future) on the basis of a risk/benefit analysis goes wrong so often because one man's risk is another man's benefit.
Example: weighing the risk of lung damage from traditional coal mining against the industrial value of the coal yields a very different risk/reward analysis for the miner than for the mine owner. The same applies to AI.
Good post, thank you.