When you reason using probabilities, the more examples you have to reason over, the more likely your estimate is to be correct.
If you make a bucket of “all technology” (because, as you say, the reference class for AI is fuzzy), you consider the examples of all technology.
I assume you agree that the net EV of “all technology” is positive.
The narrower you make it (“is AGI exactly like a self-replicating bioweapon?”), the more you can pick a reference class that has a negative EV, but few examples. I agree, and you agree, that self-replicating bioweapons are negative EV.
But... that kind of bucketing, based on information you don’t have, is false reasoning. You’re wrong. You don’t yet have the evidence to pin down AGI’s reference class, because you have no AGI to test.
Correct reasoning for a technology that doesn’t even exist forces you to use a broad reference class. You cannot rationally do better? (The question mark is because I don’t know of an algorithm that lets you do better.)
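Here’s a minimal sketch of the sample-size point, with made-up numbers (the 10 percent “bad outcome” base rate and the class sizes are mine, purely for illustration): it estimates a base rate from reference classes of different sizes and shows how much noisier the small bucket is.

```python
import random

random.seed(0)

TRUE_BAD_RATE = 0.10  # hypothetical base rate of net-negative outcomes in the class

def avg_estimation_error(n_examples: int, n_trials: int = 2000) -> float:
    """Average absolute error when estimating the base rate from n_examples observations."""
    errors = []
    for _ in range(n_trials):
        sample = [random.random() < TRUE_BAD_RATE for _ in range(n_examples)]
        estimate = sum(sample) / n_examples
        errors.append(abs(estimate - TRUE_BAD_RATE))
    return sum(errors) / n_trials

for n in (3, 30, 300):
    print(f"reference class of {n:>3} examples -> average error {avg_estimation_error(n):.3f}")
```

The narrow bucket with a handful of examples gives a much noisier read on the base rate than the broad one; that’s the whole reason to prefer the broad class when you have nothing better.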
Let me give an analogy. There are medical treatments where your bone marrow is replaced. These have terrible death rates, sometimes 66 percent. But if you don’t get the bone marrow replacement, your death rate is 100 percent. So it’s a positive-EV decision, and you do not know which bucket you will fall into, [survivor | not survivor]. So the rational choice is to say “yes” to the treatment and hope for the best. (Ignoring pain experienced, for simplicity.)
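A quick sketch of that arithmetic, using the numbers above; the 1/0 payoffs (survive/die) are my simplification, not anything from the medical literature:

```python
# Expected value of each option, using the rates from the analogy above.
# Payoff is a stand-in: 1 = survive, 0 = die (pain and everything else ignored).
p_death_with_treatment = 0.66
p_death_without_treatment = 1.00

ev_treatment = (1 - p_death_with_treatment) * 1 + p_death_with_treatment * 0
ev_no_treatment = (1 - p_death_without_treatment) * 1 + p_death_without_treatment * 0

print(f"EV with treatment:    {ev_treatment:.2f}")    # 0.34
print(f"EV without treatment: {ev_no_treatment:.2f}")  # 0.00
```

0.34 beats 0.00, so “yes” is the rational choice even though the single most likely outcome of the treatment is still death.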
The people that smile at you sadly: they are correct, and the above is why. The reason they are sad is that we as a species could in fact end up out of luck, but this is a decision we still must take.
All human scientific reasoning and decision-making depends on past information. If you consider all the past information we have and apply it to the reference class of “AI”, you end up with certain conclusions. (It’ll probably quench, it’s probably a useful tool, we probably can’t stop everyone from building it.)
You can’t reason on unproven future information, even if you may happen to be correct.