Option value considerations dictate that we continue doing AI safety research even if we're unsure of its value, because it's much easier to stop a research program than to start one.
I think the opposite is often true. Once there are people who get compensated for doing X, it can be very hard to stop X. (Especially if it's hard for impartial people, who are not experts in X, to evaluate X.)
Yeah, I think that's very reasonable.