No, my comments are completely novice and naïve. I think I'm just baffled that all of the funding for AI Safety comes from individuals who stand to profit massively from accelerating AI. Or rather, what baffles me most is how little focus there is on this peculiar combination of incentives. I listen to a few AI podcasts and browse the forum now and then, so why am I only hearing about it now, after a couple of years? I'm not sure what to think of it. My main feeling is just that the relative silence about this is somehow strange, especially in an environment that places such importance on epistemics and biases.
I think most people don’t talk about it because they don’t think it’s a big deal. FWIW, I don’t think it’s a huge deal either, but it’s still concerning.