Part of me wonders whether working for a company on the cutting edge of AI development should almost disqualify you from being part of the public AI safety discourse.
Strong agreement downvote from me. This line of thought seems so intuitively dangerous. You want to disqualify people making powerful AI from discussions on how to make powerful AI safer? I’m having trouble understanding why this should be a good idea.
Not disqualify them from private discussions (of course there need to be loads of private discussions), but from prominent public discussions. Why is that intuitively dangerous?
I’m uncertain about this and keen to hear the counter arguments.
It's intuitive to me that people who are paid to develop something potentially dangerous as fast as possible (weapons manufacturers, tobacco, AI) should not be the ones at the forefront of public discussion, nor the ones making decisions about what should and shouldn't be allowed. They will be compromised and biased, since the very value of what they do with their lives is at stake, and they are likely to skew the discourse away from the rational.
The ideal situation might be to have enough capable AI researchers working on AI safety and governance independently of the companies that they could lead the discourse and make the decisions.
I'm sure there are strong arguments against this and I'm keen to hear them.
Part of the goal is to persuade them to act more safely, and it’s easier to do this if they are able to explain their perspective. Also, it allows others to evaluate their arguments. We can’t adopt a rule that “people accused of doing something dangerous can’t defend themselves” because sometimes after evaluating the arguments they are in the right—e.g. nuclear power, GMOs.
Thanks, that's a good point. I hope, though, that they have less sway than independent people arguing in either direction. I would hope that in the case of nuclear power and GMOs it was independent advocates (academics, the public, think tanks) arguing for them who convinced us, rather than Monsanto and power plant manufacturers.
But I don’t know those stories!