Sadly I couldn’t respond to this post two weeks ago, but here I go.
First of all, I’m not sure I understand your position, but I think you believe that if we push for other types of regulation, either:
that would be enough to keep us safe from dangerous AI, or
we’ll be able to slow AI development down enough to develop measures that keep us safe from dangerous AI.
I’m unsure which of the two you mean, because you write:
> Advanced AI systems pose grave threats and we don’t know how to mitigate them.
I understand that as you believing we don’t know those measures right now, but you also write:
> If a company developing an unprecedentedly large AI model with surprising capabilities can’t prove it’s safe, they shouldn’t release it.
And if we agree there’s currently no way to prove that, then you’re pretty much talking about a pause.
If your point is the first one, I would disagree with it, and I think even OpenAI does, given that they say we don’t yet know how to align a superintelligence.
If your point is the second one, then my problem is that I don’t think it would give us anywhere close to the same amount of time as a pause. It could also make most people believe that risks from AI, including x-risks, are now safeguarded against, and we could lose support because of that. And all of that would lead to more money in the industry, which in turn could lead to regulatory capture, and so on in a feedback loop.
All of that is also related to:
> it’s closer to what those of us concerned about AI safety ideally want: not an end to progress, but progress that is safe and advances human flourishing.
Which I’m not sure is true. Of course, this depends a lot on how close you think current alignment work is to being enough to make us safe. Are we running parallel enough to the precipice that we’ll be able to steer away in time and reach a utopia? Or are we heading straight for it, with some brief progress before we fall? Would that be closer to the ideal? Anyway, the ideal is the enemy of the good, or of the truth, or something like that.
Lastly, having argued why a pause would be a lot better than other regulations, I’ll grant you that it would of course be harder to get, i.e. less “politically palatable”, which is arguably the main point of the post. But I don’t know by how many orders of magnitude. With a pause you win over people who think safer AI isn’t enough, or that it’s just marketing from the biggest companies and nations. And speaking of marketing, I think “pause AI” is a slogan that can draw a lot more attention, which I think is good given that most people seem to want regulation.