Re: Arguments against conjunctiveness

So here the thing is that I don’t find Nate’s argument particularly compelling, and after seeing the following pattern a few times:

1. Here is a reason to think that AI might not happen, or might not cause an existential risk.
2. Here is an argument for why that reason doesn’t apply, which could range from wrong to somewhat compelling to very compelling.
3. [Advocate proceeds to take the argument in 2. as a sort of permission in their mind to assign maximal probability to AI doom.]

I grow tired of it, and it starts to irk me.

What’s an example of “here is an argument for why that reason doesn’t apply” that you think is wrong?

And are you claiming that Nate or I are “assigning maximal probability to AI doom”, or doing this kind of qualitative black-and-white reasoning? If so, why?

Nate’s post, for reference, was: AGI ruin scenarios are likely (and disjunctive)
Rereading the post, I think that it has a bunch of statements about what Soares believes, but it doesn’t have that many mechanisms, pathways, counter-considerations, etc.
E.g.,:
The world’s overall state needs to be such that AI can be deployed to make things good. A non-exhaustive list of things that need to go well for this to happen follows:

- The world needs to admit of an AGI deployment strategy (compatible with realistic alignable-capabilities levels for early systems) that prevents the world from being destroyed if executed.
- At least one such strategy needs to be known and accepted by a leading organization.
- Somehow, at least one leading organization needs to have enough time to nail down AGI, nail down alignable AGI, actually build+align their system, and deploy their system to help.
- This very likely means that there needs to either be only one organization capable of building AGI for several years, or all the AGI-capable organizations need to be very cautious and friendly and deliberately avoid exerting too much pressure upon each other.
- It needs to be the case that no local or global governing powers flail around (either prior to AGI, or during AGI development) in ways that prevent a (private or public) group from saving the world with AGI.
This is probably a good statement of what Soares thinks needs to happen, but it is not a case for why it will happen, so I am left to evaluate the statements, and the claim that they are conjunctive, on their intuitive plausibility.
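(To put a toy number on what is at stake in the “conjunctive” framing: if the items above really were independent requirements, their probabilities would multiply down; if they are mostly driven by a common factor, the joint chance can sit much closer to the weakest single item. The numbers below are made up purely for illustration; they are not estimates from Nate’s post or from me.)

```python
import math

# Made-up success probabilities for the five quoted items (illustration only).
p_requirements = [0.5, 0.4, 0.5, 0.3, 0.6]

# If the items form an independent conjunction, the joint probability is the product.
p_joint_independent = math.prod(p_requirements)  # 0.018

# If the items are strongly correlated (say, one factor like "the leading lab is
# competent and careful" drives most of them), the joint probability can approach
# its upper bound, the probability of the weakest single item.
p_joint_upper_bound = min(p_requirements)  # 0.3

print(f"independent conjunction: {p_joint_independent:.3f}")
print(f"upper bound under perfect correlation: {p_joint_upper_bound:.3f}")
```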
I think I might be a bit dense here.
E.g.,:
It needs to be the case that no local or global governing powers flail around (either prior to AGI, or during AGI development) in ways that prevent a (private or public) group from saving the world with AGI.
Idk, he later mentions the US government’s COVID response, but I think the relevant branch of government for dealing with AGI threats would probably be the Department of Defense, which seems much more competent, and which seems capable of plays like blocking exports of semiconductor manufacturing equipment to China.