Thanks for doing this! I think the most striking part of what you found is the donations to representatives who sit on the subcommittee that oversees the CFTC (i.e. the House Agriculture Subcommittee on Commodity Exchanges, Energy, and Credit), so I wanted to look into this more. From a bit of Googling:
It looks like you’re right that Rep. Delgado sits on (and is even the Chair of) this subcommittee.
On the other hand, it looks like Rep. Spanberger doesn’t actually sit on this subcommittee, and hasn’t since 2021. In other words, she hasn’t been on this subcommittee since the Protect Our Future PAC was founded (in early 2022).
Spanberger’s Wikipedia page does say she sits on this subcommittee, but her own website (both now and before the most recent elections) and the Wikipedia page on the subcommittee itself don’t list her as a member in the 2021–23 session or the current session of Congress.
The latter source (the subcommittee’s Wikipedia page) also says she served on it before 2021, so my guess is that Spanberger’s own Wikipedia page just has outdated info.
(I don’t think this settles doubts about the PAC.)
I didn’t spend much time on this, so I very possibly missed or misinterpreted things.
Fair! Sorry for the slow reply; I missed the comment notification earlier.
I could have been clearer about what I was trying to point at with my comment. I didn’t mean to fault you for not meeting an (unmade) challenge to list all your assumptions; I agree that would be unreasonable.
Instead, I meant to make an object-level point: the argument you mentioned seems to rely heavily on a controversial discontinuity assumption, enough that the argument on its own (even together with other, largely uncontroversial assumptions) doesn’t make it “quite easy to reach extremely dire forecasts about AGI.” (Though I was thinking more of forecasts of 90%+.)
(That assumption, i.e. the main claims in the third paragraph of your response, seems much more controversial and non-obvious among people in AI safety than the other assumptions you mention, as evidenced both by researchers criticizing it and by researchers doing prosaic AI safety work.)