This is a true, counterfactual match: we will receive only an amount equivalent to what we raise.
What will happen to the money counterfactually? Presumably it will be donated to other things the match funder thinks are roughly as good as GWWC?
I think misaligned AI values should be expected to be worse than human values, because it's not clear that misaligned AI systems would care about, e.g., their own welfare.
Insofar as we expect misaligned AI systems to be conscious (or to have whatever property makes them matter morally) and also to be good at looking after their own interests, I agree that it's not clear, from a total utilitarian perspective, that the outcome would be bad.
But the “values” of a misaligned AI system could be pretty arbitrary, so I don't think we should expect them to care about their own welfare.