> I’d genuinely welcome feedback on whether the formalization bottleneck resonates as a priority here.
It is not clear from your post why getting more people to contribute to mathlib is good for AI safety.
So far, this sounds like it is more about the safety risks from AI being dumb rather than from AI being smart. The latter is what matters for existential risk, but maybe you mean non-xrisk safety?
Suppose you have a math AI that spits out Lean proofs: what would you use it for? Which mathematical questions would you like it to tackle?
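(For concreteness, the output of such a system would be a machine-checkable proof script; a toy Lean 4 example of the kind of artifact I mean, proving a trivial statement from the core library:)

```lean
-- A trivial example of the sort of thing a "math AI" would emit:
-- a Lean proof that natural-number addition is commutative,
-- discharged by the existing core lemma Nat.add_comm.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The question stands: which *open* or safety-relevant statements would you want filled in where the trivial one sits above?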
I would also note that there are already math AI startups, such as Axiom and Harmonic, that work with Lean (and they don’t seem to be about AI safety at all). EA often prioritizes causes that are neglected, and it is not clear why non-profit money is needed for goals these companies are already pursuing.
I don’t want to be too discouraging; Lean could plausibly be useful for reducing AI xrisk, but in my opinion you haven’t yet made that case.
Am I missing something, or does this argument make no sense? As far as I can tell, Musk can easily fulfill his giving pledge by giving to his preferred not-left-wing nonprofits, without deferring to Bill Gates.