This is just a first impression, but I’m curious about what seems a crucial point: your beliefs seem to imply extremely high confidence that either general AI won’t happen this century, or that AGI will go ‘well’ by default. I’m very curious what guides your intuition there, or whether that first-pass impression is wrong in some other way.
I’m also curious how similar arguments apply to bio and other plausible x-risks, given what low x-risk credence would imply there.
Curious whether you disagree with Jessica’s key claim, which is “McKinsey << EA for impact”? I agree Jessica is overstating the case for “McKinsey ≤ 0”, but it seems like the best case for McKinsey is still an order (or orders) of magnitude less impact than EA.
Subpoints:
Current market incentives don’t address large risk externalities well, nor do they appropriately weight the well-being of very poor people, animals, or the entire future.
McKinsey for earn-to-learn/earn-to-give could theoretically be justified, but that doesn’t contradict Jessica’s point about spending money to recruit EAs.
Most students require a justification for any charity spending significant amounts of money on movement building, and “competing with McKinsey” reads favorably.
Agreed that we should usually avoid saying poorly justified things when they aren’t a necessary feature of the argument, since they can turn off smart people who would otherwise agree.