Hmm, I’m not confident that Bob is wrong here. It seems to me that there’s a quite plausible argument that EA’s involvement in AI has been net-negative, possibly so net-negative as to cancel out all of the rest of EA. You seem to assume that this was knowable in advance, but that’s not necessarily so.
Your argument seems to assume that one should “shut up and multiply” and then run with that estimated EV number; but there have been many arguments on this forum and elsewhere about why we shouldn’t trust naive EV estimates.
My claim is that if you’re worried, the correct response is to actually try to make the astronomical problem/cause go better, not to give up on it. I think if you’re savvy you will probably find a way to make the astronomical thing go better—such as doing strategy/prioritization/deconfusion work, working on robustly good intermediate desiderata, or building skills/money in case there’s more clarity in the future—rather than concluding there’s nothing you can do to make the thing go better.
What do you think about the arguments for cluelessness from imprecision, e.g., here? (I explain in more detail why I think we’re clueless even about the things you list, here.)
I haven’t engaged with your posts and so don’t know the arguments.
I respect that you and a few others legitimately feel deeply clueless. Alice and Bob, by contrast, are just complaining that not everything is clear-cut.