Now instead of being confused about “the probability that joining the AI safety research community would actually avert existential catastrophe”, I’m confused about “the total existential risk associated with developing highly capable AI systems” and “[w]hat percentage of the bad scenarios should we expect [doubling the AI safety community] to avert”...
Yeah, I think these are still pretty hard and uncertain, but I find it substantially easier to have a direct intuition of what kind of range is reasonable than I do for the original question.
I agree. I’m sorry for coming across as rude with this comment—I do think the framework provides for a meaningful reduction.
“[T]he total existential risk associated with developing highly capable AI systems” means the chance of existential catastrophe from AI, and “[w]hat percentage of the bad scenarios should we expect [doubling the AI safety community] to avert” means how much that risk would be reduced if the AI safety effort were doubled. Or is it that you’re having difficulty estimating the probabilities?
Yeah, I meant that.
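To make those two quantities concrete, here is a minimal sketch of the arithmetic in Python, with every number purely hypothetical (no estimates are given in this exchange). It treats the original quantity, the chance that one person joining averts catastrophe, as the product of the total AI existential risk, the fraction of bad scenarios averted by doubling the safety community, and an assumed share of a doubling attributable to one researcher; that last factor is my addition to connect the individual-level question to the doubling-level one.

```python
# Illustrative Fermi decomposition. All numbers below are purely hypothetical.
p_ai_xrisk = 0.10              # hypothetical: total existential risk from advanced AI
fraction_averted = 0.20        # hypothetical: share of bad scenarios averted by doubling the safety community
my_share_of_doubling = 1 / 1000  # hypothetical: one researcher as a fraction of a doubling

# Absolute risk reduction from doubling the community, then one person's expected share of it.
risk_reduction_from_doubling = p_ai_xrisk * fraction_averted
my_expected_risk_reduction = risk_reduction_from_doubling * my_share_of_doubling

print(f"Doubling the community averts {risk_reduction_from_doubling:.3%} of total risk")
print(f"One marginal researcher averts roughly {my_expected_risk_reduction:.5%}")
```

With these made-up inputs, doubling the community would avert 2 percentage points of existential risk, and one marginal researcher would avert about 0.002% in expectation; the point is only to show how the two estimates combine, not to endorse any particular numbers.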
Does it just seem impossible to you? Can you not think of any related questions, or related problems?
It does seem quite hard, but I admittedly haven’t thought about it very much. I imagine it’s not something I’d be generally good at estimating.
Well it’s important to have an estimate; have you looked at others’ estimates?
Have you heard the aphorism ‘curiosity seeks to annihilate itself’? Like, these are a set of physical and anthropological questions that are important to narrow our uncertainty on, so saying ‘I haven’t so far envisaged how evidence could be brought to bear on these questions’ is depressing!
Hey Ryan, I appreciate you helping out, but I’m finding you to be quite condescending. Maybe you weren’t aware that’s how you’re coming across to me? It’s not that I don’t want your help, of course!
I know that I could spend a lot of time attempting to reduce my uncertainty by investigating, and I have done some, but I only have so much time!
And yes, I have read the LW sequences. ;)
Sorry, I’d redrafted it to try to avoid that.
From my perspective, the previous posts read like someone cheerfully breaking conversational/intellectual norms by saying ‘your question confuses me’, without indicating what you tried / how to help you, making it hard to respond productively.
Yes, you’re right. I’m sorry about that.