I don't think my reasoning is driven by a desire to stay sane at all. Most of it comes down to base rates, examining current systems, and assessing current progress.
I think I'm in this peculiar position where I have ~medium timelines (maybe many reading this would consider them long) and fairly low "p(doom)", and I still think AI risk is among the most important things to work on.
I obviously can't prove I am not biased in such a way, but I don't think that is a fair assumption.