Does John feel he learned generalizable lessons/heuristics/updates from studying climate change that port over to studying other GCRs like unaligned AI or novel pandemics? E.g., does he trust mainstream academics vs. EA researchers more or less, does he doubt doomers more, etc.?
Will try to bring this up in the conversation, thanks for the thoughts.