Value my time/attention more than ever before (don’t spend time/attention on degenerate things, or on things [even minor inconveniences like losing track of what I’m trying to precisely say] that amplify outwards over time and rob my ability to be the highest-impact person I can be). Interesting things will happen in the next 4-5 years.
Be truer to myself and not obsess so much about fixing weaknesses that aren’t super-fixable. I have weird cognitive strengths and weird cognitive weaknesses.
Freak out way less about climate change (tbh super-fast fusion timelines are part of this)
In general, trust my intuition (and what feels right to me) way way way more, and feel much less of an emotional onus to defend my “weird background” than before (I always seem to have the “weirdest background” of anyone in the room).
I am still just as longevity-focused as ever (especially in the scenario where some slowdown does happen) and think that longevity is relevant to AI safety (slowing the brain decline of AI researchers is important for getting them to realistically “keep in control with AI” and “cyborg-integrate”).
The upside-to-downside ratio of Adderall becomes more “worth it” given its level of neurotoxicity risk (also see @typedfemale on Twitter).