I’m thinking about writing something on AI’s implications for mental healthcare—e.g., within 5-10 years should we expect to benefit from much more advanced (and side-effect-free) anti-depressants and anti-anxiety medication, AI-boosted meditation and CBT techniques, neurological hardwiring, etc.? Could this end up being a net negative if this leads to people with access to these becoming much more complacent about the suffering of others? I’ve only done a very shallow dive into this so it would probably be very quick and scrappy
Great idea. I was wondering whether to approach this sort of angle from the opposite (pessimistic) view: if knowledge work becomes less common or disappears entirely, and humans increasingly engage in less cognitively effortful forms of leisure, would this need to be counteracted with forms of brain training? Arguably this is already becoming an issue, with people unable to sustain the attention span needed to read longer-form text.