Mental health support for those working on AI risks and policy?
During the numerous projects I work on relating to AI risks, policies, and future threats/scenarios, I speak to a lot of people who are being exposed to issues of a catastrophic and existential nature for the first time (or grappling with them in detail for the first time). This, combined with the likelihood that things will get worse before they get better, makes me frequently wonder: are we doing enough around mental health support?
Below are things that I don’t know exist but feel should. Some may sound over the top, but I expect you could fund all of these for c. $300k, which, relative to the amount being spent in the sector as a whole, is tiny in exchange for the resilience of the talent we’re building.
- Structured, proactive therapy or mental health resilience sessions tied into Fellowship programs.
- Regular, built-in mental health support within organisations dealing with AI risks, particularly around helping people to anti-catastrophise. There are several threat models that are only catastrophically bad if many, many things go wrong and happen counter to regular incentives, yet they are very worrying to people, especially new entrants to the field. Support to help prioritise the risks would improve both outcomes and mental health.
- Free-to-use mental health support services for those in the field.