Fwiw, I think we have different perspectives here—outside of epistemics, everything on that list is there precisely because we think it’s a potential source of some of the biggest risks. It’s not always clear where risks are going to come from, so we look at a wide range of things, but we are in fact trying to be on the lookout for those big risks. Thanks for flagging that it doesn’t seem like we are; I’m not sure if this comes from miscommunication or a disagreement about where big risks come from.
Maybe another point of discrepancy is that we primarily think of ourselves as looking for high-impact gaps, places where someone should be doing something but no one is; risks are a subset of that, but not the entirety.
(To be clear I also agree with Julia that it’s very plausible EA should have more capacity on this)
Yeah, I’m not trying to stake out a claim on what the biggest risks are.
I’m saying: suppose some community X has a team A that is primarily responsible for risk management. In one year, some risks materialise as giant catastrophes—risk management has gone terribly. The worst. But the community is otherwise decently good at picking out impactful meta projects. Then team A says “we’re actually not just in the business of risk management (the thing that is going poorly); we also see ourselves as generically trying to pick out high-impact meta projects. So much so that we’re renaming ourselves ‘Risk Management and cool meta projects’.” And to repeat, we (impartial onlookers) think that many other teams have been capable of running impactful meta projects. We might start to wonder whether team A is losing its focus, and losing track of the most pertinent facts about the strategic situation.
My understanding was that community health to some extent carries the can for catastrophe management, along with other parts of CEA and EA orgs. Is this right? I don’t know whether people within CEA think anyone within CEA bears any responsibility for any part of the past year’s catastrophes. (I don’t know as in I genuinely don’t know—it’s not a leading statement.) Per Ryan’s comment, the actions you have announced here don’t seem at all appropriate given the past year’s catastrophes.
I imagine that, for a number of reasons, it’s not a good idea to put out an official, full CHSP List of Reasonably-Specific, Major-to-Catastrophic Risks complete with current and potential evaluation and mitigation measures. And your inability to do so likely makes it difficult to fully brief the community about your past, current, and potential efforts to manage those kinds of risks.
My guess is that a sizable fraction of the major-to-catastrophic risks center around a fairly modest number of key leaders, donors, and organizations. If that’s so, there might be benefit to more specifically communicating CHSP’s awareness of that risk cluster and high-level details about possible strategies to improve performance in that specific cluster (or to transition responsibility for that cluster elsewhere).