(Dashing this off quickly, so some of this may be inelegantly stated.)
I appreciate this response, and I think that elements of it apply to many other risks an EA could take, including business ventures and work on charitable causes that may be high-return but carry a significant risk of major public backlash or other bad consequences.
Even if we have a collective reason to seek very good results even at the cost of taking on risk (slowly diminishing marginal utility, as noted in the post), and even if the community can internally tolerate a few individual disasters (because we have collective resources to fall back on), we get a lot of value from having a reputation for wisdom, caution, and common sense (especially given the natural weirdness of so many core EA ideas).
This doesn't mean we should necessarily avoid any particular risk, but it seems important for would-be risk-takers to consider, even if they are personally open to bearing a lot of risk for the sake of EA goals.