I want to emphasise that Background section 5 is the OP saying, “The recent TIME article doesn’t make a very precise argument; here is my attempt at steelmanning/clarifying a major argument made in that article, which I will then respond to … “EA leaders” … did not take enough preventative action and are therefore (partly) responsible for FTX’s collapse”.
In other words, I don’t think Ben is “suggesting” EA leaders are partly responsible. I think Ben is saying “I think TIME is claiming they are? Well, here’s my response...”
(But I’m glad you asked these questions; the apparent prevalence and persistence of hindsight bias in the EA movement today has been one of the biggest updates for me in recent months. I wondered whether it might be because EA had generally been selecting for something like ‘smart’ but not ‘rationalist,’ but I’m not sure the rationalists have fared much better, and I think people outside both communities have tended to fare better in reacting to recent EA events. My latest theory is simply that I’d underestimated the allure of gossip, public shaming, witch hunts, etc., and how easy it is to stir things up in an online world. Maybe I should read some of your work; it could be a grounding counterweight to the lofty rationalist and altruistic ideals I have for myself and this community!)
Ubuntu—yes, regarding the underestimated ‘allure of gossip, public shaming, witch hunts, etc’, I think the moral psychology at work in these things runs so deep that even the most rationalist & clever EAs can be prone to them—and then we can sometimes deceive ourselves about what’s really going on.
However, the moral psychology around public shaming evolved for some good adaptive reasons: to help deter bad actors, solve coordination problems, enforce social norms, virtue-signal our values, internalize self-control heuristics, etc. So I don’t think we should dismiss these instincts entirely. (My 2019 book ‘Virtue Signaling’ addresses some of these issues.)
Indeed, leveraging the power of these ‘darker’ facets of moral psychology (e.g. public shaming) has arguably been crucial in many effective moral crusades throughout history, e.g. against torture, slavery, sexism, racism, nuclear brinksmanship, chemical weapons, etc. They may still prove useful in fighting against AI X-risk...
I think it would be problematic if a society heaped full adoration on risk-takers when their risks worked out, but doled out negative social consequences (which I’ll call “shame” to track your comment) only based on ex ante expected-value analysis when things went awry. That would overincentivize risk-taking.
To maintain proper incentives, one could argue that society should map the amount of public shame/adoration to the expected value of the decision(s) made in cases like this, whether the risk works out or not. However, it would be both difficult and burdensome to figure out all the decisions someone made, assign an EV to each, and then sum to determine how much public shame or adoration the person should get.
By assigning shame or adoration primarily based on the observed outcome, society administers these incentives in a way that makes the expected shame/adoration at least somewhat related to the EV of the decision(s) made. Unfortunately, that approach means that people whose risks don’t pan out often end up with shame that may not be morally justified.
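The incentive argument above can be made concrete with a toy numeric sketch (my own illustration, not anything from the thread): if society's reaction scales with the realized outcome — adoration proportional to the benefit when a risk pays off, shame proportional to the harm when it fails — then the *expected* social feedback for a decision tracks that decision's expected value, even though any individual failed bet still draws shame the decision-maker may not "deserve".

```python
# Toy model (hypothetical numbers): outcome-based social feedback vs. decision EV.

def decision_ev(p_success, benefit, harm):
    """Expected value of a risky decision: p * benefit - (1 - p) * harm."""
    return p_success * benefit - (1 - p_success) * harm

def expected_social_feedback(p_success, benefit, harm, k=1.0):
    """Expected adoration-minus-shame when feedback is outcome-proportional.
    k scales how strongly society reacts to realized outcomes."""
    adoration = k * benefit   # reaction if the risk pays off
    shame = k * harm          # reaction if it fails
    return p_success * adoration - (1 - p_success) * shame

# A positive-EV bet: 40% chance of a big win, 60% chance of a smaller loss.
p, gain, loss = 0.4, 100.0, 50.0
print(decision_ev(p, gain, loss))               # 10.0 -- positive-EV decision
print(expected_social_feedback(p, gain, loss))  # 10.0 -- expected feedback tracks EV
# ...yet in the 60%-likely failure case, the realized feedback is -50 of shame.
```

The alignment holds only in expectation: a good decision with an unlucky outcome still collects the full outcome-sized shame, which is exactly the unfairness-to-individuals that the outcome-based regime trades for cheap administrability.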
Ubuntu—thanks for the correction; you’re right; I misread that section as reflecting Ben’s views, rather than as his steel-manning of TIME’s views. Oops.
So, please take my reply as a critique of TIME’s view, rather than as a critique of Ben’s view.
Yeah, that’s a fair point to raise. I guess I’m just lamenting that these facets haven’t been refined enough by this point to produce fewer false positives.