I should note that I don’t see a stronger focus on character as the only thing we should be doing to improve effective altruism! Indeed, I don’t even think it is the most important improvement. There have been many other suggestions for improving institutions, governance, funding, and culture in EA that I’m really excited about. I focused on character (and decision procedures) in my talk because it was a topic I hadn’t seen much in online discussions about what to improve, because I have some distinctive expertise to impart, and because it is something that everyone in EA can work on.
This feels like a missed opportunity.
My sense is that this was an opportunity to give a “big picture view” rather than to note one particular underrated aspect. If you think there were more important improvements, why not name them, at least as context, in one of the largest forums on this topic?
Thanks for your work :)
I had the same impression: a big-picture talk was expected, but the actual talk focused on a single issue, in a fairly philosophical framing.
So I thought it was an excellent talk, but it still left me with a strange feeling as the first and most prominent EAG response to what happened. (I know this made sense given that other sessions covered other aspects, but I am unsure it will be perceived that way, as just one slice of the puzzle, given its keynote character.)
I want to second this being a missed opportunity to talk about wider issues, including governance (I tweeted as much at the time). Listeners are likely to interpret, from your focus on character, and given your position as a leading EA speaking on the most prominent platform in EA—the opening talk at EAG—that this is all effective altruists should think about. But we don’t try to stop crimes just by encouraging people to have good character. And, if the latest Time article is to be believed, there was lots of evidence of SBF’s bad character, but this seemingly wasn’t sufficient to avert or mitigate disaster.
I still find it surprising and disappointing that there has been no substantive public discussion of governance reform from EA leaders (I keep asking people to point me to some, but no one has!). At the very least you’d have expected someone to do the normal academic thing of “we considered all these options, but we ruled them out, which is why we’re sticking with the status quo”.
“Listeners are likely to interpret, from your focus on character, and given your position as a leading EA speaking on the most prominent platform in EA—the opening talk at EAG—that this is all effective altruists should think about.”
Really? I don’t think I’ve ever encountered someone interpreting the topic of an EAG opening talk as being “all EAs should think about”.
Maybe I should have phrased what I said somewhat differently, but I expect EAs to take their cues very heavily from what established community leaders say, particularly when they speak in the ‘prime time’ slots.
I value the “it is something that everyone in EA can work on” sentiment.
Particularly in these times, I think it is excellent to find things that (1) seem robustly good and (2) we can broadly agree, as a community, to do more of. It can help alleviate feelings of powerlessness (and help with that is, I believe, one of the things we need).
This seems to be one of those things. Thanks!
I’m really grateful that you gave this address, especially with the addition of this comment. Would you be willing to say more about which other suggestions for improvement you would be excited to see adopted in light of the FTX collapse and other recent events? For the reasons I gave here, I think it would be valuable for leaders in the EA community to be talking much more concretely about opportunities to reduce the risk that future efforts inspired by EA ideas might cause unintended harm.