Here’s a followup with some reflections.
Note that I discuss some takeaways and potential lessons learned in this interview.
Here are some (somewhat redundant with the interview) things I feel like I’ve updated on in light of the FTX collapse and aftermath:
The most obvious thing that’s changed is a tighter funding situation, which I addressed here.
I’m generally more concerned about the dynamics I wrote about in EA is about maximization, and maximization is perilous. If I wrote that piece today, most of it would be the same, but the “Avoiding the pitfalls” section would be quite different (less reassuring/reassured). I’m not really sure what to do about these dynamics, i.e., how to reduce the risk that EA will encourage and attract perilous maximization, but a couple of possibilities:
It looks to me like the community needs to beef up and improve investments in activities like “identifying and warning about bad actors in the community,” and I regret not taking a stronger hand in doing so to date. (Recent sexual harassment developments reinforce this point.)
I’ve long wanted to try to write up a detailed intellectual case against what one might call “hard-core utilitarianism.” I think arguing about this sort of thing on the merits is probably the most promising way to reduce associated risks; EA isn’t (and I don’t want it to be) the kind of community where you can change what people operationally value just by saying you want it to change, and I think the intellectual case has to be made. I think there is a good substantive case for pluralism and moderation that could be better-explained and easier to find, and I’m thinking about how to make that happen (though I can’t promise to do so soon).
I had some concerns about SBF and FTX, but I largely thought of the situation as not being my responsibility, as Open Philanthropy had no formal relationship to either. In hindsight, I wish I’d reasoned more like this: “This person is becoming very associated with effective altruism, so whether or not that’s due to anything I’ve done, it’s important to figure out whether that’s a bad thing and whether proactive distancing is needed.”
I’m not surprised there are some bad actors in the EA community (I think bad actors exist in any community), but I’ve raised my estimate of how much harm a small set of them can do, and hence I think it could be good for Open Philanthropy to become more conservative about funding and associating with people who might end up being bad actors (while recognizing that it won’t be able to predict perfectly on this front).
Prior to the FTX collapse, I had been gradually updating toward feeling like Open Philanthropy should be less cautious with funding and other actions, and quicker to trust our own intuitions and people who intuitively seemed to share our values. Some of this update was based on thinking that some folks associated with FTX were being successful with more self-trusting, less-cautious attitudes; some of it was based on seeing few immediate negative consequences of things like the Future Fund regranting program; some of it was probably a less rational response to peer pressure. I now feel the case for caution and deliberation in most actions is quite strong—partly because the substantive situation has changed (effective altruism is now enough in the spotlight, and controversial enough, that the costs of further problems seem higher than they did before).
On this front, I’ve updated a bit toward my previous self, and more so toward Alexander’s style, in terms of wanting to weigh both explicit risks and vague misgivings significantly before taking notable actions. That said, I think balance is needed and this is only a fairly moderate update, partly because I didn’t update enormously in the other direction before. I think I’m still overall more in favor of moving quickly than I was ~5 years ago, for a number of reasons. In any case I don’t expect there to be a dramatic visible change on this front in terms of Open Philanthropy’s grantmaking, though it might be investing more effort in improving functions like community health.
Having seen the EA brand under the spotlight, I now think it isn’t a great brand for wide public outreach. It throws together a lot of very different things (global health giving, global catastrophic risk reduction, longtermism) in a way that makes sense to me but seems highly confusing to many, and puts them all under a wrapper that seems self-righteous and, for lack of a better term, punchable? I still think of myself as an effective altruist and think we should continue to have an EA brand for attracting the sort of people (like myself) who want to put a lot of dedicated, intensive time into thinking about what issues they can work on to do the most good; but I’m not sure this is the brand that will or should attract most of the people who can be helpful on key causes. I think it’s probably good to focus more on building communities and professional networks around specific causes (e.g., AI risk, biorisk, animal welfare, global health) relative to building them around “EA.”
I think we should see “EA community building” as less valuable than before, if only because one of the biggest seeming success stories now seems to be a harm story. I think this concern applies to community building for specific issues as well. It’s hard to make a clean quantitative statement about how this will change Open Philanthropy’s actions, but it’s a factor in how we recently ranked grants. I think it’ll be important to do quite a bit more thinking about this (and in particular, to gather more data along these lines) in the longer run.
Thanks for writing this up. I agree with most of these points. However, not with the last one:

I think we should see “EA community building” as less valuable than before, if only because one of the biggest seeming success stories now seems to be a harm story. I think this concern applies to community building for specific issues as well.
If anything, I think the dangers and pitfalls of optimization you mention warrant different community building, not less. Specifically, I see two potential dangers to pulling resources out of community building:
Funded community builders would possibly have even stronger incentives to prioritize community growth over sustainable planning, accountability infrastructure, and community health. To my knowledge, CEA’s past funding policy incentivized community builders to Goodhart on acquiring new talent and funds, at the cost of building sustainable network and structural capital, and at the cost of fostering constructive community norms and practices. As long as one avoided visibly damaging the EA brand or turning off the very most talented individuals, it was simply financially unreasonable to pay much attention to these things.
In other words, the financial incentives so far may have forced community builders into becoming the hard-core utilitarians you are concerned about. Accordingly, they were forced to be role models of hard-core utilitarianism for those they built community for. This may have contributed to the EA orthodoxy pre-FTX collapse, in which, as it seemed to me, hard-core utilitarianism was generally considered synonymous with value-alignedness and high status.
I don’t expect this problem to get better if the bar for getting/remaining funded as a community builder gets higher—unless the metrics change significantly.
Access to informal networks would become even more crucial than it already is. If we take money out of community building, we apply optimization pressure away from welcomingness and low entry barriers to the community. Even more of EA’s onboarding and mentorship than is already the case will be tied to informal networks. Junior community members will experience even stronger pressure to try to get invited to the right parties, impress the right people, and become friends and lovers with those who have money and power.
Accordingly, I suspect that the actual answer here is more professionalization, and in a different direction. Specifically:
Turning EA community building from a career stepping stone into a long-term career, with proper training, financial security, and everything. (CEA has already thought of this, of course; I can’t find the relevant post.)
Having more (and more professionalized) community health infrastructure in national and local groups. For example, point people whom community members actually know and can talk to in person.
CEA’s community health team is important, and as far as I can tell, they are doing a fairly impressive job. But I think the bar for reaching out to community health people could be much lower than it currently is. For many community members, CEA’s team are just strangers on the internet, and I suspect that all too many new community members (i.e., those most vulnerable to power abuse, harassment, and peer pressure) haven’t heard of them in the first place.
Creating stronger accountability structures in national and local groups, like a board of directors that oversees larger local groups’ work without being directly involved in it. (For example, EA Munich recently switched to a board structure, and we are working on that in Berlin ourselves.)
For this to happen, we would need more experienced and committed people in community building. While technically a board of directors can be staffed entirely by volunteers, withdrawing funding and prestige from EA community building will make it more difficult to get the necessary number of sufficiently experienced and committed people on board.
Thoughts, disagreement?
(Disclaimer on conflict of interest: I’m currently EA Berlin’s Community Coordinator and fundraising to turn that into a paid role.)
I have not thought much about this and do not know how far this applies to others (might be worth running a survey), but I very much appreciate the EA community. This is because I am somewhat cause-agnostic but have a skillset that might be applied to different causes. Hence, it is very valuable for me to have a community that ties together all these different causes, as it makes it easier to find work that I might be a good fit for. In a scenario where EA did not exist and there were only separate causes (though I think Holden Karnofsky only meant to invest less in EA, not abandon the project altogether), I would need to keep up to date on perhaps ten or more separate communities in order to come across relevant opportunities to help.
Thanks for the update! I agree with Nathan that this deserves its own post.
Re your last point, I always saw SBF/FTX (when things were going well) as a success story relating to E2G/billionaire philanthropy/maximisation/hardcore utilitarianism/risk-taking/etc. I feel these are the factors that make SBF’s case distinctive, and the connection to community building is more tenuous.
This being the case, the whole thing has updated me away from those things, but it hasn’t really updated my view on community building (other than that we should be doing things more in line with Ord’s opening address at EAG Bay Area).
I’m surprised you see things differently and would be interested in hearing why that is :)
Maybe I’m just biased because I’m a professional community builder!
Thanks for writing this.
It feels off to me that this is a forum reply. It seems important enough that it should be a post, and then shown to people in accordance with that.
Hey Holden,
Thanks for these reflections!
Could you maybe elaborate on what you mean by a ‘bad actor’? There’s some part of me that feels nervous about this as a framing, at least without further specification—like maybe the concept could be either applied too widely (e.g. to anyone who expresses sympathy with “hard-core utilitarianism”, which I’d think wouldn’t be right), or have a really strict definition (like only people with dark tetrad traits) in a way that leaves out people who might be likely to (or: have the capacity to?) take really harmful actions.
To give a rough idea, I basically mean anyone who is likely to harm those around them (using a common-sense idea of doing harm) and/or “pollute the commons” by having an outsized and non-consultative negative impact on community dynamics. It’s debatable what the best warning signs are and how reliable they are.
Thoughts on “maximisation is perilous”:
(1) We could put more emphasis on the idea of “two-thirds utilitarianism”.
(2) I expect we could come up with a better name for two-thirds utilitarianism and a snappier way of describing the key thought. Deep pragmatism might work.
(I made these webpages a couple days after the FTX collapse. Buying domains is cheaper than therapy…)
Thanks so much for these reflections. Would you consider saying more about which other actions seem most promising to you, beyond articulating a robust case against “hard-core utilitarianism” and improving the community’s ability to identify and warn about bad actors? For the reasons I gave here, I think it would be valuable for leaders in the EA community to be talking much more concretely about opportunities to reduce the risk that future efforts inspired by EA ideas might cause unintended harm.
I laughed as I agreed about the “punchable” comment. Certainly, as a non-STEM individual, much of EA seems punchable to me; SBF’s face in particular should inspire a line of punching bags embroidered with it.
But for this to lead you to downgrade EA community building seems like wildly missing the point, which is to be less punchable, i.e., more “normal,” “likable,” “relatable to average people.” I say this from long experience in movement building: the momentum and energy a movement like EA creates is tremendous and may even lead to saving the world. It is simply a movement that has reached a maturation waypoint, one that uncovers common, normal problems, like when you show up to your first real job and discover your college-kid cultural mindset needs an update.
The problem is not EA community building; it is getting seduced by billionaire/elite/Elon culture and getting sucked into it, like Clinton hanging out with Epstein... Oops. Don’t cut off the growth energy of a rare, energetic movement; just fix whatever sucked people in toward the big money. Said with much love and respect for all you and the early EA pioneers have done. I’ve seen movements falter, trip, and fall... don’t do that. Learn and adjust, but do not pull back. EA community building is literally the living body; you can’t stop feeding it.