Thanks Ben, there’s a lot of food for thought here, most of which I basically agree with. I think I make all the same non-updates as you. As a staunch utilitarian in principle but longtime advocate of much more ‘normie’ behaviours for the EA movement, I have a few extra takes:
I should act on the assumption EAs aren’t more trustworthy than average
This seems like a mistake as written (though the clarifying text makes me think you didn’t mean it literally). In many contexts trustworthiness can be basically binary (for eg are you willing to open source your bank details among your local EA community?), but when interacting 1-1 or when designing incentive structures among groups of people, you can view it as a sliding scale—eg, ‘how much background checking would I want to do per resource I’m considering giving this person?’ In the latter case, it seems like a horrible overreaction to assume they’re no more trustworthy than average.
I would also lean away from SBF exceptionalism and the distinction of ‘a significant minority who seem more likely to defect or be deceptive than average’. While I also think the typical EA is more trustworthy than average, I neither think SBF was a monster, nor that those who interacted with him and ignored/failed to see warning signs were otherwise perfect, and I lean towards updating more that EAs in general rationalise and follow perverse incentives, albeit still less so than average.
EAs are also less competent than I thought, and have worse judgement of character and competence than I thought.
I agree with this, and think there are many pre-FTX examples suggesting it. But another plausible take would be that EAs are relatively homogeneous, and so as a community have relatively large blind spots towards certain classes of exploitation, and low ability to assess competence beyond their key skills (as you allude to below). To the extent that that's true, it seems more valuable to build the community towards covering those blind spots and developing other skills than to downgrade your assessment of EAs. Ie, though perhaps it's similar in practice, view the non-EA world as more competent, rather than viewing EAs as less so.
I think this emphasis helps avoid the temptation to start the whole cycle again… ‘old EA was flawed, but this time we’re going to reinvent everything better than ever!’
All the above makes me feel tempted to negatively update on the community’s epistemics across the board
I would be more inclined to do so. The whole AI risk movement is predicated on the idea that intelligence generalises, so it seems somewhat self-effacing for the same group of people to say ‘we did this thing which was dumb, but we shouldn’t downgrade our estimate of our overall competence’.
I also have the sense—although I’m not sure how justifiably—that speculative EA ideas consist in long chains of causal reasoning whose links many people skip over out of a sense of trust. But those ideas tend to be promoted by people with a financial- or status-based incentive to argue for interesting conclusions that require (funding for) more research. So if we are going to downgrade our trust in the average integrity of EAs, we should increase our credence that those ideas have been partially propagated due to such incentives.
Governance seems more important.
Strongly agree, but in particular I would like to see EA orgs look much more habitually outside of their social circle when hiring staff and board members alike. ‘Value-alignment’ should be seen as a two-edged sword, not a prerequisite.
This suggests that media based outreach is going to be less effective going forward.
I argued elsewhere that the previous lack of media outreach was partly responsible for the high proportion of coverage being bad (since if you tell everyone who supports you not to talk to the media about you, all the people talking to the media about you are people who don’t support you), and that seems as true as ever to me. Obviously we should exercise caution, and maybe reconsider who the best people to talk to the media are, but shutting down outreach completely in the wake of FTX looks like an admission of guilt, and seems like a surefire way to extend the duration of bad publicity.
IMO the problem many people have had with EA is less its ideas and more its lack of humility, so if we can genuinely address the latter going forward, it could allow a healthier and more productive engagement with the outside world.
In general the above thoughts make me more pro-community-building than you, since most of the problems seem fixable:
My estimate of the fraction of dangerous actors attracted has gone up, and my confidence in our ability to spot them has gone down, making the community less valuable.
This seems fixable by getting more people in who can see our blind spots. Ie by growing the community.
Tractability seems lower due to the worse brand, worse appeal and lower morale, and potential for negative feedback loops.
I agree the ‘brand’ needs serious reconsideration. I suspect treating it as a brand has caused many problems of its own—but the philosophy seems unlikely to suffer much lasting damage, and organising a movement around a philosophy seems to have much better historical precedent than organising one around a brand.
I honestly think it would be worth semi-formally relaunching it under a new name, one which better admits of people applying it to themselves without either sounding so hubristic or sounding like it’s integral to their identity. ‘Vegetarians’ seem like a good point of comparison here.
More speculatively, as of January, I feel more pessimistic about the community’s ability to not tear itself apart in the face of scandal and setbacks.
As arguably one of the tearers, I think this brought to the surface concerns some of us had been developing for years. If the brand-managers do actually update towards greater epistemic and social humility, better governance practices, and a better appreciation for the input of both aligned and non-aligned crowds, then I would feel substantially more optimistic about the community’s response to future problems.
I’m also more concerned that community is simply unpleasant for many people to be part of.
Most community building to date seems to have focused on professional networking. Personally, by far the most rewarding and productive EA events I’ve been to have been those about building connections organically in more social settings. I think if there were more emphasis on events feeling good, the community would feel more like a community, be more pleasant to be a part of, and be less inclined to attract radicals. It would also make the people who tend to make it feel bad more visible by contrast.
I’m less into the vision of EA as an identity / subculture / community, especially one that occupies all parts of your life
My above comments notwithstanding, I feel like the ‘not occupying all parts of your life’ idea is robustly good advice. But something can be both a welcoming community and non-all-consuming. The Quakers seem like a healthy comparison, for example.
I feel unsure if EA should become either more or less centralised, and suspect some of both might make sense.
I continue to wish for someone to try creating a network of small nonprofits, with enough redundancy to allow friendly competition but enough trust and careful enough alignment of incentives to allow high levels of intra-network support. This could turn out to be a terrible, or at least hopelessly impractical, vision, but if we think the ‘single body that speaks for & represents <the community>, but where there’s no community-wide membership or control’ approach is bad—which I also do—then it seems like a good time to test structural alternatives.
I feel unsure how to update about promotion of earning to give.
To the extent that we think people at EA orgs shared some responsibility for FTX, it seems like we should downgrade the value of working at EA orgs, and thus upgrade the value of earning to give. But, given your own comments below about the extremely disproportionate value of billionaires, it seems like we should also continue to be mostly in favour of billionaire-creation. So I’m also unsure how to think about this—though I guess E2G is more consistent with epistemic humility than committing yourself to a particular organisation.