I’m an experienced policy advisor currently living in New Zealand.
S.E. Montgomery
Thanks for your response. On reflection, I don’t think I expressed what I was trying to say very well in the paragraph you quoted, and I agree with what you’ve said.
My intent was not to suggest that Will or other FTX Future Fund advisors were directly involved (or that it’s reasonable to think so), but rather that there may have been things the advisors chose to ignore, such as Kerry’s mention of Sam’s unethical behaviour in the past. Thus, either Sam was incredibly charismatic and good at hiding things, or there actually were some warning signs and those involved with him showed poor judgement of his character (or maybe some mix of both).
I am glad you felt okay to post this—being able to criticise leadership and think critically about the actions of the people we look up to is extremely important.
I personally would give Will the benefit of the doubt regarding his involvement in/knowledge of the specific details of the FTX scandal, but as you pointed out, the fact remains that he and SBF were friends going back nearly a decade.
I also have questions about Will MacAskill’s ties with Elon Musk: his introduction of SBF to Musk, his willingness to help SBF put up to 5 billion dollars towards the acquisition of Twitter alongside Musk, and the lack of engagement with the EA community about these actions. We talk a lot about being effective with our dollars, and there are so many debates around how to spend even small amounts of money (eg. at EA events or on small EA projects), but it appears that helping SBF put up to 5 billion towards Twitter to buy in with a billionaire who had recently advocated voting for the Republican party in the midterms didn’t require that same level of discussion/evaluation/scrutiny. (I understand that it wasn’t Will’s money, and possibly SBF couldn’t have been talked into putting it towards other causes instead, but Will still made the introduction.)
I love this! Thanks for sharing :)
Red-teaming contest: demographics and power structures in EA
Thanks Julia; this is a really insightful post. I will make sure to use it if anyone in the EA community asks me questions related to community health/the process for complaints in the future.
One of the things I’m curious about is how you see the balance of these trade-offs:
Encourage the sharing of research and other work, even if the people producing it have done bad stuff personally
Don’t let people use EA to gain social status that they’ll use to do more bad stuff
Take the talent bottleneck seriously; don’t hamper hiring / projects too much
Take culture seriously; don’t create a culture where people can predictably get away with bad stuff if they’re also producing impact

It feels like CEA’s default is to be overly cautious and tread lightly in situations where someone is accused of bad behaviour. (Ie. if ‘cautious action’ vs ‘rash action’ is a metric here, I would think that CEA would sit considerably more on the cautious side.) This is quite understandable, but I wonder how you think about the risk of being too slow to condemn certain behaviours?
For example, I could imagine situations where something bad happens, and both the accuser and the accused contribute valuable work to the community. However, due to CEA’s response leaning towards the side of caution, the accuser walks away feeling like their complaint hasn’t been taken seriously enough/that CEA should have been quicker to act, and possibly feels less inclined to be involved in EA in the future. Do you feel like this has happened and, if so, how do you think about these types of situations?
Good point—an aspect of this that I didn’t expand on much is that it’s really important for organisers to do things that they enjoy doing, which helps it not feel forced.
On the other hand, I have had conversations with our group about maximising time spent together as a way to build better friendships, and people generally reacted to this idea better than I imagined! I think sharing your intention to maximise friendship-building activities will feel robotic to some people, but others may appreciate the thought and effort behind it.
Community builders should focus more on supporting friendships within their group
Thanks for posting this—it was an interesting and thoughtful read for me as a community builder.
This summarised some thoughts I’ve had on this topic previously, and the implications on a large scale are concerning at the very least. In my experience, EA’s growth over the past couple of years has meant bringing on a lot of people with specific technical expertise (or people who are seeking to gain this expertise), such as those working on AI safety/biorisk/etc., with a skillset that would broadly include mathematics, statistics, logical reasoning, and some level of technical expertise/knowledge of their field. Often (speaking anecdotally here) these would be the type of people who:
are really good at working on detailed problems with defined parameters (eg. software developers)
are very open to hearing things that challenge or further their existing knowledge, and will seek these things out
will be easily persuaded by good arguments (and probably unlikely to push back if they find the arguments mostly convincing)
These people are pretty easy for community builders to deal with because there is a clear, well-forged pathway within EA for them. Community builders can say, “Go do a PhD in biorisk,” or “There’s a job open at DeepMind, you should apply for it,” and the person will probably go for it.
On the other hand, there are a whole range of people who don’t have the above traits, and instead have one (or more) of the following traits:
prefer broader, messier problems (eg. policy analysts) and are not great at working on detailed problems within defined parameters (or maybe less interested in these types of problems)
are somewhat open to hearing things that challenge or further their existing knowledge, but might not continue to engage if they initially find something off-putting
can be persuaded to accept new arguments, but are more likely to push back, hold onto scepticism for longer, and won’t accept something simply because it is the commonly held view, even if the arguments for it are generally good
These people are harder for community builders to deal with, as there is no clear, well-forged pathway for them within EA, and they might also be less convinced by the pathways that do exist. (For example, a community builder might push someone with these traits towards working in AI policy, but they might not be as convinced that working in AI policy is important, or that they personally can make a big difference in the field, and they won’t be as easily persuaded to apply for jobs in AI policy.) These people might also feel a bit lost when EAs try to push them towards high-impact work—they see the world in greyer terms, they carry more uncertainty, and they are more hesitant to go “all in” on a specified career path.
I think there is a great deal of value that can be derived if EA can find ways to engage with people with these traits, and I also think people with at least one of these traits are probably more likely to fall into the categories that you highlighted in your post – government/policy experts, managers, cause prioritizers (can’t think of a better title here), entrepreneurs, and people with high social/emotional skills. These are people who like big, messy, broad problems and who may generally take more time to accept new ideas and arguments.
In my community-building role, I want to attract and keep more of these people! I don’t have good answers for how to do this (yet), but I think being aware of the issue and trying to figure out some possible ways in which more people with these skills can be brought on board (as well as trying to figure out why EA might be off-putting to some of these people) is a great start.
I’m not sure I agree with this. I agree that compassion is a good default, but I think that compassion needs to be extended to all the people who have been impacted by the FTX crisis, which will include many people in the ‘Dank EA Memes’ Facebook group. Humour can be a coping mechanism that makes some people feel better about bad situations.
Maybe there is a way to use humour that feels kinder, but I personally have yet to see anything since the FTX crisis started that could be described as “compassionate” and that also made me laugh as much as those memes did.