I’m an experienced policy advisor currently living in New Zealand.
S.E. Montgomery
Community builders should focus more on supporting friendships within their group
Good point—one aspect of this that I didn’t expand on much is that it’s really important for organisers to do things that they enjoy doing, which helps it not feel forced.
On the other hand, I have had conversations with our group about maximising time spent together as a way to build better friendships, and people generally reacted to this idea better than I imagined! I think sharing your intentions to maximise friendship-building activities will feel robotic to some people, but others may appreciate the thought and effort behind it.
Thanks Julia; this is a really insightful post. I will make sure to use it if anyone in the EA community asks me questions related to community health/the process for complaints in the future.
One of the things I’m curious about is how you see the balance of these trade-offs:
Encourage the sharing of research and other work, even if the people producing it have done bad stuff personally
Don’t let people use EA to gain social status that they’ll use to do more bad stuff
Take the talent bottleneck seriously; don’t hamper hiring / projects too much
Take culture seriously; don’t create a culture where people can predictably get away with bad stuff if they’re also producing impact
It feels like CEA’s default is to be overly cautious and tread lightly in situations where someone is accused of bad behaviour. (I.e. if ‘cautious action’ vs ‘rash action’ is a metric here, I would think that CEA would sit considerably more on the cautious side.) This is quite understandable, but I wonder how you think about the risk of being too slow to condemn certain behaviours?
For example, I could imagine situations where something bad happens, and both the accuser and the accused contribute valuable work to the community. However, due to CEA’s response leaning towards the side of caution, the accuser walks away feeling like their complaint hasn’t been taken seriously enough/that CEA should have been quicker to act, and possibly feels less inclined to be involved in EA in the future. Do you feel like this has happened and, if so, how do you think about these types of situations?
I love this! Thanks for sharing :)
I am glad you felt okay to post this—being able to criticise leadership and think critically about the actions of the people we look up to is extremely important.
I personally would give Will the benefit of the doubt regarding his involvement in/knowledge of the specific details of the FTX scandal, but as you pointed out, the fact remains that he and SBF were friends going back nearly a decade.
I also have questions about Will MacAskill’s ties with Elon Musk, his introduction of SBF to Elon Musk, his willingness to help SBF put up to 5 billion dollars towards the acquisition of Twitter alongside Musk, and the lack of engagement with the EA community about these actions. We talk a lot about being effective with our dollars and there are so many debates around how to spend even small amounts of money (eg. at EA events or on small EA projects), but it appears that helping SBF put up to 5 billion towards Twitter to buy in with a billionaire who recently advocated voting for the Republican party in the midterms didn’t require that same level of discussion/evaluation/scrutiny. (I understand that it wasn’t Will’s money and possibly SBF couldn’t have been talked into putting it towards other causes instead, but Will still made the introduction nonetheless.)
Thanks for your response. On reflection, I don’t think I said what I was trying to say very well in the paragraph you quoted, and I agree with what you’ve said.
My intent was not to suggest that Will or other FTX Future Fund advisors were directly involved (or that it’s reasonable to think so), but rather that there may have been things the advisors chose to ignore, such as Kerry’s mention of Sam’s unethical behaviour in the past. Thus, either Sam was incredibly charismatic and good at hiding things, or there actually were some warning signs and those involved with him showed poor judgement of his character (or maybe some mix of both).
I’m not sure I agree with this. I agree that compassion is a good default, but I think that compassion needs to be extended to all the people who have been impacted by the FTX crisis, which will include many people in the ‘Dank EA Memes’ Facebook group. Humour can be a coping mechanism which will make some people feel better about bad situations:
“As predicted, individuals with a high sense of humor cognitively appraised less stress in the previous month than individuals with a low sense of humor and reported less current anxiety despite experiencing a similar number of everyday problems in the previous two months as those with a low sense of humor. These results support the view that humor positively affects the appraisal of stressful events and attenuates the negative affective response, and related to humor producing a cognitive affective shift and reduction in physiological arousal (Kuiper et al. 1993; Kuiper et al. 1995; Martin et al. 1993).”
Maybe there is a way to use humour in a way that feels kinder, but I’ve personally yet to see anything since the FTX crisis started that could be defined as “compassionate” but also that made me laugh as much as those memes did.
Strong upvote. It’s definitely more than “just putting two people in touch.” Will and SBF have known each other for 9 years, and Will has been quite instrumental in SBF’s career trajectory—first introducing him to the principles of effective altruism, then motivating SBF to ‘earn to give.’ I imagine many of their conversations have centred around making effective career/donation/spending decisions.
It seems likely that SBF talked to Will about his intention to buy Twitter/get involved in the Twitter deal, at the very least asking Will to make the introduction between him (SBF) and Elon Musk. At the time, it seemed like SBF wanted to give most of his money to effective charities/longtermist causes, so it could be argued that the up to $15 billion put towards buying Twitter would have been money that otherwise would have gone to effective charities/longtermist causes. Given the controversy surrounding the Twitter deal, Elon Musk, and the intersection with politics, it also strikes me as a pretty big decision for SBF to be involved with. Musk had publicly talked about, among other things, letting Donald Trump back on Twitter and being a ‘free speech absolutist.’ These are values that I, as a self-identified EA, don’t share, and I would be extremely concerned if (in a world where the FTX scandal didn’t happen) the biggest funder to the EA movement had become involved in the shitshow that has been Twitter since Musk acquired it. (It seems like the only reason this didn’t happen was because SBF set off Elon Musk’s “bullshit meter,” but I digress.)
It’s hard to say how big of a role Will played here—it’s possible that SBF had narrowed in on buying Twitter and couldn’t be convinced to spend the money on other things (eg. effective charities), or that Will thought buying Twitter was actually a good use of money and was thus happy to make the introduction, or maybe that he didn’t view his role here as a big deal (maybe SBF could have asked someone else to introduce him to Musk if Will declined). Will hasn’t commented on this so we don’t know. The only reason the text messages between Will and Elon Musk became public was because Twitter filed a lawsuit against Musk.
As the commenter above said, I would consider disavowing the community if leaders start to get involved in big, potentially world-changing decisions/incredibly controversial projects with little consultation with the wider community.
Great post! I agree with a commenter above who says that “The problem is not a lack of ideas that needs to be rectified by brainstorming—we have the information already. The problem seems to be that no one wants to act on this information.” That being said, I have a few thoughts:
Regarding code of conduct at events, I’m hesitant to make hard and fast rules here. I think the reality around situations such as asking people out/hitting on people, etc, is that some people are better at reading situations than others. For example, I know couples who have started dating after meeting each other at my local EA group’s events, and I don’t think anyone would see an issue with that. The issue comes in when someone asks someone out/hits on someone and makes the other person uncomfortable in the process. That being said, not asking people out during 1:1s seems like a good norm (I’m surprised I even need to say this, to be frank), as does not touching someone unless you have explicitly asked for their consent to do so (this can apply even to something like hugs), and not making comments on someone’s appearance/facial features/body.
In terms of power structures/conflicts of interest, I would love to see us borrow more from other organisations that have good guidelines around this. I can’t think of any specific ones right now, but I know from my time working in government that there are specific processes to be followed around conflicts of interest, including consensual workplace relationships. I’m sure others can chime in with organisations that do this well.
In terms of hiring, I like what Rethink Priorities is doing. They attempt to anonymise parts of applications where possible, and ask people not to submit photos alongside their CVs. I think more could be done to encourage partially blind hiring/funding processes. For example, an employer/funder could write their first impression of someone’s application without seeing any identifying information (eg. name, age, gender, ethnicity, etc), then record a second impression after seeing it. I’m conscious that names are quite important in EA and that this could add more work to already busy grant-making organisations, but maybe there is a way to do this that would minimise additional work while also helping reduce unconscious bias.
I would also love to see more writing/information/opinions come from the top-down. For example, people who have a big voice in effective altruism could write about this more often and make suggestions for what organisations and local groups can do. We already see this a bit from CEA, but it would be great to see it from other EA orgs and thought leaders. Sometimes I get a sense that people who are higher-up in the movement don’t care about this that much, and I would love to be proven wrong.
Lastly, when it comes to engaging with posts on the forum about this topic, I was disappointed to recently see a post by someone writing about their experiences in the EA NYC community met with a lot of comments disagreeing with how the post was written/how polyamorous men were generally characterised in the post. I think we should establish a norm around validating people when they have bad experiences, pointing them to the community health team, and taking steps to do better. There is no “perfect victim”—we need to acknowledge that sometimes people will have bad experiences with the community and will also hold opinions we disagree with. When they bring up their bad experience, it’s not the time to say, “not all men are like this” or “I disagree with how you went about bringing this up.”
I’m conflicted here. I completely agree with you that shitting on others’ morally-neutral choices is not ideal, but I don’t think anyone was coming away from reading that post thinking that polyamory = bad. I would hope that the people on this forum can engage thoughtfully with the post and decide for themselves what they agree/disagree with.
If someone had a bad experience with a man, and in the process of talking about it said something like, “all men suck and are immoral,” I just don’t think that is the right time or place to get into an argument with them about how they are wrong. It may not even have been coming from a place of “I actually 100% believe this”; it may have just been something thought/written in the heat of the moment while recounting their negative experiences. Again, there’s no “perfect victim” who is going to say things in a way you 100% agree with all the time, but IMO the forum to disagree with them does not need to be while they are recounting their negative experience.
In terms of people coming away from the post thinking that polyamory = bad, I guess I have faith in people’s ability on this forum to separate a bad experience with a community from an entire community as a whole. (Maybe not everyone holds this same faith.)
The post was written by one person, and it was their experience, but I expect by now most EAs have run into polyamorous people in their lives (especially considering that EAs on average tend to be young, male, non-religious, privileged, and more likely to attend elite universities where polyamory/discussions about polyamory might be more common) and those experiences speak for themselves. For example, I personally have met lots of polyamorous people in my life, and I’ve seen everything from perfectly healthy, well-functioning relationships to completely toxic relationships (just like monogamous relationships). So when I engaged with the post, I was thinking, “this person had a bad experience with the poly community, and it sounds terrible. I know from my own experiences that polyamorous relationships can be healthy, but unfortunately that’s not what this person experienced.”
I’m persuaded by your analogy to race, and overall I don’t want the EA community to perpetuate harmful stereotypes about any group, including polyamorous people. I think my main conflict here is I also want a world where women feel okay talking about their experiences without holding the added worry that they might not word things in exactly the right way, or that some people might push back against them when they open up (and I think you would probably agree with this).
I think it could be a cost-effective use of $3-10 billion (I don’t know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it’s not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam’s net-worth at face-value at the time, this didn’t seem like a crazy idea to me.
The 15 billion figure comes from Will’s text messages themselves (pages 6-7). Will sends Elon a text about how SBF could be interested in going in on Twitter, then Elon Musk asks, “Does he have huge amounts of money?” and Will replies, “Depends on how you define ‘huge.’ He’s worth $24B, and his early employees (with shared values) bump that up to $30B. I asked how much he could in principle contribute and he said: ‘~1-3 billion would be easy, 3-8 billion I could do, ~8-15b is maybe possible but would require financing’.”
It seems weird to me that EAs would think going in with Musk on a Twitter deal would be worth $3-10 billion, let alone up to 15 (especially of money that at the time, in theory, would have been counterfactually spent on longtermist causes). Do you really believe this? I’ve never seen ‘buying up social media companies’ as a cause area brought up on the EA forum, at EA events, in EA-related books, podcasts, or heard any of the leaders talk about it. I find it concerning that some of us are willing to say “this makes sense” without, to my knowledge, ever having discussed the merits of it.
I don’t know why Will vouched so hard for Sam though, that seems like a straightforward mistake to me. I think it’s likely Will did not consult anyone else, as like, it’s his right as a private individual talking to other private individuals.
I don’t agree with this framing. This wasn’t just a private individual talking to another private individual. It was Will MacAskill (whose words, beliefs, and actions are heavily tied to the EA community as a whole) trying to connect SBF (at the time one of the largest funders in EA) and Elon Musk to go in on buying Twitter together, which could have had pretty large implications for the EA community as a whole. Of course it’s his right to have private conversations with others and he doesn’t have to consult anyone on the decisions he makes, but the framing here is dismissive of this being a big deal when, as another user points out, it could have easily been the most consequential thing EAs have ever done. I’m not saying Will needs to make perfect decisions, but I want to push back against this idea of him operating in just a private capacity here.
My guess is there must be some public stuff about this, though it wouldn’t surprise me if no one had made a coherent writeup of it on the internet (I also strongly reject the frame that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I have all kinds of crazy schemes for stuff that I think in-expectation beats GiveWell’s last dollar, and I haven’t written up anything close to a quarter of them, and likely never will).
Yeah, there could be some public stuff about this and I’m just not aware of it. And sorry, I wasn’t trying to say that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I was more trying to say that I would find it concerning for major spending decisions (billions of dollars in this case) to be made without any community consultation, only for people to justify it afterwards because at face value it “makes sense.” I’m not saying that I don’t see potential value in purchasing Twitter, but I don’t think a huge decision like that should be justified based on quick, post-hoc judgements. If SBF wanted to buy Twitter for non-EA reasons, that’s one thing, but if the idea here is that purchasing Twitter alongside Elon Musk is actually worth billions of dollars from an EA perspective, I would need to see way more analysis, much like significant analysis has been done for AI safety, biorisk, animal welfare, and global health and poverty. (We’re a movement that prides itself on using evidence and reason to make the world better, after all.)
Oh, to be clear, I think Will fucked up pretty badly here. I just don’t think any policy that tries to prevent even very influential and trusted people in EA talking to other people in private about their honest judgement of other people is possibly a good idea. I think you should totally see this as a mistake and update downwards on Will (as well as EAs willingness to have him be as close as possible to a leader as we have), but I think from an institutional perspective there is little that should have been done at this point (i.e. all the mistakes were made much earlier, in how Will ended up in a bad epistemic state, and maybe the way we delegate leadership in the first place).
Thanks for clarifying that—that makes more sense to me, and I agree that there was little that should have been done at that specific point. The lead-up to getting to that point is much more important.
I can see where you’re coming from with this, and I think purely financially you’re right, it doesn’t make sense to think of it as billions of dollars ‘down the drain.’
However, if I were to do a full analysis of this (in the framing of this being a decision based on an EA perspective), I would want to ask some non-financial questions too, such as:
Does the EA movement want to be further associated with Elon Musk than we already are, including any changes he might want to make with Twitter? What are the risks involved? (based on what we knew before the Twitter deal)
Does the EA movement want to be in the business of purchasing social media platforms? (In the past, we have championed causes like global health and poverty, reducing existential risks, and animal welfare—this is quite a shift from those into a space that is more about power and politics, particularly given Musk’s stated political views/aims leading up to this purchase)
How might the EA movement shift because of this? (Some EAs may be on board, others may see it as quite surprising and not in line with their values.)
What were SBF’s personal/business motivations for wanting to acquire Twitter, and how would those intersect with EA’s vision for the platform?
What trade offs would be made that would impact other cause areas?
My reading (and of course I could be completely wrong) is that SBF wanted to invest in Twitter (he seems to have subsequently pitched the same deal through Michael Grimes), and Will was helping him out. I don’t imagine Will felt it any of his business to advise SBF as to whether or not this was a good move. And I imagine SBF expected the deal to make money, and therefore not to have any cost for his intended giving.
I agree that it’s possible SBF just wanted to invest in Twitter in a non-EA capacity. My comment was a response to Habryka’s comment which said:
I think it could be a cost-effective use of $3-10 billion (I don’t know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it’s not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam’s net-worth at face-value at the time, this didn’t seem like a crazy idea to me.
If SBF did just want to invest in Twitter (as an investor/as a billionaire/as someone who is interested in global politics, and not from an EA perspective) and asked Will for help, that is a different story. If that’s the case, Will could still have refused to introduce SBF to Elon, or pushed back against SBF wanting to buy Twitter in a friend/advisor capacity (SBF has clearly been heavily influenced by Will before), but maybe he didn’t feel comfortable with doing either of those.
As with any social movement, people disagree about the best ways to take action. There are many critiques of EA which you should read to get a better idea of where others are coming from, for example, this post about effective altruism being an ideology, this post about someone leaving EA, this post about EA being inaccessible, or this post about blindspots in EA/rationalism communities.
Even before SBF, many people had legitimate issues with EA from a variety of standpoints. Some people find the culture unwelcoming (eg. too elitist/not enough diversity), some people take issue with longtermism (eg. too much uncertainty), others disagree with consequentialism/utilitarianism, and still others are generally on board but find more specific issues in the way that EA approaches things.
Post-SBF it’s difficult to say what the full effects will be, but I think it’s fair to say that SBF represents what many people fear/dislike about EA (eg. elitism, inexperience, ends-justifies-the-means reasoning, tech-bro vibes, etc). I’m not saying these things are necessarily true, but most people won’t spend hundreds of hours engaging with EA to find out for themselves. Instead, they’ll read an article in the New York Times about how SBF committed fraud and is heavily linked to EA and walk away with a somewhat negative impression. That isn’t always fair, but it also happens to other social movements like feminism, Black Lives Matter, veganism, environmentalism, etc. EA is no exception, and FTX/SBF was a big enough deal that a lot of people will choose not to engage with EA going forward.
Should you care? I think to an extent, yes—you should engage with criticisms, think through your own perspective, decide where you agree/disagree, and work on improving things where you think they should be improved going forward. We should all do this. Ignoring criticisms is akin to putting your fingers in your ears and refusing to listen, which isn’t a particularly rational approach. Many critics of EA will have meaningful things to say about it and if we truly want to figure out the best ways to improve the world, we need to be willing to change (see: scout mindset). That being said, not all criticisms will be useful or meaningful, and we shouldn’t get so caught up in the criticism that we stop standing for something.
I think the usefulness of deferring also depends on how established a given field is, how many people are experts in that field, and how certain they are of their beliefs.
If a field has 10,000+ experts that are 95%+ certain of their claims on average, then it probably makes sense to defer as a default. (This would be the case for many medical claims, such as wearing masks, vaccinations, etc.) If a field has 100 experts and they are more like 60% certain of their claims on average, then it makes sense to explore the available evidence yourself or at least keep in mind that there is no strong expert consensus when you are sharing information.
We can’t know everything about every field, and it’s not reasonable to expect everyone to look deeply into the arguments for every topic. But I think there can be a tendency for EAs to defer on topics where there is little expert consensus, lots of robust debate among knowledgeable people, and high levels of uncertainty (eg. many areas of AI safety). While not everyone has the time to explore AI safety arguments for themselves, it’s helpful to keep in mind that, for the most part, there isn’t a consensus among experts (yet), and many people who are very knowledgeable about this field still carry high levels of uncertainty about their claims.
People choose whom they date and befriend—no-one is forcing EAs to date each other, live together, or be friends. EAs associate socially because they share values and character traits.
To an extent, but this doesn’t engage with the second counterpoint you mentioned:
2. The work/social overlap means that people who are engaged with EA professionally, but not part of the social community, may miss out on opportunities.
I think it would be more accurate to say that there are subtle pressures that heavily encourage EAs to date each other, live together, and be friends (I removed the word ‘force’ because ‘force’ feels a bit strong here). For example, as you mentioned, people working/wanting to work in AI safety are aware that moving to the Bay Area will open up opportunities. Some of these opportunities are quite likely to come from living in an EA house, socialising with other EAs, and, in some cases, dating other EAs. For many people in the community, this creates ‘invisible glass ceilings,’ as Sonia Joseph put it. For example, a woman is likely to be more put off by the prospect of living in an EA house with 9 men than another man would be (and for good reasons, as we saw in the Times article). It is not necessarily the case that everyone’s preference is living in an EA house, but that some people feel they will miss opportunities if they don’t. Likewise, this creates barriers for people who, for religious/cultural reasons, can’t or don’t want to have roommates who aren’t the same gender, people who struggle with social anxiety/sensory overload, or people who just don’t want to share a big house with people that they also work and socialise with.
If you’re going to talk about the benefits of these practices, you also need to engage with the downsides that affect people who, for whatever reason, choose not to become a part of the tight-knit community. I think this will disproportionately be people who don’t look like the existing community.
I disagree-voted on this because I think it is overly accusatory and paints things in a black-and-white way.
There were versions of the above proposal which were not contentless and empty, which stake out clear and specific positions, which I would’ve been glad to see and enthusiastically supported and considered concrete progress for the community.
Who says we can’t have both? I don’t get the impression that EA NYC wants this to be the only action taken on anti-racism and anti-sexism, nor did I get the impression that this is the last action EA NYC will take on this topic.
But by just saying “hey, [thing] is bad! We’re going to create social pressure to be vocally Anti-[thing]!” you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to “oh, so you’re NOT opposed to bigotry, huh?”
I don’t think this is the case—I, for one, am definitely not assuming that people who chose not to sign are unopposed to bigotry; I can think of plenty of other reasons why people might not have wanted to sign this.
The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing.
I can think of better outcomes than that—the next time there is a document or initiative with a bit more substance, here’s a big list of people who will probably be on board and could be contacted. The next time a journalist looks through the forum to get some content, here’s a big list of people who have publicly declared their commitment to anti-racism and anti-sexism. The next time someone else makes a post delving into this topic, here’s some community builders they can talk to for their stance on this. There’s nothing inherently wrong with symbolic gestures as long as they are not in place of more meaningful change, and I don’t get the sense from this post that this will be the last we hear about this.
Thanks for posting this—it was an interesting and thoughtful read for me as a community builder.
This summarised some thoughts I’ve had on this topic previously, and the implications on a large scale are concerning at the very least. In my experience, EA’s growth over the past couple of years has meant bringing on a lot of people with specific technical expertise (or people who are seeking to gain this expertise) such as those working on AI safety/biorisk/etc, with a skillset that would broadly include mathematics, statistics, logical reasoning, and some level of technical expertise/knowledge of their field. Often (speaking anecdotally here) these would be the type of people who:
are really good at working on detailed problems with defined parameters (eg. software developers)
are very open to hearing things that challenge or further their existing knowledge, and will seek these things out
will be easily persuaded by good arguments (and probably unlikely to push back if they find the arguments mostly convincing)
These people are pretty easy for community builders to deal with because there is a clear, forged pathway defined in EA for these people. Community builders can say, “Go do a PhD in biorisk,” or “There’s a job open at DeepMind, you should apply for it,” and the person will probably go for it.
On the other hand, there are a whole range of people who don’t have the above traits, and instead have one (or more) of the following traits:
prefer broader, messier problems (eg. policy analysts) and are not great at working on detailed problems within defined parameters (or maybe less interested in these types of problems)
are somewhat open to hearing things that challenge or further their existing knowledge, but might not continue to engage if they initially find something off-putting
can be persuaded to accept new arguments, but are more likely to push back, hold onto scepticism for longer, and won’t accept something simply because it is the commonly held view, even if the arguments for it are generally good
These people are harder for community builders to deal with as there is not a clear forged pathway within EA, and they might also be less convinced by the pathways that do exist. (For example, maybe if someone has these traits a community builder might push them towards working in AI policy, but they might not be as convinced that working in AI policy is important, or that they personally can make a big difference in the field, and they won’t be as easily persuaded to apply for jobs in AI policy.) These people might also feel a bit lost when EAs try to push them towards high-impact work—they see the world in greyer terms, they carry more uncertainty, and they are more hesitant to go “all in” on a specified career path.
I think there is a great deal of value that can be derived if EA can find ways to engage with people with these traits, and I also think people with at least one of these traits are probably more likely to fall into the categories that you highlighted in your post – government/policy experts, managers, cause prioritizers (can’t think of a better title here), entrepreneurs, and people with high social/emotional skills. These are people who like big, messy, broad problems and who may generally take more time to accept new ideas and arguments.
In my community-building role, I want to attract and keep more of these people! I don’t have good answers for how to do this (yet), but I think being aware of the issue and trying to figure out some possible ways in which more people with these skills can be brought on board (as well as trying to figure out why EA might be off-putting to some of these people) is a great start.