I’m an experienced policy advisor currently living in New Zealand.
S.E. Montgomery
I have a couple of thoughts here, as a community builder and as someone who has thought similar things to what you’ve outlined.
I don’t like the idea of bringing people into EA based on false premises. It feels weird to me to ‘hide’ parts of EA from newcomers. However, I think the considerations involved are more nuanced than this. When I have an initial conversation with someone about what EA is, I find it difficult to capture everything in a way that comes across as sensible. If I say, “EA is a movement concerned with finding the most impactful careers and charitable interventions,” to many people I think this automatically comes across as concerning issues of global health and poverty. ‘Altruism’ is in the name, after all. I don’t think many people associate the word ‘altruism’ with charities aimed at ensuring that artificial intelligence is safe.
If I foreground concerns about AI and say, “EA is a movement aimed at finding the most impactful interventions… and one of the top interventions that people in the community care about is ensuring that artificial intelligence is safe,” that also feels like it’s not really capturing the essence of EA. Many people in EA primarily care about issues other than AI, and summarising EA in this way to newcomers is going to turn off some people who care about other issues.
The idea that AI could be an existential risk is (unfortunately) just not a mainstream idea yet. Over the past several months it seems to have been talked about a lot outside of EA, but prior to that, very few major media organisations/celebrities brought attention to it. So from my point of view, I can understand community builders wanting to warm people up to the idea. A minority of people will be convinced by hearing good arguments for the first time. Most people (myself included) need to hear something said again and again in different ways in order to take it seriously.
You might say that these are really simplistic ways of talking about EA, and there’s a lot more I could say than a couple of simple sentences. That’s true, but in many community building circumstances, a couple of sentences is all I am going to get. For example, when I’ve run clubs fair booths at universities, many students just want a short explanation of what the group stands for. When I’ve interacted with friends or family members who don’t know what EA is, most of the time I get the sense that they don’t want a whole spiel.
I also think it is not necessarily a ‘persuasion game’ to think about how to bring more people on board with an idea—it is thinking seriously about how to communicate ideas in an effective way. Communication is an art form, and there are good ways to go about it and bad ways to go about it. Celebrities, media organisations, politicians, and public health officials all have to figure out how to communicate their ideas to the public, and it is often not as simple as ‘directly stating their actual beliefs.’ Yes, I agree we should be honest about what we think, but there are many different ways to go about this. For example, I could say, “I believe there’s a decent chance AI could kill us all,” or I could say, “I believe that we aren’t taking the risks of AI seriously enough.” Both of these communicate a similar idea, but will be taken quite differently.
Thanks for posting this! I agree, and one thing I’ve noticed while community building is that it’s very easy to give career direction to students and very early-career professionals, but much more challenging with mid/late-career professionals. Early-career people seem more willing to experiment/try out a project that doesn’t have great support systems, whereas mid/late-career people have much more specific ideas about what they want out of a job.
Entrepreneurship is not for everyone, and being advised to start your own project with unclear parameters and outcomes often has low appeal to people who have been working for 10+ years in professions with meaningful structure, support, and reliable pay. (It often has low appeal to students/early-career professionals too, but younger people seem more willing to try.) I would love to see EA orgs implement some of the suggestions you mentioned.
We already have tons of implicit norms that ask different behaviours of men and women, and these norms are the reason why it’s women, rather than men, coming forward to say they feel uncomfortable. There are significant differences in how men and women approach dating in professional contexts and how they see power dynamics, as well as in the ratio of men to women in powerful positions (and the gender ratio in EA generally). Drawing attention to these differences and discussing new norms that ask for different behaviours from men in these contexts (and from the institutions/systems that these men interact with) is necessary to prevent these situations from happening in the future.
Something about this comment rubbed me the wrong way. EA is not meant to be a dating service, and while there are many people in the community who are open to the idea of dating someone within EA or actively searching for this, there are also many people who joined for entirely different reasons and don’t consider this a priority/don’t want this.
I think that viewing the relationship between men and women in EA this way—eg. men competing for attention, where lonely and desperate men will do what it takes to get with women—does a disservice to both genders. It sounds like a) an uncomfortable environment for women to join, because they don’t want to be swarmed by a bunch of desperate men, and b) an uncomfortable environment for men, because to some extent it seems to justify men doing more and more to get the attention of women, often at the cost of women being made to feel uncomfortable. (And many men in EA do not want women to feel uncomfortable!)
Let’s zoom out a bit. To me, it’s not that important that everyone in EA gets a match. I find the gender imbalance concerning for lots of reasons, but ‘a lack of women for men to match with’ is not on my list of concerns. Even if there were a perfect 50/50 balance of men and women, I think there would still be lonely men willing to abuse their power. (Like you said, many women come into the movement already in relationships, some men/women do not want to date within the movement, and some people are unfortunately just not people others want to date.) So the problem is not the lack of women, but rather the fact that men in powerful positions are either blind to their own power, or can see their power and are willing to abuse that power, and there are not sufficient systems in place to prevent this from happening, or even to stop it once it has happened.
I disagree-voted on this because I think it is overly accusatory and paints things in a black-and-white way.
There were versions of the above proposal which were not contentless and empty, which stake out clear and specific positions, which I would’ve been glad to see and enthusiastically supported and considered concrete progress for the community.
Who says we can’t have both? I don’t get the impression that EA NYC wants this to be the only action taken on anti-racism and anti-sexism, nor did I get the impression that this is the last action EA NYC will take on this topic.
But by just saying “hey, [thing] is bad! We’re going to create social pressure to be vocally Anti-[thing]!” you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to “oh, so you’re NOT opposed to bigotry, huh?”
I don’t think this is the case—I, for one, am definitely not assuming that people who chose not to sign did so because they are not opposed to bigotry. I can think of plenty of other reasons why someone might not have wanted to sign this.
The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing.
I can think of better outcomes than that—the next time there is a document or initiative with a bit more substance, here’s a big list of people who will probably be on board and could be contacted. The next time a journalist looks through the forum to get some content, here’s a big list of people who have publicly declared their commitment to anti-racism and anti-sexism. The next time someone else makes a post delving into this topic, here’s some community builders they can talk to for their stance on this. There’s nothing inherently wrong with symbolic gestures as long as they are not in place of more meaningful change, and I don’t get the sense from this post that this will be the last we hear about this.
People choose whom they date and befriend—no-one is forcing EAs to date each other, live together, or be friends. EAs associate socially because they share values and character traits.
To an extent, but this doesn’t engage with the second counterpoint you mentioned:
2. The work/social overlap means that people who are engaged with EA professionally, but not part of the social community, may miss out on opportunities.
I think it would be more accurate to say that there are subtle pressures that do heavily encourage EAs to date each other, live together, and be friends (I removed the word ‘force’ because ‘force’ feels a bit strong here). For example, as you mentioned, people working/wanting to work in AI safety are aware that moving to the Bay Area will open up opportunities. Some of these opportunities are quite likely to come from living in an EA house, socialising with other EAs, and, in some cases, dating other EAs. For many people in the community, this creates ‘invisible glass ceilings,’ as Sonia Joseph put it. For example, a woman is likely to be more put off by the prospect of living in an EA house with 9 men than another man would be (and for good reasons, as we saw in the Times article). It is not necessarily the case that everyone’s preference is living in an EA house, but some people feel they will miss opportunities if they don’t. Likewise, this creates barriers for people who, for religious/cultural reasons, can’t or don’t want to have roommates who aren’t the same gender, people who struggle with social anxiety/sensory overload, or people who just don’t want to share a big house with people they also work and socialise with.
If you’re going to talk about the benefits of these practices, you also need to engage with the downsides that affect people who, for whatever reason, choose not to become part of the tight-knit community. I think this will disproportionately be people who don’t look like the existing community.
I think the usefulness of deferring also depends on how established a given field is, how many people are experts in that field, and how certain they are of their beliefs.
If a field has 10,000+ experts that are 95%+ certain of their claims on average, then it probably makes sense to defer as a default. (This would be the case for many medical claims, such as wearing masks, vaccinations, etc.) If a field has 100 experts and they are more like 60% certain of their claims on average, then it makes sense to explore the available evidence yourself or at least keep in mind that there is no strong expert consensus when you are sharing information.
We can’t know everything about every field, and it’s not reasonable to expect everyone to look deeply into the arguments for every topic. But I think there can be a tendency of EAs to defer on topics where there is little expert consensus, lots of robust debate among knowledgeable people, and high levels of uncertainty (eg. many areas of AI safety). While not everyone has the time to explore AI safety arguments for themselves, it’s helpful to keep in mind that, for the most part, there isn’t a consensus among experts (yet), and many people who are very knowledgeable about this field still carry high levels of uncertainty about their claims.
As with any social movement, people disagree about the best ways to take action. There are many critiques of EA which you should read to get a better idea of where others are coming from, for example, this post about effective altruism being an ideology, this post about someone leaving EA, this post about EA being inaccessible, or this post about blindspots in EA/rationalism communities.
Even before SBF, many people had legitimate issues with EA from a variety of standpoints. Some people find the culture unwelcoming (eg. too elitist/not enough diversity), some people take issue with longtermism (eg. too much uncertainty), others disagree with consequentialism/utilitarianism, and still others are generally on board but find more specific issues in the way that EA approaches things.
Post-SBF it’s difficult to say what the full effects will be, but I think it’s fair to say that SBF represents what many people fear/dislike about EA (eg. elitism, inexperience, ends-justifies-the-means reasoning, tech-bro vibes, etc). I’m not saying these things are necessarily true, but most people won’t spend hundreds of hours engaging with EA to find out for themselves. Instead, they’ll read an article in the New York Times about how SBF committed fraud and is heavily linked to EA, and walk away with a somewhat negative impression. That isn’t always fair, but it also happens to other social movements like feminism, Black Lives Matter, veganism, environmentalism, etc. EA is no exception, and FTX/SBF was a big enough deal that a lot of people will choose not to engage with EA going forward.
Should you care? I think to an extent, yes—you should engage with criticisms, think through your own perspective, decide where you agree/disagree, and work on improving things where you think they should be improved going forward. We should all do this. Ignoring criticisms is akin to putting your fingers in your ears and refusing to listen, which isn’t a particularly rational approach. Many critics of EA will have meaningful things to say about it and if we truly want to figure out the best ways to improve the world, we need to be willing to change (see: scout mindset). That being said, not all criticisms will be useful or meaningful, and we shouldn’t get so caught up in the criticism that we stop standing for something.
Thinking that ‘the ends justify the means’ (in this case, that making more donations justifies tax evasion) is likely to lead to incorrect calculations about the trade-offs involved. It’s very easy to justify almost anything with this type of logic, which is why we should be very hesitant to rely on it.
As another commenter pointed out, tax money isn’t ‘your’ money. Tax evasion (as opposed to ‘tax avoidance’, which is legal) is stealing from the government. It would not be ethical to steal from your neighbour in order to donate the money, and likewise it is not ethical to steal from the government to donate money.
I mostly agree with your post from a purely financial perspective; I was just giving some examples where people might think the potential financial benefits of buying a house are worth the potential risks you mentioned. I’ve got a friend who falls into the example you gave (doesn’t have/plan to have children, will leave his house to charity in his will), and this doesn’t seem like that terrible of a decision for him.
EAs who will/may have children however perhaps shouldn’t buy a home as, if they do, the pressure to leave the home to their children will be very great. There’s certainly no guarantee the child will use the asset to do good.
To engage with what you said a bit more, I find this paragraph difficult to reconcile with the idea that it seems much easier to have children when you own a house. Without that, your family will likely have to move around a lot more, your children may be forced to switch schools, you might have to deal with longer commute times when moving between places, you may deal with periods of insecurity (eg. if you live in a popular city it may be difficult to find a suitable rental at some points), your financial situation could be less stable (eg. you might have to accept significantly higher rents to stay in the same house/neighbourhood, or move to a different house/neighbourhood and deal with the previous considerations), etc. All of these could impact your financial situation but also your day-to-day wellbeing/the amount of security you feel. And if a person’s living situation is more stressful I could also see this impacting their performance at work and possibly their lifetime earnings. So this doesn’t seem clear-cut to me, even though I do agree there will be more pressure to pass the house along to your children as opposed to selling it and donating the proceeds.
(Disclaimer: Some countries guarantee more rights for renters, which would negate some of the above concerns (eg. Germany), but if you live in the US/UK/Canada/Australia/etc, I would expect the above concerns to apply.)
Not saying these situations apply to the person you were replying to, but I can think of a few instances where this would be the case.
You buy a house that you think will substantially increase in value—This is never guaranteed, but there could be good reasons to think that the value of a house will increase over time, eg. the sale price seems low for what it is, you are planning to do extensive renovations, it’s in an up-and-coming area, etc.
You are bad at saving money—Buying a house forces you to put money towards an asset, whereas renting + trying to save may be more difficult for some people and they will end up spending more money than if they had a ‘locked in’ mortgage payment to make.
Buying a house would be cheaper than paying rent—In some areas, owning is cheaper month to month than renting (even when accounting for all the extra expenses that homeowners have to pay).
My reading (and of course I could be completely wrong) is that SBF wanted to invest in Twitter (he seems to have subsequently pitched the same deal through Michael Grimes), and Will was helping him out. I don’t imagine Will felt it any of his business to advise SBF as to whether or not this was a good move. And I imagine SBF expected the deal to make money, and therefore not to have any cost for his intended giving.
I agree that it’s possible SBF just wanted to invest in Twitter in a non-EA capacity. My comment was a response to Habryka’s comment which said:
I think it could be a cost-effective use of $3-10 billion (I don’t know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it’s not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam’s net-worth at face-value at the time, this didn’t seem like a crazy idea to me.
If SBF did just want to invest in Twitter (as an investor/as a billionaire/as someone who is interested in global politics, and not from an EA perspective) and asked Will for help, that is a different story. If that’s the case, Will could still have refused to introduce SBF to Elon, or pushed back against SBF wanting to buy Twitter in a friend/advisor capacity (SBF has clearly been heavily influenced by Will before), but maybe he didn’t feel comfortable with doing either of those.
I can see where you’re coming from with this, and I think purely financially you’re right, it doesn’t make sense to think of it as billions of dollars ‘down the drain.’
However, if I were to do a full analysis of this (in the framing of this being a decision based on an EA perspective), I would want to ask some non-financial questions too, such as:
Does the EA movement want to be further associated with Elon Musk than we already are, including any changes he might want to make with Twitter? What are the risks involved? (based on what we knew before the Twitter deal)
Does the EA movement want to be in the business of purchasing social media platforms? (In the past, we have championed causes like global health and poverty, reducing existential risks, and animal welfare—this is quite a shift from those into a space that is more about power and politics, particularly given Musk’s stated political views/aims leading up to this purchase)
How might the EA movement shift because of this? (Some EAs may be on board, others may see it as quite surprising and not in line with their values.)
What were SBF’s personal/business motivations for wanting to acquire Twitter, and how would those intersect with EA’s vision for the platform?
What trade-offs would be made that would impact other cause areas?
My guess is there must be some public stuff about this, though it wouldn’t surprise me if no one had made a coherent writeup of it on the internet (I also strongly reject the frame that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I have all kinds of crazy schemes for stuff that I think in-expectation beats GiveWell’s last dollar, and I haven’t written up anything close to a quarter of them, and likely never will).
Yeah, there could be some public stuff about this and I’m just not aware of it. And sorry, I wasn’t trying to say that people are only allowed to say that something ‘makes sense’ after having discussed the merits of it publicly. I was more trying to say that I would find it concerning for major spending decisions (billions of dollars in this case) to be made without any community consultation, only for people to justify it afterwards because at face value it “makes sense.” I’m not saying that I don’t see potential value in purchasing Twitter, but I don’t think a huge decision like that should be justified based on quick, post-hoc judgements. If SBF wanted to buy Twitter for non-EA reasons, that’s one thing, but if the idea here is that purchasing Twitter alongside Elon Musk is actually worth billions of dollars from an EA perspective, I would need to see way more analysis, much like significant analysis has been done for AI safety, biorisk, animal welfare, and global health and poverty. (We’re a movement that prides itself on using evidence and reason to make the world better, after all.)
Oh, to be clear, I think Will fucked up pretty badly here. I just don’t think any policy that tries to prevent even very influential and trusted people in EA talking to other people in private about their honest judgement of other people is possibly a good idea. I think you should totally see this as a mistake and update downwards on Will (as well as EAs willingness to have him be as close as possible to a leader as we have), but I think from an institutional perspective there is little that should have been done at this point (i.e. all the mistakes were made much earlier, in how Will ended up in a bad epistemic state, and maybe the way we delegate leadership in the first place).
Thanks for clarifying that—that makes more sense to me, and I agree that there was little that should have been done at that specific point. The lead-up to getting to that point is much more important.
I think it could be a cost-effective use of $3-10 billion (I don’t know where you got the $8-15 billion from, looks like the realistic amounts were closer to 3 billion). My guess is it’s not, but like, Twitter does sure seem like it has a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, and so at least to me I think if you had taken Sam’s net-worth at face-value at the time, this didn’t seem like a crazy idea to me.
The 15 billion figure comes from Will’s text messages themselves (pages 6-7). Will sends Elon a text about how SBF could be interested in going in on Twitter, then Elon Musk asks, “Does he have huge amounts of money?” and Will replies, “Depends on how you define “huge.” He’s worth $24B, and his early employees (with shared values) bump that up to $30B. I asked how much he could in principle contribute and he said: “~1-3 billion would be easy, 3-8 billion I could do, ~8-15b is maybe possible but would require financing””
It seems weird to me that EAs would think going in with Musk on a Twitter deal would be worth $3-10 billion, let alone up to 15 (especially of money that at the time, in theory, would have been counterfactually spent on longtermist causes). Do you really believe this? I’ve never seen ‘buying up social media companies’ as a cause area brought up on the EA forum, at EA events, in EA-related books, podcasts, or heard any of the leaders talk about it. I find it concerning that some of us are willing to say “this makes sense” without, to my knowledge, ever having discussed the merits of it.
I don’t know why Will vouched so hard for Sam though, that seems like a straightforward mistake to me. I think it’s likely Will did not consult anyone else, as like, it’s his right as a private individual talking to other private individuals.
I don’t agree with this framing. This wasn’t just a private individual talking to another private individual. It was Will MacAskill (whose words, beliefs, and actions are heavily tied to the EA community as a whole) trying to connect SBF (at the time one of the largest funders in EA) and Elon Musk to go in on buying Twitter together, which could have had pretty large implications for the EA community as a whole. Of course it’s his right to have private conversations with others and he doesn’t have to consult anyone on the decisions he makes, but the framing here is dismissive of this being a big deal when, as another user points out, it could have easily been the most consequential thing EAs have ever done. I’m not saying Will needs to make perfect decisions, but I want to push back against this idea of him operating in just a private capacity here.
In terms of people coming away from the post thinking that polyamory = bad, I guess I have faith in people’s ability on this forum to separate a bad experience with a community from an entire community as a whole. (Maybe not everyone holds this same faith.)
The post was written by one person, and it was their experience, but I expect by now most EAs have run into polyamorous people in their lives (especially considering that EAs on average tend to be young, male, non-religious, privileged, and more likely to attend elite universities where polyamory/discussions about polyamory might be more common) and those experiences speak for themselves. For example, I personally have met lots of polyamorous people in my life, and I’ve seen everything from perfectly healthy, well-functioning relationships to completely toxic relationships (just like monogamous relationships). So when I engaged with the post, I was thinking, “this person had a bad experience with the poly community, and it sounds terrible. I know from my own experiences that polyamorous relationships can be healthy, but unfortunately that’s not what this person experienced.”
I’m persuaded by your analogy to race, and overall I don’t want the EA community to perpetuate harmful stereotypes about any group, including polyamorous people. I think my main conflict here is I also want a world where women feel okay talking about their experiences without holding the added worry that they might not word things in exactly the right way, or that some people might push back against them when they open up (and I think you would probably agree with this).
I’m conflicted here. I completely agree with you that shitting on others’ morally-neutral choices is not ideal, but I don’t think anyone was coming away from reading that post thinking that polyamory = bad. I would hope that the people on this forum can engage thoughtfully with the post and decide for themselves what they agree/disagree with.
If someone had a bad experience with a man, and in the process of talking about it said something like, “all men suck and are immoral,” I just don’t think that is the right time or place to get into an argument with them about how they are wrong. It may not even have been coming from a place of “I actually 100% believe this”; it may have just been something thought/written in the heat of the moment while they were recounting their negative experiences. Again, there’s no “perfect victim” who is going to say things in a way you 100% agree with all the time, but IMO the place to disagree with them does not need to be while they are recounting their negative experience.
Great post! I agree with a commenter above who says that “The problem is not a lack of ideas that needs to be rectified by brainstorming—we have the information already. The problem seems to be that no one wants to act on this information.” That being said, I have a few thoughts:
Regarding the code of conduct at events, I’m hesitant to make hard-and-fast rules here. I think the reality around situations such as asking people out/hitting on people, etc, is that some people are better at reading situations than others. For example, I know couples who have started dating after meeting each other at my local EA group’s events, and I don’t think anyone would see an issue with that. The issue comes in when someone asks someone out/hits on someone and makes the other person uncomfortable in the process. That being said, not asking people out during 1:1s seems like a good norm (I’m surprised I even need to say this, to be frank), as does not touching someone unless you have explicitly asked for their consent to do so (this can apply even to something like hugs), and not making comments on someone’s appearance/facial features/body.
In terms of power structures/conflicts of interest, I would love to see us borrow more from other organisations that have good guidelines around this. I can’t think of any specific ones right now, but I know from my time working in government that there are specific processes to be followed around conflicts of interest, including consensual workplace relationships. I’m sure others can chime in with organisations that do this well.
In terms of hiring, I like what Rethink Priorities is doing. They attempt to anonymise parts of applications where possible, and ask people not to submit photos alongside their CVs. I think more could be done to encourage partially blind hiring/funding processes. For example, an employer/funder could write their first impression of someone’s application without seeing any identifying information (eg. name, age, gender, ethnicity, etc), then form a second impression after seeing it. I’m conscious that names are quite important in EA and that this could add more work to already busy grant-making organisations, but maybe there is a way to do this that would minimise additional work while also helping reduce unconscious bias.
I would also love to see more writing/information/opinions come from the top-down. For example, people who have a big voice in effective altruism could write about this more often and make suggestions for what organisations and local groups can do. We already see this a bit from CEA, but it would be great to see it from other EA orgs and thought leaders. Sometimes I get a sense that people who are higher-up in the movement don’t care about this that much, and I would love to be proven wrong.
Lastly, when it comes to engaging with posts on the forum about this topic, I was disappointed to see a recent post, in which someone wrote about their experiences in the EA NYC community, met with a lot of comments disagreeing with how the post was written/how polyamorous men were generally characterised in it. I think we should establish a norm around validating people when they have bad experiences, pointing them to the community health team, and taking steps to do better. There is no “perfect victim”—we need to acknowledge that sometimes people will have bad experiences with the community and will also hold opinions we disagree with. When they bring up their bad experience, it’s not the time to say, “not all men are like this” or “I disagree with how you went about bringing this up.”
Strong upvote. It’s definitely more than “just putting two people in touch.” Will and SBF have known each other for 9 years, and Will has been quite instrumental in SBF’s career trajectory—first introducing him to the principles of effective altruism, then motivating SBF to ‘earn to give.’ I imagine many of their conversations have centred around making effective career/donation/spending decisions.
It seems likely that SBF talked to Will about his intention to buy Twitter/get involved in the Twitter deal, at the very least asking Will to make the introduction between him (SBF) and Elon Musk. At the time, it seemed like SBF wanted to give most of his money to effective charities/longtermist causes, so it could be argued that, by using up to 15 billion to buy Twitter, that money would be money that otherwise would have gone to effective charities/longtermist causes. Given the controversy surrounding the Twitter deal, Elon Musk, and the intersection with politics, it also strikes me as a pretty big decision for SBF to be involved with. Musk had publicly talked about, among other things, letting Donald Trump back on Twitter and being a ‘free speech absolutist.’ These are values that I, as a self-identified EA, don’t share, and I would be extremely concerned if (in a world where the FTX scandal didn’t happen), the biggest funder to the EA movement had become involved in the shitshow that has been Twitter since Musk acquired it. (It seems like the only reason this didn’t happen was because SBF set off Elon Musk’s “bullshit meter,” but I digress.)
It’s hard to say how big of a role Will played here—it’s possible that SBF had narrowed in on buying Twitter and couldn’t be convinced to spend the money on other things (eg. effective charities), or that Will thought buying Twitter was actually a good use of money and was thus happy to make the introduction, or maybe that he didn’t view his role here as a big deal (maybe SBF could have asked someone else to introduce him to Musk if Will declined). Will hasn’t commented on this so we don’t know. The only reason the text messages between Will and Elon Musk became public was because Twitter filed a lawsuit against Musk.
As the commenter above said, I would consider disavowing the community if leaders start to get involved in big, potentially world-changing decisions/incredibly controversial projects with little consultation with the wider community.
Do you have evidence for this? Because there is lots of evidence to the contrary, suggesting that job insecurity negatively impacts people’s productivity as well as their physical and mental health. [1][2][3]
This goes both ways—yes, there is a chance to fund other potentially better upstarts, but by only offering short-term grants, funders also miss out on applicants who want/need more security (eg. competitive candidates who prefer more secure options, parents, people supporting family members, people with big mortgages, etc).
I think there are options here that would help both funders and individuals. For example, longer grants could be given with a condition that either party can give a certain amount of notice to end the agreement (typical in many US jobs), and many funders could re-structure to allow for longer grants/a different structure for grants if they wanted to. As long as these changes were well-communicated with donors, I don’t see why we would be stuck to a 1-year cycle.
My experience: As someone who has been funded by grants in the past, the lack of job security was a huge reason for me transitioning away from this. It’s also a complaint I’ve heard frequently from other grantees, and relying on grant funding is something that not everyone can afford to do in the first place. I’m not implying that donors need to hire people or keep them on indefinitely, but even providing grants for 2 or more years at a time would be a huge improvement over the 1-year status quo.