As technicalities noted, it’s easy to see the merits of these arguments in general, but harder to see who should actually do things, and what they should do.
To summarize the below:
EA orgs already look at a wide range of causes, and the org with most of the money looks at perhaps the widest range of causes
Our community is small and well-connected; new causes can get attention and support pretty easily if someone presents a good argument, and there’s a strong historical precedent for this
People should be welcoming and curious toward people from many different backgrounds, and attempts to do more impactful work should be celebrated across many kinds of work
If this isn’t the case now, people should be better about this
If you have suggestions for what specific orgs or funders should do, I’m interested in hearing them!
My comments often look like: “When you say that ‘EA should do X’, which people and organizations in EA are you referring to?”
Open Philanthropy does more funding and research than anyone, and they work in a broad range of areas. Maybe the concrete argument here is that they should develop more of their shallow investigations into medium-depth investigations?
Rethink Priorities probably does the second-most research among EA orgs, and they also look at a lot of different topics.
Founders Pledge is probably top-five among orgs, and… again, lots of variety.
Past those three organizations, most research orgs in EA have pretty specific areas of focus. Animal Charity Evaluators looks at animal charities. GiveWell looks at global health and development interventions with strong RCT support. If you point ACE at a promising new animal charity to fund, or GiveWell at a new paper showing a cool approach to improving health in the developing world, they’d probably be interested! But they’re not likely to move into causes outside their focus areas, which seems reasonable.
After all of this, which organizations are left that actually have “too narrow” a focus? 80,000 Hours? The Future of Humanity Institute?
A possible argument here is that some new org should exist to look for totally new causes; on the other hand, Open Philanthropy already does a lot of this, and if they were willing to fund other people to do more of it, I assume they’d rather hire those people — and they have, in fact, been rapidly expanding their research team.
*****
On your example of cancer: Open Philanthropy gave a $6.5 million grant to cancer research in 2017, lists cancer as one of the areas they support on their “Human Health and Wellbeing” page, and notes it as a plausible focus area in a 2014 report. I’m guessing they’ve looked at other cancer research projects and found them somewhat less promising than their funding bar.
Aside from Open Phil, I don’t know which people or entities in EA are well-positioned to focus on cancer. It seems like someone would have to encourage existing bio-interested people to focus on cancer instead of biosecurity or neglected tropical diseases, which doesn’t seem obviously good.
In the case of a cancer researcher looking for funding from an EA organization, there just aren’t many people who have the necessary qualifications to judge their work, because EA is a tiny movement with a lot of young people and few experienced biologists.
The best way for someone who isn’t a very wealthy donor to change this would probably be to write a compelling case for cancer research on the Forum; lots of people read this website, including people with money to spend. Same goes for other causes someone thinks are neglected.
This path has helped organizations like ALLFED and the Happier Lives Institute get more attention for their novel research agendas, and posts with the “less-discussed causes” tag do pretty well here.
As far as I can tell, we’re bottlenecked on convincing arguments that other areas and interventions are worth funding, rather than willingness to consider or fund new areas and interventions for which convincing arguments exist.
*****
Fortunately, there’s good historical precedent here: EA is roughly 12 years old, and has a track record of integrating new ideas at a rapid pace. Here’s my rough timeline (I’d welcome corrections on this):
2007: GiveWell is founded
2009: Giving What We Can is founded, launching the “EA movement” (though the term “effective altruism” didn’t exist yet). The initial focus was overwhelmingly on global development.
2011: The Open Philanthropy Project is founded (as GiveWell Labs). Initial shallow investigations include climate change, in-country migration, and asteroid detection (conducted between 2011 and 2013).
2012: Animal Charity Evaluators is founded.
2013: The Singularity Institute for Artificial Intelligence becomes MIRI.
2014: The first EA Survey is run. The most popular orgs people mention as donation targets are (in order) AMF, SCI, GiveDirectly, MIRI, GiveWell, CFAR, Deworm the World, Vegan Outreach, the Humane League, and 80,000 Hours.
To be fair, the numbers look pretty similar for the 2019 survey, though they are dwarfed by donations from Open Phil and other large funders.
Depending on where you count the “starting point”, it took between 5 and 7 years to get from “effective giving should exist” to something resembling our present distribution of causes.
In the seven years since, we’ve seen:
The launch of multiple climate-focused charity recommenders (I’d argue that the Clean Air Task Force is now as well-established an “EA charity” as most of the charities GiveWell recommends)
The rise of wild animal suffering and AI governance/policy as areas of concern (adding a ton of depth and variety to existing cause areas — it hasn’t been that long since “AI” meant MIRI’s technical research and “animal advocacy” meant lobbying against factory farming when those things came up in EA)
The founding of the Good Food Institute (2016) and alternative protein becoming “a thing”
The founding of Charity Entrepreneurship and resultant founding of orgs focused on tobacco taxation, lead abatement, fish welfare, family planning, and other “unusual” causes
Open Philanthropy going from a few million dollars in annual grants to in the neighborhood of ~$200 million. Alongside “standard cause area” grants, 2021 grants include $7 million for the Centre for Pesticide Suicide Prevention, $1.5 million for Fair and Just Prosecution, and $0.6 million for Abundant Housing Massachusetts (over two years — but given that the org has a staff of one person right now, I imagine that’s a good chunk of their total funding)
Three of the ten highest-karma Forum posts of all time (1, 2, 3) discuss cause areas with little existing financial support within EA
I’d hope that all this would also generate a better social environment for people to talk about different types of work — if not, individuals need better habits.
*****
Everyone reasonably familiar with EA knows that AI safety, pandemic preparedness, animal welfare and global poverty are considered EA cause areas, whereas feminism, LGBT rights, wildlife conservation and dental hygiene aren’t.
I think that any of these causes could easily get a bunch of interest and support if someone published a single compelling Forum post arguing that putting some amount of funding into an existing organization or intervention would lead to a major increase in welfare. (Maybe not wildlife conservation, because it seems insanely hard for that to be competitive with farmed animal welfare, but I’m open to having my mind blown.)
Until that post exists (or some other resource written with EA principles in mind), there’s not much for a given person in the community to do. Though I do think that individuals should generally try to read more research outside of the EA-sphere, to get a better sense for what’s out there.
If someone is reading this and wants to try writing a compelling post about a new area, I’d be psyched to hear about it!
Or, if you aren’t sure what area to focus on, but want to embrace the challenge of opening a new conversation, I’ve got plenty of suggestions for you (starting here).
*****
However, this calculus can be somewhat incomplete, as it doesn’t take into account the personal circumstances of the particular biologist debating her career. What if she’s a very promising cancer researcher (as a result of her existing track record, reputation or professional inclinations) but it’s not entirely clear how she’d do in the space of clean meat? What if she feels an intense inner drive to work on cancer (say, because her mother died of melanoma)? These considerations should factor in when she tries to estimate her expected career-long impact.
I think that very few people in this community would disagree, at least in the example you’ve put forth.
*****
From my experience, a biologist choosing to spend her career doing cancer research would often feel inferior to other EAs choosing a more EA-stereotypic career such as pandemic preparedness or clean meat. When introducing herself in front of other EAs, she may start with an apology like “What I’m working on isn’t really related to EA”.
What if we tried more actively to let people feel that whatever they want to work on is really fine, and simply tried to support and help them do it better through evidence and reason?
This is where I agree with you, in that I strongly support “letting people feel that what they want to work on is fine” and “not making people feel apologetic about what they do”.
But I’m not sure how many people actually feel this way, or whether the way people respond to them actually generates this kind of feeling. My experience is that when people tell me they work on something unusual, I try to say things like “Cool!” and “What’s that like?” and “What do you hope to accomplish with that?” and “Have you thought about writing this up on the Forum?” (I don’t always succeed, because small talk is an imperfect art, but that’s the mindset.)
I’d strongly advocate for other people in social settings also saying things like this. Maybe the most concrete suggestion from here is for EA groups, and orgs that build resources for them, to encourage this more loudly than they do now? I try to be loud, here and in the EA Newsletter, but I’m one person :-(
*****
I think that the EA community should be a big tent for people who want to do a better job of measuring and increasing their impact, no matter what they work on.
I think that EA research should generally examine a wide range of options in a shallow way, before going deeper on more promising options (Open Phil’s approach). But EA researchers should look at whatever seems interesting or promising to them, as long as they understand that getting funded to pursue research will probably require presenting strong evidence of impact/promise to a funder.
I think that EA funding should generally be allocated based on the best analysis we can do on the likely impact of different work. But EA funders should fund whatever seems interesting or promising to them, as long as they understand that they’ll probably get less impact if they fund something that few other people in the community think is a good funding target. (Value of learning is real, and props to small funders who make grants with a goal of learning more about some area.)
I think that EA advice should try to work out what the person being advised actually wants — is it “have an impactful career in dental hygiene promotion”, or “have an impactful career, full stop”? Is it “save kids from cancer”, or “save kids, full stop”?
And I think we should gently nudge people to consider the “full stop” options, because the “follow your passions wherever they go” argument seems more common in the rest of society than it ought to be. Too many people choose a cause or career based on a few random inputs (“I saw a movie about it”, “I got into this lab and not that lab”, “I needed to pay off my student loans ASAP”) without thinking about a wide range of options first.
But in the end, there’s nothing wrong with wanting to do a particular thing, and trying to have the most impact you can with the thing you do. This should be encouraged and celebrated, whether or not someone chooses to donate to it.
Thank you, Aaron, for taking the time to write this detailed and thoughtful comment on my post!
I’ll start by saying that I pretty much agree with everything you say, especially in your final remarks—that we should be really receptive to what people actually want and advise them accordingly, and maybe try to gently nudge them into taking a more open-minded, general-impact-oriented approach (but not try to force it on them if they don’t want to).
I also totally agree that most EA orgs are doing a fantastic job at exploring diverse causes and ways to improve the world, and that the EA movement is very open-minded to accepting new causes in the presence of good evidence.
To be clear, I don’t criticize specific EA orgs. The thing I do criticize is pretty subtle, and refers more to the EA community itself—sometimes to individuals in the community, but mostly to our collective attitude and the atmospheres we create as groups.
When I say “I think we need to be more open to diverse causes”, it seems that your main answer is “present me with good evidence that a new cause is promising and I’ll support it”, which is totally fair. I think this is the right attitude for an EA to have, but it doesn’t exactly address what I’m alluding to. I’m not asking EAs to start contributing to new unproven causes themselves, but rather to be open to others contributing to them.
I agree with you that most EAs would not confront a cancer researcher and accuse her of doing something un-EA-like (and I presume many would even be kind and approach her with curiosity about the motives for her choice). But in the end, I think it is still very likely she would nonetheless feel somewhat judged. Because even if every person she meets at EA Global tries to nudge her only very gently (“Oh, that’s interesting! So why did you decide to work on cancer? Have you considered pandemic preparedness? Do you think cancer is more impactful?”), those repeated comments can accumulate into a strong feeling of unease. To be clear, I’m not blaming any of the imaginary people who met the imaginary cancer researcher at the imaginary EAG conference for having done anything wrong, because each one of them tried to be kind and welcoming. It’s only their collective action that made her feel off.
I think the EA community should be more welcoming to people who want to operate in areas we don’t consider particularly promising, even if they don’t present convincing arguments for their decisions.
In the end, I think it is still very likely she would nonetheless feel somewhat judged. Because even if every person she meets at EA Global tries to nudge her only very gently (“Oh, that’s interesting! So why did you decide to work on cancer? Have you considered pandemic preparedness? Do you think cancer is more impactful?”), those repeated comments can accumulate into a strong feeling of unease.
I like this example! It captures something I can more easily imagine happening (regularly) in the community.
One proposal for how to avoid this collective action problem would be for people to ask the same sorts of questions, no matter what area someone works on (assuming they don’t know enough to have more detailed/specific questions).
For example, instead of:
Have you considered X?
Do you think your thing, Y, is more impactful than X?
You’d have questions like:
What led you to work on Y?
And then, if they say something about impact, “Were there any other paths you considered? How did you choose Y in the end?”
What should someone not involved in Y know about it?
What are your goals for this work? How is it going so far?
What are your goals for this event? (If it’s a major event and not e.g. a dinner party)
These should work about equally well for people in most fields, and I think that “discussing the value/promise of an area” conversations will typically go better than “discussing whether a new area ‘beats’ another area by various imperfect measures”. We still have to take the second step at some point as a community, but I’d rather leave that to funders, job-seekers, and Forum commentators.
I think the EA community should be more welcoming to people who want to operate in areas we don’t consider particularly promising, even if they don’t present convincing arguments for their decisions.
Depends on the context.
Plenty of people in the EA space are doing their own thing (disconnected from standard paths) but still provide interesting commentary, ask good questions, etc. I have no idea what some Forum users do for work, but I don’t feel the need to ask. If they’re a good fit for the culture and the community seems better for their presence, I’m happy.
The difficulty comes when certain decisions have to be made — whose work to fund, which people are likely to get a lot of benefit from EA Global, etc. At that point, you need solid evidence or a strong argument that your work is likely to have a big impact.
In casual settings, the former “vibe” seems better — but sometimes, I think that people who thrive in casual spaces get frustrated when they “hit a wall” in the latter situations (not getting into a conference, not getting a grant, etc.)
In the end, EA can’t really incorporate an area without having a good reason to do so. I’d be satisfied if we could split “social EA” from “business EA” in terms of how much evidence and justification people are asked for, but we should be transparent about the difference between enjoying the community and looking for career or charity support.
I like your suggestions for questions one could ask a stranger at an EA event!
About “social EA” vs. “business EA”, I think I’d make a slightly different distinction. If you ask for someone else’s (or some org’s) time or money, then of course you need to offer good explanations for why the thing you’re offering (whether it’s your employment or some project) is worthwhile. That’s not even unique to EA. But if you’re just doing your own thing, not asking for anyone’s time or money, and simply want to enjoy the company of other EAs, then that’s the case where I think the EA community should be more welcoming and happy to just let you be.