ex-CEA
Minimally passive community building work in Malaysia
Hmm you’re right, thanks for catching this. I think I probably have some false prior that culture war terms are more affiliated with internet memes? But yeah, makes sense that a term could be both.
It can be hard to parse where an assumption falls on the gradient between what we view as reasonable and unreasonable.
Thanks for bringing this up. I think I did update significantly towards being a bit more okay with folks being ignorant or having false assumptions. It’s tough! I would probably make the same mistakes too and would want some space to fumble around and correct myself.
Hmm I think you’re right! My advice doesn’t seem to be solving this issue.
Perhaps better advice is to just read more about the norms of the country first? And to expect such poor CCIs to happen, and allow space for meta-conversations around what’s appropriate?
Ah, thanks for pointing that out.
And I appreciate the praise!
I feel somewhat confused too. I agree that, e.g., Black African Americans are probably pretty close to the category of “West”. At the time, I was thinking about the trade-off between including people who are somewhere in the middle between West and non-West, and having a clearer demarcation between West and non-West to reduce noise.
I don’t think I have a lot of strong reasons, but I thought the clearer demarcation was more important. If folks disagree with my decision, I’m happy to hear more!
Got it, this was helpful. Thanks!
most prominently transforming LessWrong into something that looks a lot more respectable in a way that I am worried might have shrunk the overton window of what can be discussed there by a lot, and having generally contributed to a bunch of these dynamics
Would you mind sharing a bit more of what you mean here?
I’m not sure I understand how an increase in respectability in LessWrong equates to a shrinking overton window. I would have guessed the opposite—an increase in respectability would have shifted or expanded the overton window in ways that are more epistemically desirable. But I feel like I’m missing something here.
Also, I feel appreciative that you’ve shared a bunch of concerns and learnings with us.
Another thing I’ve noticed: folks from elite cultures seem less inclined to mix and hang out with folks from non-elite cultures.
Somewhat adjacent to your “culture clash” segment: I’ve noticed folks from “perceived-to-be-higher-status” cultures hijacking (probably unconsciously) norms or spaces where there are more folks from “perceived-to-be-lower-status” cultures.
Thanks for writing this up! Some rough thoughts about the LMIC category:
1. I think LMIC is a pretty useful category insofar as it’s used to mean “non-high-income countries”.
2. Otherwise, I worry that folks might conflate LMICs with just “low-income countries”, when most countries in the LMIC category are lower-middle to upper-middle income (or developing).
3. I have a light preference for separating LMICs into two categories: “least developed countries” and “middle income countries”.
A few people have mentioned buckets (1, 2) as a way to segment different parts of your life. Each bucket has a corresponding goal or set of goals that you spend resources on. Since we all have many different goals, it’s a useful exercise to distribute resources between them accordingly, so one bucket doesn’t “eat” into another bucket’s resources. For example, you might have a bucket for your close friends, in which you spend a few hours a week of your time to cultivate genuine and happy friendships but not more, since you have other important buckets (e.g. career, health, family, etc.).[1]
However, if your buckets are not sufficiently mutually exclusive and collectively exhaustive, you might end up labelling activities with the wrong buckets, creating more tension between your different goals.
A corollary to this is my claim that EAs should try to have a separate “serious EA” bucket and “fun EA” bucket.
“Serious EA” means trying to apply EA principles genuinely and taking significant action, like donating to effective charities or working in an EA org.
“Fun EA” means the more casual and social aspect of EA, like going to social meetups or volunteering.
For example, there’s a local EA event that I’d like to help out with while spending time with EAs. Sometimes, I accidentally treat this activity as something from my “serious EA” bucket, rather than from my “fun EA” bucket or my general “volunteer for fun” bucket.
How did this happen? Maybe because it’s so easy to default to treating any kind of EA activity as always maximising impact (e.g. I have gone all out at EAGs when I should have taken them slightly more casually). Or maybe I want to signal to others that I care about effectiveness (e.g. being a community builder means modelling good applications of EA principles). Or maybe I’m unconsciously working to build status, differentiate the in-group from the out-group, or all of the above.
This has come out in ways that worked against me:
Spending too many resources on volunteering, so I now have fewer resources for my “serious EA” bucket, and I feel more unhappy about it.
Giving off overly serious and responsible vibes, when the setting should be a bit more casual and fun.
Newer, aspiring EAs might observe and learn that they “should” be more serious, but in the wrong contexts.
Hence, keeping these two buckets separate seems more conducive to having a more productive and happy life. However, I also feel uncertain about how useful or true my claim is.
I’ve observed that some people (including me, sometimes) are able to have fun and be serious at around the same time, which suggests some fast, organic switching between buckets.
I also think that treating certain EA volunteer opportunities as a genuine exercise for people to apply EA principles seriously seems like a good idea. I know some people (including me) who practiced applying EA principles while volunteering, and learned a lot along the way.
Perhaps there are other buckets that should be included.
“Buckets” is just a reframing of a term that has been used similarly in many other contexts. I first learned about “life areas” from Alex Vermeer.
Hi Benjamin, I run EA Virtual Programs. Thanks for sharing about your project! I don’t have a lot of time to think too deeply about your project, but here are my quick impressions (caveat: this is my personal opinion and not of my employer):
1. I worry about fidelity. I know you’re hoping to get certification from your university, but the four courses you listed don’t seem relevant.
2. I worry that the “creating more EAs” goal you have might be Goodharted.
3. I worry that you’re not tracking risks to the wider movement well. You didn’t mention how your project might impact the EA movement negatively.
Otherwise, it seems like you have some strengths and a good track record in pedagogy and training. This seems like an important skillset to have.
I echo Alex Mallen’s suggestion to talk to more community builders to get a sense of risks and the needs of the wider movement. And I do appreciate that you took time to write down your thoughts!
(Weakly held personal opinion) I would go further and say that you attract people like you.[1] If what you or your core group is signalling most to outsiders is your community building (or marketing) qualities, you’re likely to attract folks who are also keen on community building (and put off folks who are keen on the object-level work you’re recruiting for).
Here’s an intuition pump I have. Imagine two EA uni group websites that are exactly the same except for one difference in their profile page:
Website A showcases students who have internships in orgs solving x-risk issues, have co-published a paper on cost-effective poverty interventions, have written a series of blog posts on effective animal advocacy, etc.
Website B showcases students who have basically none of the above.
I feel pretty confident that A will attract the right kinds of people into EA.
I also feel somewhat confident that B will be a net negative. I could imagine that each cohort of students coming into B gets worse in quality each year, until it becomes a “Ponzi scheme-ish” entity.
Hi Pride, I’m Yi-Yang and I run EA VP. Unfortunately, we don’t usually let folks apply late. EA VP does run programs every month so you could catch the upcoming one. The next deadline is on Sun, June 26th.
Strong upvote
A service/consultancy that calculates the value of information of research projects
Epistemic Institutions, Research That Can Help Us Improve
When undertaking any research or investigation, we want to know whether it’s worth spending money or time on it. There are a lot of research-type projects in EA, and the best way to evaluate and prioritise them is to calculate their value of information (VoI). However, VoI calculations can be complex, so we would need to build a team of experts that can form a VoI consultancy or service provider.
Examples of use cases:
1. A grantmaker wants to know whether it’s worth spending 0.5 FTE on investigating cause area Y vs cause area X.
2. A think tank has generated a list of policy ideas to investigate but is uncertain which to prioritise.
3. A research org also has a list of research questions but wants to know which one has the highest VoI.
In each of these use cases, I suspect a VoI consultancy could be extremely valuable.
David Manheim has written more about VoI here.
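To make this more concrete, here’s a minimal sketch (in Python) of one common VoI quantity, the expected value of perfect information (EVPI), applied to something like use case 1. All the distributions and numbers below are made up purely for illustration and aren’t from any real grant decision; a real consultancy would use much richer models.

```python
# Minimal EVPI sketch for a grantmaker deciding between two cause areas.
# All beliefs here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo samples

# Hypothetical beliefs about the impact (in arbitrary "value units")
# of putting the marginal grant into cause area X vs cause area Y.
value_x = rng.normal(loc=100, scale=10, size=n)       # well-understood area
value_y = rng.lognormal(mean=4.2, sigma=1.0, size=n)  # more speculative area

# Without further research: commit to whichever area looks better in expectation.
ev_no_info = max(value_x.mean(), value_y.mean())

# With perfect information: in every possible world we learn which area
# is actually better and fund that one.
ev_perfect_info = np.maximum(value_x, value_y).mean()

# EVPI is an upper bound on what any investigation (e.g. the 0.5 FTE) is worth.
evpi = ev_perfect_info - ev_no_info
print(f"EV without research:      {ev_no_info:,.1f}")
print(f"EV with perfect research: {ev_perfect_info:,.1f}")
print(f"EVPI (upper bound):       {evpi:,.1f}")
```

If the estimated EVPI (converted into dollars or impact units) comes out well below the cost of the 0.5 FTE investigation, that’s a quick signal the research probably isn’t worth doing.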
I think there might be a harder meta-problem: should we even spend time and money on calculating the VoI of certain investigations? A failure mode is where the VoI consultancy calculates the VoI of a bunch of research projects that all turn out to have very low VoI.
I guess figuring out a baseline, estimating the cost of doing VoI calculations, and having a cheap heuristic as a preliminary calculation could help, but I’m highly uncertain.
Hi Naomi! Do the participants engage with any required learning materials outside of group discussions in this version of the fellowship? Something like the usual 8-week virtual programs version.
Agree with this! I can definitely see that there’s some kind of fine-tuning you can do, like making it less challenging so your motivation and probability of success go up.
(1), (2) great points!
(3) Possibly, I definitely took some inspiration from 80K’s career planning guide too.
Thanks for the comment! Not sure if you’ve seen this, but there’s weak evidence that poor CCIs occur less frequently in EA settings than in non-EA settings.