ex-CEA
Minimally passive community building work in Malaysia
Hi Benjamin, I run EA Virtual Programs. Thanks for sharing about your project! I don’t have a lot of time to think too deeply about your project, but here are my quick impressions (caveat: this is my personal opinion and not of my employer):
1. I worry about fidelity. I know you’re hoping to get certification from your university, but the four courses you listed don’t seem relevant.
2. I worry that your “creating more EAs” goal might be Goodharted.
3. I worry that you’re not tracking risks to the wider movement well. You didn’t mention how your project might impact the EA movement negatively.
Otherwise, it seems like you have some strengths and a good track record in pedagogy and training. This seems like an important skillset to have.
I echo Alex Mallen’s suggestion to talk to more community builders to get a sense of risks and the needs of the wider movement. And I do appreciate that you took time to write down your thoughts!
Another thing I’ve noticed—folks from elite cultures seem less inclined to mix and hang out with non-elite cultures.
Somewhat adjacent to your “culture clash” segment. I’ve noticed folks from “perceived-to-be-higher-status-cultures” hijacking (probably unconsciously) norms or spaces where there are more folks from “perceived-to-be-lower-status-cultures”.
A few people have mentioned buckets (1, 2) as a way to segment different parts of your life. Each bucket has a corresponding goal or set of goals that you spend resources on. Since we all have many different goals, it’s a useful exercise to distribute resources between them accordingly, so one bucket doesn’t “eat” into another bucket’s resources. For example, you might have a bucket for your close friends, in which you spend a few hours a week cultivating genuine and happy friendships but no more, since you have other important buckets (e.g. career, health, family, etc.).[1]
However, if your buckets are not mutually exclusive and collectively exhaustive enough, you might run into issues where you label activities with the wrong buckets, creating more tension between your different goals.
A corollary to this is my claim that EAs should try to have both a “serious EA” bucket and a “fun EA” bucket.
“Serious EA” means trying to apply EA principles genuinely and taking significant action, like donating to effective charities or working in an EA org.
“Fun EA” means the more casual and social aspect of EA, like going to social meetups or volunteering.
For example, there’s a local EA event that I like to help out with, and where I get to spend time with EAs. Sometimes, I accidentally mistake this activity as belonging to my “serious EA” bucket, rather than my “fun EA” bucket or my general “volunteer for fun” bucket.
How did this happen? Maybe because it’s so easy to treat any kind of EA activity as always maximising impact by default (e.g. I have gone all out at EAGs when I should have taken them slightly more casually). Or maybe I want to signal to others that I care about effectiveness (e.g. being a community builder means modelling good applications of EA principles). Or maybe I’m unconsciously working to build status, differentiate the in-group from the out-group, or all of the above.
This has come out in ways that worked against me:
Spending too many resources on volunteering, so I now have fewer resources for my “serious EA” bucket, and I feel more unhappy about it.
Giving off overly serious and responsible vibes, when the occasion should be a bit more casual and fun.
Newer, aspiring EAs might observe and learn that they “should” be more serious, but in the wrong contexts.
Hence, keeping these two buckets separate seems more conducive to a more productive and happy life. However, I also feel uncertain about how useful or true my claim is.
I’ve observed that some people (including me sometimes) are able to have fun and be serious at around the same time, which indicates some fast organic switching of buckets.
I also think that treating certain EA volunteer opportunities as a genuine exercise for people to apply EA principles seriously seems like a good idea. I know some people (including me) who practiced applying EA principles while volunteering, and learned a lot along the way.
Perhaps there are other buckets that should be included.
“Buckets” are just a reframing of a term that has been used similarly in many other contexts. I first learned about “life areas” from Alex Vermeer.
This might just be an extension of the “community building” aptitudes, but here’s another potential aptitude.
“Education and training” aptitudes
Basic profile: helping people absorb crucial ideas and the right skills efficiently, so that we can reduce talent/skills bottlenecks in key areas.
Examples:
Introductory EA program, in-depth EA fellowship, The Precipice reading group, AI safety programmes, alternative protein programmes, operations skills retreat, various workshops organised in EAGs/EAGxs, etc
How to try developing this aptitude:
I’ll split these into three areas: (a) pedagogical knowledge, (b) content knowledge, and (c) operations.
(a) Pedagogical knowledge
This is the specific knowledge you learn and the skills you develop to teach effectively or help others learn more effectively. Examples: breaking down learning objectives into digestible chunks, designing effective and engaging learning experiences, creating and presenting content, and (EDIT) measuring whether your students are actually learning.
This could be applied to classroom/workshop settings, reading and discussion groups, career guides, online courses, etc.
You can pick up knowledge and skills either:
formally: teaching courses, meta-learning courses, working as a teaching assistant
- or informally: helping others learn
(b) Content knowledge
This is knowledge specific to the domain you want others to learn. If you’re teaching the English alphabet, you need to know what it is (symbols that you can rearrange to create meanings and associations with physical or abstract things), why it’s relevant (so you share a common language with others to learn and communicate with), and how to apply it (“m”+”o”+”m” is mom!).
You don’t always need to be an expert in this, but it helps a lot if you’re above average at it.
(c) Operations
A big (but sometimes forgotten) part of organising classrooms, discussion groups, or workshops is that things need to run smoothly (or within expected parameters) to reduce friction in the learning experience. It also helps to understand the different trade-offs of running an education project (i.e. quality of learning vs. students’ capacity vs. educator’s capacity vs. financial cost).
You can pick up knowledge and skills either:
formally: operations courses, project management courses, productivity books
- or informally: learning from “that friend who usually gets things done and is generally reliable”
On track?
It’s hard to generalise since there are so many different models (e.g. classrooms, online courses, discussion groups) of how to educate/train a person, and each model requires a different way of thinking. Here’s my rough take:
Level 1: you get positive feedback from others when you explain and teach a certain topic informally (e.g. with friends over dinner, in a homework group, helping students as a teaching assistant during office hours).
Level 2: you get positive feedback when facilitating discussions.
Level 3: you get positive feedback when teaching a workshop.
Level 4 (you’re likely on track here): you get positive feedback when teaching and running a course, online course, or lecture series with more than 50 participants.
most prominently transforming LessWrong into something that looks a lot more respectable in a way that I am worried might have shrunk the overton window of what can be discussed there by a lot, and having generally contributed to a bunch of these dynamics
Would you mind sharing a bit more of what you mean here?
I’m not sure I understand how an increase in LessWrong’s respectability equates to a shrinking Overton window. I would have guessed the opposite: an increase in respectability would have shifted or expanded the Overton window in ways that are more epistemically desirable. But I feel like I’m missing something here.
Also, I feel appreciative that you’ve shared a bunch of concerns and learnings with us.
Strong upvote
In regards to what I meant by “short term AI capabilities”, I was referring to prosaic AGI: potentially powerful AI systems that use current techniques rather than hypothetical new ideas about how intelligence works. When you mentioned “I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using ‘essentially current techniques’”, I took it as prosaic AGI too, but you might have meant something else.
I’ve reread all the write-ups, and you’re right that they don’t imply that “research on short term AI capabilities is potentially impactful in the long term”. I really jumped the gun there. Thanks for letting me know!
I’ve rephrased the problematic part to the following:
“Singapore’s AI research is focused more on current techniques. If you think we need new ideas about how intelligence works to tackle AI alignment issues, then Singapore is not a good country for that. However, if you think prosaic AGI [link to Paul’s Medium article] is a strong possibility, then working on AI alignment research in Singapore might be good.”
If you feel like this rephrasing is still problematic, please do let me know. I don’t have a strong background in AI alignment research, so I might have misunderstood some parts of it.
Thanks for writing this up! Some rough thoughts about the LMIC category:
1. I think LMIC is a pretty useful category insofar as it’s used to mean “non-high-income countries”.
2. Otherwise, I worry that folks might conflate LMICs with just “low income countries”, when most countries in the LMIC category are lower to upper middle income (or developing).
3. I have a light preference for separating LMICs into two categories: “least developed countries” and “middle income countries”.
Fewer tailor-made events and more consistent, simple meetups (socials, YT watch parties, etc.).
Less tailor-made targeted outreach and more advertising.
Looking at the comments, it seems like CEA has changed a lot over the years!
This may be too broad, but in CEA’s list of team values, which values has CEA as a whole done well on? And which ones do you think the team wants to prioritise improving?
Possibly! Fingers crossed for that. :)
Hi Alexander, thanks for writing this up!
Some context. I used to use Anki for 1-2 years. Completed the “Learning How to Learn” MOOC and read the book it was based on. Taught 13-16 year olds math and English for 2 years. Conducted EA presentations in Malaysia and previously in Singapore. Currently running EA Virtual Programs (I noticed that you’re in the intro program!). FYI, my opinions are mine and not CEA’s.
In conjunction with “learning how to learn better”, “learning how to prioritise which learning strategy works for specific scenarios” seems just as important. It’s really hard to know:
The value of information
The value of easy retrieval of information beforehand.
I think for many of us, time is likely one of the biggest bottlenecks to better learning. For example, I really, really want to apply a lot of the meta-learning tools while reading The Happiness Trap, but I intuitively chose to do just two things:
Read and take summarised notes.
Write down how I want to practice the ACT therapy techniques from the book.
In my case, I don’t think deep learning (e.g. writing notes, creating spaced repetition notes, reflecting, doing exercises, discussing, etc.) is what I need, considering how busy life is for me now. My end goal is more sustainable mental health, and I want to apply the tools I’ve read in the book. The value of information seems high here for achieving my goal, but the value of easy retrieval of information is low because I don’t know how or when I’m going to use it.
But again, it’s hard to know whether certain information is valuable and should be easily retrievable. One failure mode is not being able to make a connection with something else important because I didn’t do enough deep learning. For instance, if I didn’t fully understand the concept of “cognitive fusion”, I might forgo a potential connection with another therapy technique that could help me more. But it’s really hard to know for sure beforehand.
Applying this to EA VP, I wonder if there are certain key learning outcomes that participants should really internalise and do a lot of deep learning on; and whether there are other, less important learning outcomes where reading and remembering fuzzy impressions is enough for most participants.
That makes me think we should be as clear as we can about the value of information and the value of easy retrieval of information for most of our learning outcomes, so that participants can say, for example, “oh, EA VP says X is super important and I’ll likely need it in future work, so I should do more deep learning here. And Y isn’t so important, so I’ll just read it.”
Besides these two things, I wonder if there’s a simpler heuristic for choosing when one should prioritise doing deep learning versus prioritise doing shallow learning. Or something in the middle, which is the likelier case.
Hi Brian! Thanks for your response. I’ll be using “we” (as a team) to address most of your comments, and “I” at the end to address one point.
I think it would be a lot better though if you had “problem profiles” like 80,000 Hours’s for those causes you listed, especially the top 2-4 causes.
Yes, if there is a case for conducting further research, we are definitely considering deeper research into the top causes and producing “problem profiles”.
Or if not making full problem profiles, putting a few sentences or bullets about the scale and neglectedness of each of the causes would help.
We realised that the last point in our disclaimer didn’t make clear an additional related issue, which addresses this concern of yours. We didn’t detail which pieces of evidence or arguments led us to give a certain score. Technically we did (it’s probably somewhere in our meeting minutes, and it’s very messy), hence we’ve decided not to address this issue at this time. However, if we were to conduct research like this again, we definitely want to be better at making our assumptions, evidence, and arguments explicit.
The 2 that I think are very questionable though are financial literacy and improving diversity and inclusion. I don’t see why these two could be in the top 8 causes for Malaysia. Maybe one of you could make the case for why these two causes are very impactful to work on, especially compared to other alternatives I list below?
We actually found a huge variance in scores for the above two cause areas in both the initial ranking stage and the weighted factor model stage. So some of us on the team do agree with you that these cause areas shouldn’t be in the top 8. It also might be the case that we didn’t brainstorm enough cause areas that could have reached the top 8.
As a side note, most of us on the team have strong feelings about diversity and inclusion issues in Malaysia (although some of us did give this cause area a lower score, we weren’t that surprised it made the top 8). In a nutshell, issues of race and religion have often been used as a dividing force at the legislative, political, and social levels throughout much of Malaysia’s modern history.
On a personal note, I wouldn’t be surprised if these two cause areas actually do drop out in the next iteration of research (unless there’s really convincing evidence of a cost-effective intervention).
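In case “weighted factor model” is unfamiliar: here’s a minimal sketch of how that kind of ranking works. All the criteria, weights, and scores below are made-up illustrations for the sake of the example, not our actual research data.

```python
# Minimal sketch of a weighted factor model for cause prioritisation.
# Weights and scores are hypothetical, purely for illustration.

# Each criterion gets a weight reflecting how much it matters (weights sum to 1).
criteria_weights = {"scale": 0.4, "neglectedness": 0.3, "tractability": 0.3}

# Each cause gets a 1-10 score per criterion (made-up numbers).
cause_scores = {
    "financial literacy": {"scale": 5, "neglectedness": 6, "tractability": 7},
    "diversity and inclusion": {"scale": 7, "neglectedness": 4, "tractability": 4},
}

def weighted_score(scores, weights):
    # Weighted sum of the criterion scores for one cause.
    return sum(weights[c] * scores[c] for c in weights)

# Rank causes from highest to lowest weighted score.
ranking = sorted(
    cause_scores,
    key=lambda cause: weighted_score(cause_scores[cause], criteria_weights),
    reverse=True,
)
```

The large variance we mention comes from each team member filling in their own score table like `cause_scores` above, then comparing the resulting rankings.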
Would love to check out EA PH’s cause prioritisation report soon! :)
That’s great! Thanks again for the feedback.
Agree with this! I can definitely see that there’s some kind of fine-tuning you can do, like making it less challenging so your motivation and probability of success go up.
A low-energy version of this could be a co-working retreat
Oh interesting! I see a few examples of this when Googling. If you have a go-to resource for organising this, would love to check it out.
An animal welfare one! But more heavily modified for an amateur philosophy audience.
(Weakly held personal opinion) I would go further and say that you attract people like you.[1] If what you or your core group signal most to outsiders are your community building (or marketing) qualities, you’re likely to attract folks who are also keen on community building (and put off folks who are keen on the object-level work you’re recruiting for).
Here’s an intuition pump I have. Imagine two EA uni group websites that are exactly the same except for one difference in their profile page:
Website A showcases students who have internships at orgs solving x-risk issues, have co-published a paper on cost-effective poverty interventions, have written a series of blog posts on effective animal advocacy, etc.
Website B showcases students who have basically none of the above.
I feel pretty confident that A will attract the right kinds of people into EA.
I also feel somewhat confident that B will be net negative. I could imagine each cohort of students coming into B getting worse in quality each year, until it becomes a “Ponzi-scheme-ish” entity.
https://medium.com/@michelbachmann/start-with-who-15b8857ed718