Create a new type of post—a "First Draft" post, with its own section, "WIP". Basically like the current collaborative draft-mode editing, but public.
This could be an expansion / continuation of the "amnesty day" posts, but more ongoing and more focused on changing the culture around posting.
Looks like a Google Doc with easy commenting on specific sections, and maybe more voting options related to feedback (e.g. "needs more structure" etc.)
You can give suggestions on what people can post, e.g. "Idea bunny", "post outline", "unpolished draft", "polished draft", and give people options on the kinds of feedback they could seek, e.g. "copyediting / grammar", "tone", "structure", "fact-checking", etc.
Maybe karma-free, or with a separate karma score, so people don't worry about how it'll be taken
Maybe people who give comments and feedback can get some kind of “helper karma” and be automatically tagged when the post is published and get credit of some kind for contributing to common knowledge
Potentially have it be gated in some way or have people opt in to see it (e.g. so more engaged people opt in, and it becomes like the Facebook peer-editing group), with regular pushes to get high-karma / high-engagement forum users (especially lurkers who like and read a lot) to join
Private by default (not searchable on web) but very clear that it’s not private private (e.g. since in practice people can always screenshot and share things anyways)
Feature interesting stories about where first drafts start and the posts they become to encourage usage
Get a bunch of high-status people / active forum folks to post their drafts to get the ball rolling
You’ve written us quite the feature spec there. I’m not opposed to ambitious suggestions (at all! for real! though it is true that they’re less likely to happen), but I would find this one more compelling if it were written in the style of a user problem. I am unembarrassed to ask you for this extra product work because I know you’re a product manager. (That said, I’d understand if you didn’t want to spend any time on it without a stronger signal from us of how likely we are to act on it.)
Many EAs primarily experience EA online (both initially and as they progress on their EA journeys).
There are limited opportunities for people to practice EA principles online
The forum is visited by many people
The forum should be a place where people can actively practice EA principles
Specifically, it could be a place where collaborative truthseeking happens, but it isn’t really that today. Instead, it’s more often a place to share the results of collaborative truthseeking
Truthseeking involves:
Being wrong
Saying dumb / naïve things
Making mistakes
Appearing less intelligent than you are
Asking questions of people
Challenging people (of higher status / position than you)
Saying you were wrong publicly
Not getting defensive and being open to criticism
The forum doesn’t feel like a place where people can do those things today without some degree of reputational / career harm (or unless they invest a lot of time in legibly explaining themselves / demonstrating they’ve updated)
There are poor incentives for people to help each other collaboratively truth-seek on the Forum today. The forum can sometimes feel competitive or critical, rather than collaborative and supportive
I’ve commented previously, during the era of Forum competition posts, that it would be nice to recognize people helping each other
Edo makes the nice comment that the strategy session is one of the few forum posting events that’s not explicitly competitive
This is a practice users already have (re: the product question of whether you’re replacing an existing practice or introducing a new one—it’s easier to replace an existing practice because the replacement is more likely to be used).
1) There is already a strong culture of sharing your 101 Google Docs with an ever-changing list of reviewers, commenters, etc. I’m sure we’ve all seen at least 20-30 docs like these over time.
2) There are also some coordination attempts, like the Facebook group for editing & review
I think current solutions miss a lot of the value. I think the forum could do it better.
Better information flow & sharing
Less networking-walling (like paywalling but you’re limited by your network not ability to pay)
Lets more people see and give helpful comments
Lets more appropriate people give better comments (by incentivizing / rewarding help)
Explicitly build the forum as a place where people can feel comfortable exploring the world / being wrong together
Not OP but here are some “user problems” either I have or am pretty sure a bunch of people have:
Lots of latent, locked up insight/value in drafts
Implicitly high standards discourage posting these as normal posts, which is good for avg post quality and bad for total quality
Would want to collaborate on either an explicit idea or something tbd, but making this happen as is takes a bunch of effort
Reduces costs to getting and giving feedback
Currently afaik there’s no market where feedback buyers and sellers can meet—just ad hoc Google doc links
In principle you can imagine posts almost being written like a Wikipedia page: lots and lots of editors and commenters contributing a bit here and there
Here’s a post of mine that should be a public draft, for example. But as things stand I’d rather it be a shitty public post than a probably-perpetual private draft (if anyone wants to build on it, go to town!)
I’ve heard a new anecdote from someone actively working on an AI research project who doesn’t feel connected enough to relevant people in their domain to get feedback on it.
(also would love to hear what doubts & hesitations you have / red-team this idea more—I think devil’s definitely in the details and there are lots of interesting MVP’s here)
hehe you know i like to give y’all some things to do! would be interested to know how likely you’d be to act on it, but also happy to expand since it’s not a big lift. Not linking to all the stuff I mention to save some time.
Here’s my hypothesis of the user problem:
Personal statement (my own case)
I often want to write a post, but struggle to get it over the line, and so it remains in a half-baked state, not shared with people
I want to find the “early adopters” of my posts to give me lots of feedback when my personal circles may not have the right context and/or I don’t know who from my personal circles is available to look at a draft.
(there’s a big cognitive load / ugh field / aversion in general here e.g. whenever you have to go through a list of people to curate for e.g. inviting them to an event or asking people for favors.)
Sometimes it can be good to release things into the world even if they are low quality because then they’re out of your system and you can focus on other, more valuable ideas.
Personal experiment: I’ve posted a draft on Twitter and got some of this effect (where people not on my radar read and engage with stuff). This is mostly really good.
But, as might be obvious, it’s really not a good forum for sharing thoughts longer than 2 paragraphs.
Sometimes I don’t know what posts or ideas will resonate with people, and it’s nice to find that out early. Also, I am better able to take feedback when I haven’t invested a bunch of time in polishing and editing a draft
I also just want to share a lot of thoughts that I don’t think are post-level quality but are also a bit more thought through than shortforms (without losing the benefit of in-line commenting, suggest mode—essentially, the UX of google docs)
I found this post by Rob Bensinger collecting anonymous comments on EA from 2017, with the question prompt:
If you could magically change the effective altruism community tomorrow, what things would you change? [...] If possible, please mark your level of involvement/familiarity with EA[.]
Many still resonate today. I recommend reading the whole list, but there are a lot—so I’ve chosen a few highlights and comment exchanges I thought were particularly interesting. I’ve shortened a few for brevity (indicated by ellipses).
I don’t agree with many of these comments, but it’s interesting to see how people perceived things back then.
#28 - on people dismissing those who start as “ineffective altruists” (top voted comment with 23 karma)
I have really positive feelings towards the effective altruism community on the whole. I think EA is one of the most important ideas out there right now.
However, I think that there is a lot of hostility in the movement towards those of us who started off as ‘ineffective altruists,’ as opposed to coming from the more typical Silicon Valley perspective. I have a high IQ, but I struggled through college and had to drop out of a STEM program as a result of serious mental health disturbances. After college, I wanted to make a difference, so I’ve spent my time since then working in crisis homeless shelters. … I know that the work I’ve done isn’t as effective as what the Against Malaria Foundation does, but I’ve still worked really hard to help people, and I’ve found that my peers in the movement have been very dismissive of it.
I’m really looking to build skills in an area where I can do more effective direct work. I keep hearing that the movement is talent-constrained, but it isn’t clearly explained anywhere what the talent constraints are, specifically. I went to EA Global hoping for career advice—an expensive choice for someone in social work! -- but even talking one-on-one with Ben Todd, I didn’t get any actionable advice. There’s a lot of advice out there for people who are interested in earning to give, and for anyone who already has great career prospects, but for fuck-ups like me, there doesn’t seem to be any advice on skills to develop, how to go back to school, or anything of that kind.
When I’ve tried so hard to get any actionable advice whatsoever about what I should do, and nobody has any… that’s a movement that isn’t accessible to me, and isn’t accessible to a lot of people, and it makes me want to ragequit. …
#40 - the community should be better at supporting its members
I’m the leader of a not-very-successful EA student group. I don’t get to socialize with people in EA that much.
I wish the community were better at supporting its members in accomplishing things they normally couldn’t. I feel like almost everyone just does the things that they normally would. People that enjoy socializing go to meetups (or run meetups); people that enjoy writing blog posts write blog posts; people that enjoy commenting online comment online; etc.
Very few people actually do things that are hard for them, which means that, for example, most people aren’t founding new EA charities or thinking original thoughts about charity or career evaluation or any of the other highly valuable things that come out of just a few EA people. And that makes sense; it doesn’t work to just force yourself to do this sort of thing. But maybe the right forms of social support and reward could help.
I think that mentorship and guidance are lacking and undervalued in the EA community. This seems odd to me. Everyone seems to agree that coordination problems are hard, that we’re not going to solve tough problems without recruiting additional talent, and that outreach in the “right” places would be good. Functionally, however, most individuals in the community, most organizations, and most heads of organizations seem to act as though they can make a difference through brute force alone.
I also don’t get the impression that most EA organizations and heads of EA organizations are keen on meeting or working with new and interested people. People affiliated with EA write many articles about increasing personal productivity; I have yet to read a single article about increasing group effectiveness.
80,000 Hours may be the sole exception to this rule, though I haven’t formally gone through their coaching program, so I don’t know what their pipeline is like. CFAR also seems to be addressing some of these issues, though their workshops are still prohibitively expensive for lots of people, especially newcomers. EA outreach is great, but once people have heard about EA, I don’t think it’s clear what they should do or how they should proceed.
The final reason why I find this odd is because in most professional settings, mentorship is explicitly valued. Even high-status people who have plenty of stuff on their plate will set aside some time for service.
My model for why this is happening has two parts. First, I think there is some selection effect going on; most people in EA are self-starters who came on board and paved their own path. (That’s great and all, but do people think that most major organizations and movements got things done solely by a handful of self-starters trying to cooperate?)
Second, I think it might be the case that most people are good at doing cost-benefit analyses on how much impact their pet project will have on the world, but aren’t thinking about the multiplier effect they could have by helping other people be effective. (This is often because they are undervaluing the effectiveness of other, relatively not-high-status people.)
Reply from Daniel Eth:
Another possibility is that most people in EA are still pretty young, so they might not feel like they’re really in a position to mentor anyone.
My system-1 concerns about EA: the community exhibits a certain amount of conformism, and a general unwillingness to explore new topics. … The reason I think this is an issue is the general lack of really new proposals in EA discussion posts. … The organization that seemed to me the most promising for dealing with unknown unknowns (CFAR, who are in a unique position to develop new thinking techniques to deal with this) has recently committed to AI risk in a way that compromises the talent they could have directed to innovative EA.
Many practitioners strike me as being dogmatic and closed-minded. They maintain a short internal whitelist of things that are considered ‘EA’—e.g., working at an EA-branded organization, or working directly on AI safety. If an activity isn’t on the whitelist, the dogmatic (and sometimes wrong) conclusion is that it must not be highly effective. I think that EA-associated organizations and AI safety are great, but they’re not the only approaches that could make a monumental difference. If you find yourself instinctively disagreeing, then you might be in the group I’m talking about. :)
People’s natural response should instead be something like: ‘Hmm, at first blush this doesn’t seem effective to me, and I have a strong prior that most things aren’t effective, but maybe there’s something here I don’t understand yet. Let’s see if I can figure out what it is.’
Level of personal involvement in effective altruism: medium-high. But I wouldn’t be proud to identify myself as EA.
#39 - on EA having picked all the low-hanging fruit
Level of involvement: I’m not an EA, but I’m EA-adjacent and EA-sympathetic.
EA seems to have picked all the low-hanging fruit and doesn’t know what to do with itself now. Standard health and global poverty feel like trying to fill a bottomless pit. It’s hard to get excited about GiveWell Report #3543 about how we should be focusing on a slightly different parasite and that the cost of saving a life has gone up by $3. Animal altruism is in a similar situation, and is also morally controversial and tainted by culture war. The benefits of more long-shot interventions are hard to predict, and some of them could also have negative consequences. AI risk is a target for mockery by outsiders, and while the theoretical arguments for its importance seem sound, it’s hard to tell whether an organization is effective in doing anything about it. And the space of interventions in politics is here-be-dragons.
The lack of salient progress is a cause of some background frustration. Some of those who think their cause is best try to persuade others in the movement, but to little effect, because there’s not much new to say to change people’s minds; and that contributes to the feeling of stagnation. This is not to say that debate and criticism are bad; being open to them is much better than the alternative, and the community is good at being civil and not getting too heated. But the motivation for them seems to draw more from ingrained habits and compulsive behavior than from trying to expose others to new ideas. (Because there aren’t any.)
Others respond to the frustration by trying to grow the movement, but that runs into the real (and in my opinion near-certain) dangers of mindkilling politics, stifling PR, dishonesty (Sarah Constantin’s concerns), and value drift.
And others (there’s overlap between these groups) treat EA as a social group, whether that means house parties or memes. Which is harmless fun in itself, but hardly an inspiring direction for the movement.
What would improve the movement most is a wellspring of new ideas of the quality that inspired it to begin with. Apart from that, it seems quite possible that there’s not much room for improvement; most tradeoffs seem to not be worth the cost. That means that it’s stuck as it is, at best—which is discouraging, but if that’s the reality, EAs should accept it.
#32 - on the limitations of single orgs fixing problems
There seems to be a sense in effective altruism that the existence of one organization working on a given problem means that the problem is now properly addressed. The thought appears to be: ‘(Organization) exists, so the space of evaluating (organization function) is filled and the problem is therefore taken care of.’
Organizations are just a few people working on a problem together, with some slightly better infrastructure, stable funding, and time. The problems we’re working on are too big for a handful of people to fix, and the fact that a handful of people are working in a given space doesn’t suggest that others shouldn’t work on it too. I’d like to see more recognition of the conceptual distinction between the existence of an organization with a certain mission, and what exactly is and is not being done to accomplish that mission. We could use more volunteers/partners to EA organizations, or even separate organizations addressing the same issue(s) using a different epistemology.
To encourage this, I’d love to see more support for individuals doing great projects who are better suited to the flexibility of doing work independently of any organization, or who otherwise don’t fit a hole in an organization.
#32b - on EA losing existing and failing to gain new high-value people
The high-value people from the early days of effective altruism are disengaging, and the high-value people who might join are not engaging. There are people who were once quite crucial to the development of EA ‘fundamentals’ who have since parted ways, and have done so because they are disenchanted with the direction in which they see us heading.
More concretely, I’ve heard many reports to the effect: ‘EA doesn’t seem to be the place where the most novel/talented/influential people are gravitating, because there aren’t community quality controls.’ While inclusivity is really important in most circumstances, it has a downside risk here that we seem to be experiencing. I believe we are likely to lose the interest and enthusiasm of those who are most valuable to our pursuits, because they don’t feel like they are around peers, and/or because they don’t feel that they are likely to be socially rewarded for their extreme dedication or thoughtfulness.
I think that the community’s dip in quality comes in part from the fact that you can get most of the community benefits without being a community benefactor—e.g. invitations to parties and likes on Facebook. At the same time, one incurs social costs for being more tireless and selfless (e.g., skipping parties to work), for being more willing to express controversial views (e.g., views that conflict with clan norms), or for being more willing to do important but low-status jobs (e.g., office manager, assistant). There’s a lot that we’d need to do in order to change this, but as a first step we should be more attentive to the fact that this is happening.
On the Bay Area community
#18 - on improving status in the Bay Area community so people feel less insecure
Speaking regarding the Bay Area effective altruism community: There’s something about status that could be improved. On the whole, status (and what it gets you) serves a valuable purpose; it’s a currency used to reward those producing what the community values. The EA community is doing well at this in that it does largely assign status to people for the right things. At the same time, something about how status is being done is leaving many people feeling insecure and disconnected.
I don’t know what the solution is, but you said magic wand, so I’ll punt on what the right response should be.
If I could change the effective altruism community tomorrow, I would move it somewhere other than the Bay Area, or at least make it more widely known that moving to the Bay is defecting in a tragedy of the commons and makes you Bad.
If there were large and thriving EA communities all over the place, nobody would need to move to the Bay, we’d have better outreach to a number of communities, and fewer people would have to move a long distance, get US visas, or pay a high rent in order to get seriously involved in EA. The more people move to the Bay, the harder it is to be outside the Bay, because of the lack of community. If everyone cooperated in developing relatively local communities, rather than moving to the bay, there’d be no need to move to the Bay in the first place. But we, a community that fangirls over ‘Meditations on Moloch’ (http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) and prides itself on working together to get shit done, can’t even cooperate on this simple thing.
I know people who are heartbroken and depressed because they need community and all their partners are in the Bay and they want to contribute, but they can’t get a US visa or they can’t afford Bay Area rent levels, so they’re stuck friendless and alone in whatever shitty place they were born in. This should not be a hard problem to solve if we apply even a little thought and effort to it; any minimally competent community could pull this off.
The way that we talk about policy in the effective altruism community is unsophisticated. I understand that this isn’t most EAs’ area of expertise, but in that case just running around and saying ‘we should really get EAs into policy’ is pretty unhelpful. Anyone who is fairly inexperienced in ‘policy’ could quickly get a community-knowledge comparative advantage just by spending a couple of months doing self-study and having conversations, and could thereby start helpfully orienting our general cries for more work on ‘policy.’
To be fair, there are some people doing this. But why not more?
On newcomers
#3 - talking about MIRI to newcomers makes you seem biased
Stop talking about AI in EA, at least when doing EA outreach. I keep coming across effective altruism proponents claiming that MIRI is a top charity, when they seem to be writing to people who aren’t in the EA community who want to learn more about it. Do they realize that this comes across as very biased? It makes it seem like ‘I know a lot about an organization’ or ‘I have friends in this organization’ are EA criteria. Most importantly, talking about AI in doomsday terms sounds kooky. It stands apart from the usual selections, as it’s one of the few that’s ‘high stakes.’ I rarely see effective altruists working towards environmental, political, anti-nuclear, or space exploration solutions, which I consider of a similar genre. I lose trust in an effective altruist’s evaluations when they evaluate MIRI to be an effective charity.
I’ve read a few articles and know a few EA people.
I work for an effective altruism organization. I’d say that over half of my friends are at least adjacent to the space and talk about EA-ish topics regularly.
The thing I’d most like to change is the general friendliness of first-time encounters with EA. I think EA Global is good about this, but house parties tend to have a very competitive, emotionally exhausting ‘everyone is sizing you up’ vibe, unless you’re already friends with some people from another context.
Next-most-important (and related), probably, is that I would want everyone to proactively express how much confidence they have in their statements in some fashion, through word choice, body language, and tone of voice, rather than providing a numerical description only when explicitly asked. This can prevent false-consensus effects and stop people from assuming that a person must be totally right because they sound so confident.
More selfishly, another thing I wish for is more social events that consist of 10-20 people doing something in the daytime with minimal drugs, rather than 50-100 people at a wild party. I just enjoy small daytime gatherings so much more, and I would like to get closer to the community, but I rarely have the energy for parties.
#5 - lack of good advice to newcomers beyond donate & advocate
At multiple EA events that I’ve been to, new people who were interested and expressed curiosity about what to do next were given no advice beyond ‘donate money and help spread the message’—even by prominent EA organizers. My advice to the EA community would be to stop focusing so much on movement-building until (a) EA’s epistemics have improved, and (b) EAs have much more developed and solid views (if not an outright consensus) about the movement’s goals and strategy.
To that end, I recommend clearly dividing ‘cause-neutral EA’ from ‘cause-specific effectiveness’. The lack of a clear divide contributes to the dilution of what EA means. (Some recent proposals I’ve seen framed by people as ‘EA’ have included a non-profit art magazine and a subcommunity organized around fighting Peter Thiel.) If we had a notion of ‘in this space/forum/organization, we consider the most effective thing to do given that one cares primarily about art’ or ‘given that one is focused on ending Alzheimer’s, what is the most effective thing to do?‘, then people could spend more time seriously discussing those questions and less bickering over what counts as ‘EA.’
The above is if we want a big-tent approach. I’m also fine with just cause-neutral evaluation and the current-seemingly-most-important-from-a-cause-neutral-standpoint causes being deemed ‘EA’ and all else clearly being not, no matter who that makes cranky.
#23 - on bait & switch, EA as principles, getting an elite team at meta orgs like CFAR / CEA
I used to work for an organization in EA, and I am still quite active in the community.
1 - I’ve heard people say things like, ‘Sure, we say that effective altruism is about global poverty, but—wink, nod—that’s just what we do to get people in the door so that we can convert them to helping out with AI / animal suffering / (insert weird cause here).’ This disturbs me.
2 - In general, I think that EA should be a principle, not a ‘movement’ or set of organizations. I see no reason that religious charities wouldn’t benefit from exposure to EA principles, for example.
3 - I think that the recent post on ‘Ra’ was in many respects misguided, and that in fact a lack of ‘eliteness’ (or at least some components of it) is one of the main problems with many EA organizations.
There’s a saying, I think from Eliezer, that ‘the important things are accomplished not by those best suited to do them, or by those who ought to be responsible for doing them, but by whoever actually shows up.’ That saying is true, but people seem to use this as an excuse sometimes. There’s not really any reason for EA organizations to be as unprofessional and inefficient as they are. I’m not saying that we should all be nine-to-fivers, but I’d be very excited to see the version of the Centre for Effective Altruism or the Center for Applied Rationality that cared a lot about being an elite team that’s really actually trying to get things done, rather than the version that’s sorta ad-hoc ‘these are the people who showed up.’
4 - Things are currently spread over way too many sources: Facebook, LessWrong, the EA Forum, various personal blogs, etc.
Rob Bensinger replied:
I’d be interested to hear more about examples of things that CEA / CFAR / etc. would do differently if they were ‘an elite team that’s really actually trying to get things done’; some concreteness there might help clarify what the poster has in mind when they say there are good things about Ra that EA would benefit from cultivating.
For people who haven’t read the post, since it keeps coming up in this thread: my impression is that ‘Ra’ is meant to refer to something like ‘impersonal, generic prestige,’ a vague drive toward superficially objective-seeming, respectable-seeming things. Quoting Sarah’s post:
“Ra involves seeing abstract, impersonal institutions as more legitimate than individuals. For instance, I have the intuition that it is gross and degrading to pay an individual person to clean your house, but less so to hire a maid service, and still less so if a building that belongs to an institution hires a janitor. Institutions can have authority and legitimacy in a way that humans cannot; humans who serve institutions serve Ra.
“Seen through Ra-goggles, giving money to some particular man to spend on the causes he thinks best is weird and disturbing; putting money into a foundation, to exist in perpetuity, is respectable and appropriate. The impression that it is run collectively, by ‘the institution’ rather than any individual persons, makes it seem more Ra-like, and therefore more appealing. [...]
“If Horus, the far-sighted, kingly bird, represents “clear brightness” and “being the rightful and just ruler”, then Ra is a sort of fake version of these qualities. Instead of the light that distinguishes, it’s the light too bright to look at. Instead of clear brightness, it’s smooth brightness.
“Instead of objectivity, excellence, justice, all the “daylight” virtues associated with Horus (what you might also call Apollonian virtues), Ra represents something that’s also shiny and authoritative and has the aesthetic of the daylight virtues, but in an unreal form.
“Instead of science, Ra chooses scientism. Instead of systematization and explicit legibility, Ra chooses an impression of abstract generality which, upon inspection, turns out to be zillions of ad hoc special cases. Instead of impartial justice, Ra chooses a policy of signaling propriety and eliteness and lack of conflicts of interest. Instead of excellence pointed at a goal, Ra chooses virtuosity kept as an ornament.
“(Auden’s version of Apollo is probably Ra imitating the Apollonian virtues. The leadership-oriented, sunnily pragmatic, technological approach to intellectual affairs is not always phony — it’s just that it’s the first to be corrupted by phonies.)
“Horus is not Ra. Horus likes organization, clarity, intelligence, money, excellence, and power — and these things are genuinely valuable. If you want to accomplish big goals, it is perfectly rational to seek them, because they’re force multipliers. Pursuit of force multipliers — that is, pursuit of power — is not inherently Ra. There is nothing Ra-like, for instance, about noticing that software is a fully general force multiplier and trying to invest in or make better software. Ra comes in when you start admiring force multipliers for no specific goal, just because they’re shiny.
“Ra is not the disposition to seek power for some goal, but the disposition to approve of power and to divert it into arbitrariness. It is very much NOT Machiavellian; Machiavelli would think it was foolish.”
Nick Tarleton replied:
Huh. I really like and agree with the post about Ra, but also agree that there are things about… being a grown-up organization?… that some EA orgs I’m aware of have been seriously deficient in in the past. I don’t know whether some still are; it seems likely a priori. I can see how a focus on avoiding Ra could cause neglect of those things, but I still think avoiding Ra is critically important, it just needs to be done smarter than that. (Calling the thing ‘eliteness’, or positively associating it with Ra, feels like a serious mistake, though I can’t articulate all of my reasons why, other than that it seems likely to encourage focusing on image over substance. I think calling it ‘grown-upness’ can encourage that as well, and I don’t know of a framing that wouldn’t (this is an easy thing to mistake image for / do fronting about, and focusing on substance over image seems like an irreducible skill / mental posture), but ‘eliteness’ feels particularly bad. ‘Professionalism’ feels in between.)
Anonymous #23 replied:
CEA’s internal structure is very ad-hoc and overly focused on event planning and coordination, at least in my view. It also isn’t clear that what they’re doing is useful. I don’t really see the value add of CEA over what Leverage was doing back when Leverage ran the EA Summit.
Most of the cool stuff coming out of the CEA-sphere seems to be done by volunteers anyway. This is not to denigrate their staff, just to question ‘Where’s the beef?’ when you have 20+ people on the team.
For that matter, why do conversations like these mostly happen on meme groups and private Facebook walls instead of being facilitated or supported by CEA?
Looking at the CFAR website, it seems like they have something like 14-15 employees, contractors, and instructors, of which only 3-4 have research as part of their job? That’s… not a good ratio for an organization with a mission that relies on research, and maybe this explains why there hasn’t been too much cool new content coming out of that sector?
To put things another way, I don’t have a sense of rapid progress being made by these organizations, and I suspect that it could be with the right priorities. MIRI certainly has its foibles, but if you look over there it seems like they’re much more focused/productive, and it’s readily apparent how each of their staffers contributes to the primary objective. Were I to join MIRI, I think I would have a clear sense of, ‘Here I am, part of a crack team working to solve this big problem. Here’s how we’re doing it.’ I don’t get that sense from any other EA organizations.
As for ‘Ra,’ it’s not that I think fake prestige is good; it’s that I think people way overcorrect, shying away from valid prestige in the name of avoiding fake prestige. This might be a reflection of the Bay Area and Oxford ‘intellectual techie’ crowds more than EA in general, but it’s silly any way you slice it.
I want an EA org whose hiring pitch is: ‘We’re the team that is going to solve (insert problem), and if you join us everyone you work with will be smart, dedicated, and hardworking. We don’t pay as much as the private sector, but you’ll do a ton more, with better people, more autonomy, and for a better cause. If that sounds good, we’d love to talk to you.’
This is a fairly ‘Ra’-flavored pitch, and obviously it has to actually be true, but I think a lot of EAs shy away from aiming for this sort of thing, and instead wind up with a style that actually favors ‘scrappiness’ and ‘we’re the ones who showed up.’ I bet my pitch gets better people.
Reflecting on the question of CEA’s mandate, I think it’s challenging that CEA has always tried to be both of the following, and this has not worked out well:
1) a community org
2) a talent recruitment org
When you’re 1), you need to think about the individual’s journey in the movement. You invest in things like community health and universal groups support. It’s important to have strong lines of communication and accountability to the community members you serve. You think about individuals’ issues and how to help address them. (Think your local Y, community center, or church.)
When you’re 2), you care about finding and supporting only the top talent (and, by extension, actors that aid you in this mission). You care about having a healthy funnel of individuals who are at the top of their game. You care about fostering an environment that is attractive (potentially elite), prestigious, and high-status. (Think Y Combinator, Fulbright, or Emergent Ventures fellows.)
I think these goals are often overlapping and self-reinforcing, but also at odds with each other.
It is really hard to thread that needle well—it requires a lot of nuanced, high-fidelity communication—which in turn requires a lot of capacity (something historically in short supply in this movement).
I don’t think this is a novel observation, but I can’t remember seeing it explicitly stated in conversation recently.
I think the combination of 1 and 2 is such that you want the people who come through 1) to become the kind of talented people 2) is looking for. We should be empowering one another to be more ambitious. I don’t think I would have gotten my Emergent Ventures grant without EA.
(Pretty confident about the choice, but finding it hard to explain the rationale)
I have started using “member of the EA community” vs “EAs” when I write publicly.
Previously I cared a lot less about using these terms interchangeably (referring to myself as an EA didn’t seem inaccurate, it’s quicker, and I don’t really see it as tying my identity closely to EA), but over time I have changed my mind for a few reasons:
Many people I would consider “EA” (in the sense that they work on high-impact causes, socially engage with other community members, etc.) don’t consider themselves EA, but I think would likely consider themselves community members. I wonder if they read things about what “EAs” should do and don’t think it applies to them.
Using the term “an EA” contributes to the sense that there is one (monolithic?) identity that’s very core to a person’s being. E.g. if you leave the community do you lose a core part of your identity?
Finally, it also helps me be specific about the correct reference class. E.g. compare terms like “core EAs” with “leaders of EA-aligned organisations”, “decision makers at leading EA meta organisations”, or “thought leaders of the EA community”. (There is also a class of people who don’t directly wield power but have influence over decision makers; I’m not sure what a good phrase to describe this role is.)
Many people I would consider “EA” (in the sense that they work on high-impact causes, socially engage with other community members, etc.) don’t consider themselves EA, but I think would likely consider themselves community members
This is reasonable, but I think the opposite applies as well, i.e. people can be EA (committed to the philosophy, taking EA actions) but not a member of the community. Personally, this seems a little more natural than the reverse, but YMMV (I have never really felt the intuitive appeal of believing in EA and engaging in EA activities but not describing oneself as “an EA”).
There are people who I would consider “EA” who I wouldn’t consider a “community member” (e.g. if they were not engaging much with other people in the community professionally or socially), but I’d be surprised if they label themselves “EA” (maybe they want to keep their identity small, or don’t like being associated with the EA community).
I think there’s actually one class of people I’ve forgotten—which is “EA professionals”—someone who might professionally collaborate or even work at an EA-aligned organization, but doesn’t see themselves as part of the community. So they would treat an EAG as a purely professional conference (vs. a community event).
There are people who I would consider “EA” who I wouldn’t consider a “community member” (e.g. if they were not engaging much with other people in the community professionally or socially), but I’d be surprised if they label themselves “EA” (maybe they want to keep their identity small, or don’t like being associated with the EA community).
Fwiw, I am broadly an example of this category, which is partly why I raised the example: I strongly believe in EA and engage in EA work, but mostly don’t interact with EAs outside professional contexts. So I would say “I am an EA”, but would be less inclined to say “I am a member of the EA community” except insofar as this just means believes in EA/does EA work.
“People in EA” (not much better, but hits the amorphous group of “community members plus other people who engage in some way” without claiming that they’d all use a particular label)
“People practicing EA” (for people who are actually taking clear actions)
“Community members”
“People” (for example, I think that posts like “things EAs [should/shouldn’t] do” are better as “things people [should/shouldn’t] do” — we aren’t some different species, we are just people with feelings and goals)
Note: This is collected from a number of people in an EA Facebook group that I found in my Google Drive. I figured it was worth posting up as a shortform in case others find it valuable.
Tips
Delegation to those with more time/suitability
Don’t have unprioritised tasks on your to-do list. Put them somewhere else, out of sight—if they’re a good idea, in a ‘some day’ pile; if they’re not very important, bin them (or delegate)
I keep my actual “To Do” list v. small these days and don’t agree to do anything outside of it, but I do put a lot of stuff on a list I call “Ideas” (which is still roughly prioritised). So then I have achievable goals and anything else feels like a bonus.
On saying no but not wanting to offend: I think that saying I’m too busy for what they’ve requested and offering something much smaller instead has helped me. Seems to be a good deal in terms of time saved and how offended they are. (This was when I gave up on my plan of just caring much less about upsetting people...too difficult!)
For regular things, I see two angles to work on: saying no more often before committing, and letting go of things you’ve already got on / making them less sticky.
For saying no: I actually had a really bad case of this with getting excited about something in the moment and then committing/investing in it and regretting it later. It was really bad, and after one regretful experience I made myself a form which I had to fill out before I could take on any new project, asking various questions about the project and how it connected to my long-term (year-scale) goals. I can forward you this form if you like. I only used it a few times but I think it helped. Curiously, just the process of making the form helped _a lot_ in hindsight (some self-signalling thing probably; not sure if it works as well if you’re conscious of it, quite possibly).
For letting go… what makes it hard to let go of things for you? Here is a list of things that make it hard for me and some possible solutions:
- I want the thing to happen and I’m scared it won’t if I pull out (delegate, communicate and encourage others to take more ownership)
- I’m scared the chance will be gone later if I don’t take it now (journal about it, is it really true, is it really that bad if I miss it, can I achieve the same goal another way)
- That I’ll let someone down, or a feeling that that’s the case (check with the person)
For one-off things:
Are the sources of these often the same or different? If the same, maybe talking to the source so they can help you filter would be good. If different, it sounds like this happens by nature of your role—is that the role you want to be in? Is there a way to triage/filter a bit first (again, maybe someone else can do that)?
As time goes on you might want to move the bar for how good an opportunity should be before you take it on. For example, maybe you’re happy to give talks for good causes but now you’re getting more requests. Assuming you don’t want to go above a certain amount of giving talks, you’ll have to say no to good causes in order to say yes to great ones, or otherwise you’ll end up eating more time than you wanted to on giving talks. You can also pass on the good but not great ones to others (although this also takes some time).
Rise by Patty Azzarello. The purpose of the book is to talk about career planning and how to be a good manager, but to do that it also talks about HOW to work in Part 1. Many of the things she wrote about resonated with me because I have tried to just push through an unrealistic workload instead of thinking more strategically. It’s also written in a nice and short way without too much fluff around it. And I found the advice to be actionable. Not just “Prioritise better!” but instead: “Here is the process of figuring out your ruthless priorities”
Thank you so much for this—I found it surprisingly comprehensive given its brevity. I especially appreciate you outlining the various ways in which you address the motivations behind something being hard to let go, which feel more concrete than some advice I’ve come across.
I made myself a form which I had to fill out before I could take on any new project which asked me various questions about the project and how it connected to my long-term (year-scale) goals.
I would be really interested in taking a peek at the form. : ) Delegating more is something I’m working on and I feel like I’m slowly becoming better at it, but clearly still not good enough since I continue to burn out.
Hey Miranda! Actually this was a collection of other people’s (pretty cool) responses so sadly I don’t have the form :(. Agreed that delegating is hard!
let me see if I can ask the original commenter—I definitely think it would be valuable!
Curious if people have tried to estimate the cost of burnout.
The things I think we should care about, but without numbers or estimates:
How much does burnout directly reduce productivity?
E.g. 6-12 months on average; the longer the grind, the longer the burnout
The long-term reduction in total max capacity over time or something
Let’s say you had 60K hours before burnout; after, you have like 40K because you just can’t work as hard.
How much does burnout increase the likelihood that the person doesn’t pursue a high-impact career (i.e. leaves direct work roles)?
What effect does a person’s burnout have on their EA network (e.g. their colleagues, friends, etc.)?
E.g. if they’re on a team, it could marginally increase the chance other team members burn out because they now have more work (+ creating negative associations with work)
E.g. their friends & local community might have a more negative view of the community as one where your friends burn out
EA is risk-constrained (2020) by Edo Arad. The post makes the claim that EAs in general are risk-averse on an individual level, which can restrict movement-level impact.
The career coordination problem (2019) by Mathias Kirk Blonde. Short account of how the EA operations bottleneck was managed and a suggestion to expand career coaching capacity.
The Case for the EA Hotel by Halffull. Kind of a summary of the above constraints, explaining how the EA Hotel could address the lack of mobility in the middle (what the OP calls a “chasm”) and trying to explain the vetting and talent constraints in the EA community. The first part is especially useful for outlining this underlying model.
Which community building projects get funded? By AnonymousEAForumAccount. It raises an important question, but I (Vaidehi) think the analysis misses the important questions. I’ve built off the original spreadsheet with categories here.
Most of these are logistical / operational things on how I can improve my own experience at EAG
Too much talking / needing to talk too loudly
Carry an info card / lanyard which has blurbs on my role, organisation & the projects I want to talk to people about and ask them to read it before we start our 1-1. (This is probably a little suboptimal if you need to actively get people super pumped about your ideas)
More walking conversations in quiet areas. This year the EAG had a wonderful second location with an indoor conservatory that was peaceful, quiet and beautiful. Everyone I brought there really liked it because it was a refreshing break from the conference. If there isn’t something like this in future EAGs I’ll try to find a good ~25 minute walking route on the first day of the conference.
Shop talk vs socializing during 1-1s
I feel quite conflicted on this
On one hand, it makes sense, especially when doing more open-ended networking (e.g. this person does climate stuff, I feel it’s useful to meet them but am not sure how exactly it would be useful). Hopefully, the info card saves some time. On the other hand, shop talk is absolutely exhausting, and I feel that I’m not at full capacity for at least 50% of EAG.
It’s sometimes hard to know which people you should meet in person if there are say 20 all equally good people but you only have time to meet 5 in person. Don’t know if there’s a good decision-making heuristic there
If I had to guess, it’s more important to meet people less connected to the community / at their first EAG
And sometimes more engaged people you haven’t met just to establish an in-person connection
Maybe 5-10 minute connections can just be a good thing to do to create a touchpoint with people
I would basically have all conversations be socializing, and then reach out to the shop talk people after EAG to set longer and more intentional discussions that are probably going to be more productive.
Make new EAs feel more comfortable
I was very impressed with the EAG newbies I met—they were about 10x more organized and proactive than I was at my first EAG.
But because they were doing all this AND it was their first EAG, they were also (very!) nervous / anxious. While I tried to help them feel more at ease, I don’t think it helped much.
I feel like this is probably a very difficult issue to make traction on individually, but I want to reflect more on this.
An official conference length of 2 days isn’t enough time.
I don’t think anything can be done on this front, but thought I’d mention it anyways
Ideally I’d like to have 2 days of conference and 2-3 days before/after for more relaxed socializing
Basically, these are ways of spreading EA ideas, philosophies or furthering concrete EA goals in ways that are different from the typical community building models that local groups use.
Maybe pretty early on, it just became obvious that there wasn’t a lot of value in preaching to people on a topic that they weren’t necessarily there for, and that I had a lot of thoughts on the conversations people were already having.
Then I think one thing you can do to share any reasoning system (though it works particularly well for effective altruism) is just to apply it consistently, in a principled way, to problems that people care about. Then, they’ll see whether your tools look like useful tools. If they do, then they’ll be interested in learning more about that.
…
My ideal effective altruist movement had insightful, nuanced, productive takes on lots and lots of other things so that people could be like, “Oh, I see how effective altruists have tools for answering questions. I want the people who have tools for answering questions to teach me about those tools. I want to know what they think the most important questions are. I want to sort of learn about their approach.”
How valuable is building a high-quality (for-profit) event app for future EA conferences?
There are 6 EAG(x) conferences a year. This number will probably increase over time, and more conferences will come up as EA grows; I’d expect somewhere between 80 and 200 EA-related conferences and related events in the next 10 years. This includes cause-area-specific conferences like Catalyst and other large events.
A typical 2.5-day conference with on average ~300 attendees spending 30 hours each = 9,000 man-hours per conference, which comes to a range of 720,000-1,800,000 man-hours over 10 years. Of this time, I’d expect 90% to be taken up by meetings, attending events, eating, etc., leaving 72,000-180,000 hours. Saving 10% of that remaining time (i.e. 1% of the total) comes to 7,200-18,000 hours, which seems pretty useful!
For reference, 1 year of work (a 40-hour work week for 50 weeks) = 2,000 hours.
Pricing estimate if we pay for an event conferencing app: Swapcard, recently used by CEA for EAGx events, costs approximately USD $7 per user.
Using my previous estimate, the total cost over 10 years would be between USD $168,000 and $420,000 without any discounting. Discounting 50% for technology becoming cheaper and charity discounts, we could conservatively say $84,000-$210,000 total cost.
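A minimal sketch of the arithmetic above, using only the figures stated in this shortform (the 1% time-saving assumption and the per-user price are my rough estimates, not measured values):

```python
# Back-of-the-envelope for the event-app estimate above.
# All inputs are the rough figures stated in this shortform.

conferences_10yr = (80, 200)   # expected EA-related conferences over 10 years
attendees = 300                # average attendees per conference
hours_per_attendee = 30        # ~2.5-day conference
savings_fraction = 0.01        # assume a better app saves ~1% of total attendee time
swapcard_cost_per_user = 7     # USD, approximate per-user price
tech_discount = 0.5            # discount for cheaper tech / charity pricing

for n in conferences_10yr:
    total_hours = n * attendees * hours_per_attendee
    hours_saved = total_hours * savings_fraction
    licence_cost = n * attendees * swapcard_cost_per_user
    discounted_cost = licence_cost * tech_discount
    print(f"{n} conferences: {total_hours:,} attendee-hours, "
          f"~{hours_saved:,.0f} hours saved, "
          f"licence cost ${licence_cost:,} (~${discounted_cost:,.0f} discounted)")

# 80 conferences:  720,000 hours, ~7,200 saved, $168,000 (~$84,000 discounted)
# 200 conferences: 1,800,000 hours, ~18,000 saved, $420,000 (~$210,000 discounted)
```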
Not sure what to do with this information, or how to compute the value of this money saved (assuming our benevolent EA ally / app creator gives us access for a heavily discounted price, otherwise the savings are not that important).
Given the pandemic, I would actually upgrade the potential cost effectiveness of this, because we can now add Student Summits and EAGxVirtuals as potentially regular events, bringing the total in a non-COVID year to up to 8 events.
Hm I think Swapcard is good enough for now, and I like it more than the Grip app. I think this comes down to what specific features people want in the conference app and why this would make things easier or better.
Of course it would be good to centralize platforms in the future (i.e. maybe the EA Hub also becomes a Conference platform), but I don’t see that being a particularly good use of time.
+1 the math there. How does building an app compare to throwing more resources at finding better pre-existing apps?
I’ll just add I find it kind of annoying how the event app keeps getting switched up. I thought Grip was better than whatever was used recently for EAGxAsia_Pacific (Catalyst?).
I think CEA has looked at a number of apps—it would definitely be worth checking with them to see how many apps they’ve considered out of the total number of apps available, and possibly follow the 37% rule.
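For anyone unfamiliar, here is a minimal sketch of the 37% (secretary-problem) stopping rule mentioned above: look at the first ~37% of options without committing, then take the first one that beats everything seen so far. The app names and scores below are made up for illustration only.

```python
import math

def pick_by_37_rule(options):
    """Secretary-problem heuristic: observe the first n/e options without
    choosing, then take the first later option better than all of those."""
    n = len(options)
    cutoff = math.floor(n / math.e)  # ~37% of the list
    best_seen = max(score for _, score in options[:cutoff]) if cutoff else float("-inf")
    for name, score in options[cutoff:]:
        if score > best_seen:
            return name
    return options[-1][0]            # fall back to the last option if none beat the benchmark

# Hypothetical example: (app, how good it looked on evaluation)
apps = [("AppA", 6), ("AppB", 4), ("Swapcard", 7), ("Grip", 5), ("AppE", 8), ("AppF", 3)]
print(pick_by_37_rule(apps))         # -> "Swapcard" in this made-up ordering
```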
It seems plausible, though overall not that likely, to me that maybe the LessWrong team should just build our own conference platform into the forum. We might look into that next year as we are also looking to maybe organize some conferences.
That would be interesting! I’d be interested to see if that happens—I think there are probably benefits from integration with the LW/EA Forum. In what scenario do you think this would be most likely?
I think it’s most likely if the LessWrong team decides to run a conference, and then after looking into alternatives for a bit, decides that it’s best to just build our own thing.
I think it’s much more likely if LW runs a conference than if CEA runs another conference, not because I would want to prioritize a LW conference app over an EAG app, but because I expect the first version of it to be pretty janky, and I wouldn’t want to inflict that on the poor CEA team without being the people who built it directly and know in which ways it might break.
Quick BOTEC of person-hours spent on EA Job Applications per annum.
I created a Guesstimate model estimating that a total of ~14,000 to 100,000 person-hours, or ~7 to 51 FTE, are spent per year (90% CI). This comes to an estimated USD $320,000 to $3,200,000 of unpaid labour time.
All assumptions for my calculations are in the Guesstimate
The distribution of effort spent by candidates is heavy-tailed; a small percentage of candidates may spend 3 to 10x more time than the median candidate.
I am not very good at interpreting the Guesstimate, so if someone can state this better / more accurately that would be helpful
Keen to get feedback on whether I’ve over/underestimated any variables.
I’d expect this to grow at a rate of ~5-10% per year at least.
Sources: My own experience as a recruiter, applying to EA jobs and interviewing staff at some EA orgs.
Edited the unpaid labour time to reflect Linch’s suggestions.
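Since the Guesstimate itself isn’t reproduced here, here is a minimal point-estimate sketch of how a BOTEC like this might be structured. Every number below is an illustrative placeholder (except the 75% unpaid share discussed in this thread), not the actual value from the model:

```python
# Illustrative point-estimate version of the job-application BOTEC.
# All inputs are placeholders; the real distributions live in the Guesstimate model.

applications_per_year = 10_000   # total applications to EA org roles (placeholder)
hours_per_application = 5        # typical effort per application (placeholder)
unpaid_fraction = 0.75           # share of those hours that are unpaid (per this thread)
hourly_value_usd = 30            # assumed value of applicants' time (placeholder)
fte_hours_per_year = 2_000       # 40 h/week * 50 weeks, as noted above

total_hours = applications_per_year * hours_per_application
unpaid_hours = total_hours * unpaid_fraction
print(f"Total: {total_hours:,} h ({total_hours / fte_hours_per_year:.0f} FTE), "
      f"unpaid: {unpaid_hours:,.0f} h (~${unpaid_hours * hourly_value_usd:,.0f})")
# With these placeholders: 50,000 h (25 FTE), 37,500 unpaid h (~$1,125,000),
# which falls inside the 90% CI quoted above.
```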
I’ll adjust the estimate a bit higher.
In the Guesstimate I do discount the hours to say that 75% of the total hours are unpaid (trial week hours come to 5% of the total hours).
I did not review the model, but only 75% of hours being unpaid seems much too low based on my experience having gone through the job hiring process (including later stages) with 10-15 EA orgs.
Okay, so I used a different method to estimate the total man-hours, and my new estimate is something like 60%. I basically assumed that 50% of Rounds 2-4 in the application process is paid, and 100% of the work trial.
I expect that established / longtermist orgs are disproportionately likely to pay for work tests, compared to new or animal / GH&D orgs.
I think Josh was claiming that 75% was “too low”, as in the total % of unpaid hours being more like 90% or something.
When I applied to a bunch of jobs, I was paid for ~30 of the ~80 hours I spent (not counting a long CEA work trial — if you include that, it’s more like 80 out of 130 hours). If you average Josh and I, maybe you get back to an average of 75%?
*****
This isn’t part of your calculation, but I wonder what fraction of unique applicants to EA jobs have any connection to the EA community beyond applying for one job?
In my experience trying to hire for one role with ~200 applicants, ~1/3 of them neither had any connection to EA in their resumes nor provided further information in their applications about what drew them to EA. This doesn’t mean there wasn’t some connection, but a lot of people just seemed to be looking for any job they could find. (The role was more generic than some and required no prior EA experience, so maybe drew a higher fraction of outside applicants.)
Someone having no other connection to the EA community doesn’t mean we should ignore the value of their time, and the people who apply to the most jobs are likely to have the strongest connections, so this factor may not be too important, but it could bear consideration for a more in-depth analysis.
Experimenting to see what kind of feedback this gets and whether it’s useful to share very early stage thoughts publicly. If anyone is interested or already exploring this topic, feel free to reach out—I have written up a (slightly more in-depth) proposal I can share.
Problem: There might be many people in EA who could benefit from career coaching.
Size: I estimate ~300 to 1000 (60% CI) people might be underconfident or less ambitious than they could be. 80K frequently mentions underconfidence.
These are people with basic intro knowledge who are unable to make career progress due to miscalibration, lack of knowledge, negative feelings associated with networking / applying, etc.
Tractability: Career coaches are very common & help people become confident and land dream jobs / improve their current career situation.
Neglectedness: Seems unlikely to me that existing coaches cover this need. I am also fairly confident that existing groups / CBs do not cover this need.
Proposal: An EA career coach who’s:
Improving clients’ calibration on good-fit positions (both EA & non-EA)
Giving practical advice & guidance (e.g. resumes, interviews, long-term career planning)
Providing encouragement & boosting self-confidence
Helping clients maintain a healthy relationship with the job hunt / reduce stress, e.g. by providing unique insights into the EA job landscape
Key uncertainties: (* = most uncertain)
Is there demand for this idea and how big?*
Is that demand already being met?
Is someone working on this going to significantly improve the chances that an individual makes positive progress in their career path? *
Are there people or groups able and / or willing to provide this service?*
(subquestion if people are willing but not able: Are the skills easily trainable?)
Would people reach out for such a service (and be willing to pay) on their own?*
Next steps: (5-20 hrs)
Talk to people (have emailed some EA coaches)
Run experiments to gauge demand e.g. interest form for online career events (e.g. resume writing workshop) or starting a peer-to-peer career support group
I plan to keep adding to and refining this list over time; I’ve just put my current list here in case anyone else has ideas.
Movement Building: Any work to build a movement including but not limited to: community infrastructure, community building, cause, field or discipline development and outreach. Movement building does not need to involve using or spreading the EA brand.
Movement building is a subset of “Meta EA” or “meta altruism”.
Types of Movement Building:
Community Infrastructure: The development of community-wide products and services that help develop the community. Online examples include building wikis, forums, tools and websites. Offline examples include conferences, community houses, and regional networks.
Note: Some community infrastructure may be limited to certain subgroups within the community, such as events and services for leaders or affiliated organisations. Such events might still provide benefits to the wider community, especially when they improve coordination and communication, and where relevant should be considered as infrastructure.
Community Building: Influencing individuals to take actions based on the ideas and principles of the EA movement. This is often accomplished through the development of groups (local & online) organised by geography, shared interests, careers, causes and more. Local groups are the most common, but certain locations (e.g. “hub” cities like London) may also have subgroups based on cause or career.
Field or Discipline Development: Developing new or influencing existing academic disciplines or fields through the creation of new organisations, advocacy or funding academics to work in this field. Closely related to Professionalization.
Professionalization: Giving an occupation, activity, or group professional qualities. This can be done by creating a career out of, increasing the status of, raising the qualifications required for, or improving the training given for an occupation, activity or group.
Other terms
CEA’s Funnel Model: A model of movement building which focuses on the different stages of involvement people have with EA, based on corporate sales funnel models.
Community: A group of people connected by a set of shared values, norms, or beliefs.
Alternative definition: “A community is a group of humans with a shared identity who care for each other.”—Konrad
Ideology: A set of ideas and beliefs that represent a particular worldview.
Network: A group of people with varying degrees of connection to each other.
Organic Movement Growth: Movement growth that occurs organically, without explicit intentions (other than perhaps very broad actions like mass-targeted publications).
Social Movement: A group of people working to achieve a goal or set of goals through collective action. Differentiated from an intellectual movement because of the specification and emphasis on concrete actions.
Status: This was a post I’d drafted 4 years ago on climate change in EA. Not sure I stand by all of it, but I thought it might be worth sharing.
Let’s make room for climate change
What this post is NOT saying:
* deprioritise other cause areas
* redirect significant resources from other cause areas
* the average EA should go into climate change or become a climate scientist/advocate
* the average EA has comparative advantage in climate change
What this post IS saying
* having an EA org or projects related to climate change will be beneficial to EA
* certain climate solutions will be more tractable
* climate change as a cause area is much less uncertain than other cause areas, and interventions are also less uncertain
* get funding for this through funds outside of or adjacent to EA—funds which, counterfactually, would not have gone elsewhere
* treat climate change seriously as a GCR / x-risk multiplier
* show that EA has done its homework on climate change (whatever the results of that homework may be)
* attract people who are experts in the field to work on these issues (not redirect EA talent towards climate change)
Summary
This post is calling for the EA movement to become more climate change friendly. That is, to create space for potentially new orgs or EA-aligned projects working on climate change. This does not mean redirecting major resources away, but rather facilitating the inclusion of climate change in order to redirect non-EA resources into EA. In the best case scenario, this helps broader norm change in the nonprofit landscape towards funding more evidence-backed and rigorous nonprofits. Potentially, this could also get more funding into different cause areas as donors are exposed to EA ideas (low but not insignificant chance).
While the movement as a whole should focus on the best solution on the margin, there will likely never be enough jobs for everyone, so focusing on other cause areas will help multiply counterfactual impact. Instead of leaving the movement or working outside of it, individuals can keep thinking in EA terms, and it is possible that people will understand that.
The main reasons for this are because:
· The current likelihood of climate change is
· Climate Change is an x-risk multiplier. It increases the chances of almost all known x-risks. This needs to be modeled by someone better at modeling, but here are some plausible scenarios:
* Increased chance of civil wars
* Increased chance of nuclear wars
* Increased chance of biological disasters
· Climate change itself may not cause human extinction, but it could non-negligibly set humanity back by hundreds, if not thousands of years
* destabilizing current institutions and states
* in the event of a large part of the human population being killed, we would lose cultural knowledge and the benefits of a fully globalized economy
* it all depends on your time scale. If your time scale is millions of years
* it also depends on whether you care more about avoiding negative futures or more about 0 humans. There is a difference between centuries of low-quality lives vs 0 humans forever; low quality definitely seems worse, even if humanity eventually recovers, depending on the intensity of the bad lives post-apocalypse
· Climate Change is not funding constrained. A climate change EA org would easily be able to find funding elsewhere, thus not diverting EA funds and changing the current EA climate. It’s possible that, just due to the sheer size of the funding landscape, good climate change organizations may be orders of magnitude more effective than other causes. Redirecting those resources will be relatively easy.
· Climate Change is not talent constrained. Current EAs who don’t have experience in climate change will not need to switch career paths. Those who do can make use of their comparative advantage. I predict that if we signal an acceptance of climate change, we will attract leading experts in the field, due to EA’s track record and the lack of an effectiveness-focused climate change org.
· Many climate change efforts are exceptionally ineffective, because saying it’s not neglected doesn’t mean it’s not neglected in the right places.
· EA does not have a thorough understanding of climate change; we have based our understanding off of a handful of posts written by people with varying levels of familiarity with the cause. Ben’s post
· However, there is a wealth of information and expertise, which means that this is one of the few x-risk multipliers we actually have a good sense of.
· We are more certain about the potential solutions for climate change than about those for other cause areas, which means that direct work can be focused, quick and efficient.
· Climate change is analogous to EA because it:
* is scientific and evidence-based
* considers generations beyond our own
· Climate Change is an easy cause to do experiments in—to practice leading projects, skill-building and so on—because it is well-known and widespread. Even if it is less effective than other interventions, it might be easier to complete and execute projects in this area.
· Climate change has synergies with other EA cause areas like animal advocacy. Each can strengthen the case for the other. Farmed animals are a significant source of GHG emissions, and this number is projected to rise in developing nations.
When asking about resources, a good practice might be to mention resources you’ve already come across and why those sources weren’t helpful (if you found any), so that people don’t need to recommend the most common resources multiple times.
Also, once we have an EA-relevant search engine, it would be useful to refer people to that even before they ask a question in case that question has been asked or that resource already exists.
The primary goal of both suggestions would be to make questions more specific and in-depth, and hopefully either to expand movement knowledge or to identify gaps in it. The secondary goal would be to save time!
Some ideas for improving or reducing the costs of failure transparency
This is an open question. The following list is intended to be a starting point for conversation. Where possible, I’ve tried to make these examples as shovel-ready as possible. It would be great to hear more ideas, or examples of successfully implemented things.
Thanks to Abi Olvera, Nathan Young, Ben Millwood, Adam Gleave & Arjun Khandelwal for many of these suggestions.
Create a range of space(s) to discuss failure of any size.
I think the explicit intention of helping the community and providing relevant information is probably important to avoid goodharting.
Questions that could help you determine how valuable this mistake is to the wider community:
How generalizable was the failure?
(trying) to separate personal faults from external factors
What projects could you have done instead of this one?
Would you do the project again? (was it worth it)
Do you think your evaluation of the project is the same as that of 1) someone working on it with you, 2) a funder, 3) the recipients? How might they differ?
It would be especially valuable to have high-profile members of the EA community do this, since they have relatively less status to lose and their
Note that these spaces don’t have to all be public!
At EA conferences (this is more for signalling / setting norms)
A regular / semi-regular “Failed Projects” or “Things I changed my mind on” or “Evolution in my thinking” panel at EA Globals and other conferences
Asking EA public figures questions about failure at talks
At EA conferences or local groups: Events, workshops or meet-ups for people to share their thinking, changes in their thinking and mistakes and reflect on them together, collaboratively
Create committee(s) to evaluate failed projects
For larger projects with bigger stakes, it seems valuable to invest more resources into learning from them
Interviews with stakeholders & reading relevant documents & outputs
Aim to create a neutral, fair report which creates an accurate map of the problem
It seems plausible the EAIF would fund something like this
Pay grantees to follow-up on their projects
Could funders offer to pay an additional X dollars to grantees to get them to write up reflections or takeaways from their projects, successful or not? (This is probably more valuable for people being funded to work on very different kinds of projects, and who wouldn’t otherwise write them—e.g. not established organisations who’d spend time writing an annual report anyway)
Anonymous Mistake Reporting
Have a call for anonymous reports of failures that people might not want to report publicly (either their own or others)
Many colleges and universities have access via their libraries to a number of periodicals, papers, journals etc. But once you graduate, you lose access
With something like sci-hub we don’t really need access to many things on the academic side.
But it seems valuable for EAs to not be pay-walled from various journals or news outlets (e.g. Harvard Business Review or the Wall Street Journal) if they want to do research (if there’s a sci-hub for stuff like that, that could also work!)
We could probably develop this in a way where there are natural barriers to using it (e.g. core group of people are past EAG attendees, new members must be invited by at least 1 “core” member).
I have no clue what the cost for something like this would be, but it could be pretty easy to figure out by speaking to a university librarian or two! (I imagine probably in the ~$10,000-$100,000 range per year?)
How important is it to measure the medium term (5-50 years) impact of interventions?
I think that taking the medium-term impact into account is especially lacking in the meta space, since building out infrastructure is exactly the kind of project that could take several years to set up with little progress before gains are made.
I’d also be interested in how many / which organisations plan to measure their impact on this 5-50 year timescale. I think it would be very interesting to see the impact of various GH&D charities on a 5 or 10 year timescale.
The Local Career Advice Network recently completed a pilot workshop to help group organisers develop and implement robust career 1-1 strategies. During this process we compiled all existing EA careers advice & strategy, and found several open questions. This post provides an overview of the different kinds of careers research one could do. We will write more posts trying to explain the value of the different kinds of research.
Movement-level research
This research identifies bottlenecks in top causes and makes recommendations on specific actions individuals can take to address them.
Risks: The EA movement does not have as much impact as it could, unaddressed bottlenecks impede progress on certain causes, community members don’t know what the top options are and settle for something less impactful.
Non-EA examples: Studies predicting which jobs will get automated
Individual-level research
This research identifies best practices, frameworks and tips on how to have a successful, fulfilling career. This research could help individuals find a career that is the right choice for them: one that is aligned with their values, that they can excel at, and that they are motivated to stay in for the long term.
Risks: Causing harm by reducing an individual’s impact in the long-term, or pursuing a path where they don’t have a good personal fit. They might be turned away from the EA movement.
Intervention-level research
This research identifies interventions that can help deliver both movement-level and individual-level advice. Interventions prioritise
Risks: All of the above if it doesn’t balance between the two.
EA Examples: Animal Advocacy Careers is preregistering a study of their career 1-1 calls, Literature review on what works to promote charitable donations
Non-EA Examples: Research on the effectiveness of coaching/mentoring.
I think movement-level advice is most useful for setting movement-level strategy, rather than informing individual actions because personal fit considerations are quite important. However, I think this has the consequence that some paths are much more clearly defined than others, making it difficult for people who don’t have those interests to define a path.
Reasons for/against Facebook & plans to migrate the community out of there
Epistemic Status: My very rough thoughts. I am confident of the reasons for/against, but the last section is mostly speculation, so I won’t attempt to clarify my certainty levels.
Reasons for moving away from Facebook
Facebook promotes bad discussion norms (see Point 4 here)
Poor movement knowledge retention
Irritating to navigate: It’s easy to not be aware that certain groups exist (since there are dozens) and it’s annoying to filter through all the other stuff in Facebook to get to them
Reasons against
Extremely high switching costs
start-up costs (see Neels’ comment)
harder to pay attention to new platform
easier to integrate with existing social media
Offputting/intimidating to newer members
Past attempts haven’t taken off (e.g. the EA London Discussion Board, but that was also not promoted super hard)
Existing online space (the Forum) is a bit too formal / intimidating
How would we make the switch?
In order of increasing speculativeness
One subcommunity at a time. It seems like most EA groups are already more active in spaces other than Facebook, but it would be interesting to see this replicated at the cause area level, by understanding what community members’ needs are and seeing if there’s a way to offer alternatives.
Moving certain services found on Facebook to other sites: having a good opportunities board so people go to another place for EA jobs & volunteer opportunities, moving the editing & review group to the Forum (?), making it easier for people to reach out to each other (e.g. the EA Hub Community directory). Then it may be easier to move whatever is left (e.g. discussions) to a new platform.
Encouraging ~100 active community members to not use Facebook for a week as an experiment and track the outcomes
Make the Forum less intimidating so people feel more comfortable posting (profile pictures? Heart reacts? Embedded discord server or other chat function? Permanent Walled Garden?)
Things I’ll be tracking that might update me towards how possible this is
LessWrong’s experience with the Walled Garden
The EA Hub is improving our Community Directory & introducing some other services in 2021 possibly including 1-1 Matching and an Opportunities Board.
Cause area Slacks
Effective Environmentalism Slack group (not very active right now, but we haven’t done a lot of active efforts to encourage people to use the Slack yet. Might do this later in the year).
IIDM & Progress Studies Slack
Changes in Forum culture over time
If there are any EA groups or subcommunities already moving away from Facebook, please let me know so I can track you :)
I want to emphasise this point, since I think it applies to both new and more experienced members. I personally find it quite a high mental load to actively pay attention to communities on a new platform. Some of these are start-up costs (learning a new interface etc.), but there are also ongoing costs of needing to check the new site, etc. And it is much easier to add something to an existing place I already check.
I don’t think the Forum is likely to serve as a good “group discussion platform” at any point in the near future. This isn’t about culture so much as form; we don’t have Slack’s “infinite continuous thread about one topic” feature, which is also present on Facebook and Discord, and that seems like the natural form for an ongoing discussion to take. You can configure many bits of the Forum to feel more discussion-like (e.g. setting all the comment threads you see to be “newest first”), but it feels like a round peg/square hole situation.
On the other hand, Slack seems reasonable for this!
There is also a quite active EA Discord server, which serves the function of “endless group discussions” fairly well, so another Slack workspace might have negligible benefits.
Another possible reason against might be: In some countries there is a growing number of people who intentionally don’t use Facebook. Even if their reasons for their decision may be flawed, it might make recruiting more difficult. While I perceive this as quite common among German academics, Germany might also just be an outlier.
Moving certain services found on Facebook to other sites: [...], making it easier for people to reach out to each other (e.g. EA Hub Community directory). Then it may be easier to move whatever is left (e.g. discussions) to a new platform.
I think the EA Hub is in a good position to grow and replace some of the functions that Facebook is currently being used for in the community.
Could regular small donations to Facebook Fundraisers increase donations from non-EAs?
The day before Giving Tuesday, I made a donation to an EA Facebook fundraiser that had seen no donations in a few weeks. After I donated, about 3 other people donated within the next 2 hours (well before the Giving Tuesday start time). From what I remember, the total amount increased by more than the minimum amount, and the individuals appeared not to be affiliated with EA, so it seems possible that this fundraiser had somehow been raised to their attention. (Of course it’s possible that with Giving Tuesday approaching they would have donated anyway.)
However, it made me think that regularly donating to fundraisers could keep them on people’s feeds and inspire them to donate, and that this could be a pretty low-cost experiment to run. Since you can’t see amounts, you could donate the minimum amount on a regular basis (say every month or so—about $60 USD per year). The actual design of the experiment would be fairly straightforward as well: use the previous year as a baseline of activity for a group of EA organisations and then experiment with who donates, when they donate, and different donation amounts. If you want to get more in-depth you could also look at other factors of the individual who donates (e.g. how many FB friends they have).
Experimental design
EA Giving Tuesday had 28 charities that people could donate to. Of those, you could select 10 charities as your controls and 10 similar charities (i.e. similar cause, intervention, size) as your experimental group, and recruit 5 volunteer donors per charity to donate once a month on a randomly selected day. They would make the donation without adding any explanation or endorsement.
Then you could use both the previous year’s data and the current year’s control charities to compare the effects. You would want to track whether non-volunteer donations or traffic increased after the volunteer donations (a rough sketch of the group assignment and donation schedule is below).
Caveats: This would be limited to countries where Facebook Fundraising is set up.
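Here is a minimal sketch of how the group assignment and donation schedule could be generated. The charity names, the 12-month duration, and the 1–28 day-of-month range are placeholder assumptions; in practice you would match control and experimental charities on cause, intervention and size rather than splitting purely at random as done here.

```python
import random

random.seed(42)

# Placeholder names standing in for EA Giving Tuesday's 28 charities
charities = [f"Charity_{i:02d}" for i in range(1, 29)]

# Naive random split (a real design would match treatment/control pairs on
# cause, intervention and size); the remaining 8 charities go unused.
random.shuffle(charities)
treatment, control = charities[:10], charities[10:20]

# For each treatment charity, 5 volunteer donors each get a randomly chosen
# day of the month on which to make a minimum donation, for 12 months.
schedule = {
    charity: {
        f"volunteer_{v}": [random.randint(1, 28) for _ in range(12)]
        for v in range(1, 6)
    }
    for charity in treatment
}

print("Treatment:", treatment)
print("Control:  ", control)
print("Example schedule:", schedule[treatment[0]]["volunteer_1"])
```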
Reflections on committing to a specific career path
Imagine Alisha is deciding whether to pursue job X or job Y. She is currently leaning in favor of job X 55% to 45%, so she decides to pursue job X. Over the next couple of years, Alisha gains knowledge and expertise as an Xer, and is passionate about and excited by her job. She’s finding new opportunities and collaborations, and things are going well. But she often wonders if things would have gone even better had she gone with job Y.
I believe that you get a lot more value from committing to one path / area and developing deep expertise & knowledge there, rather than hopping around for too long. There’s a lot of implicit knowledge you gain, and therefore a comparative advantage.
I think it’s hard to see the hidden uncertainties behind lots of (small and large) decisions when you make a decisive choice and ruthlessly prioritize. It’s easy to read more confidence into decisions than there is—partly because it’s just easier to process the world in black and white than in shades of grey.
And it can be really hard to live with those decisions, even once you’ve made them. I think you probably need (to some extent) to shut that part off for some time so you can actually double down and focus on one thing. I struggle with this a lot.
What I want to keep in mind, as a result of this:
Check in with the people whose careers I’ve subconsciously modeled my plans on, and ask how confident they were when they made their pivotal decisions (if they made a pivotal decision at all).
I should expect many people to be uncertain.
I expect many people didn’t have a master plan, but instead took advantage of interesting and good opportunities
I expect the best people are good at switching between exploring and exploiting systematically
I want to develop better ways of switching between explore & exploit, so I don’t worry that I’ll miss something and stay in explore mode longer than I should
I want to introduce a periodic review to help feel better about exploiting (because I know I’ll have an opportunity to course correct)
I want to introduce periodic slack into my system to do exploration as needed
(H/T Steve Thompson for a good conversation that helped me crystallize some of this.)
Project: More expert object-level observations & insights
Many subject matter experts with good object-level takes don’t post on the Forum because they perceive the bar to be too high
Other examples I know personally: founders of impactful charities don’t post regular updates on the progress their organizations are making, lessons they are learning, theory of change updates, how others can help, etc.
People who aren’t naturally writers (e.g. they are doers and more on the ground building partnerships etc)
People who don’t realise they could add value to the community (because they are too busy to spend time in it and notice its biases or weak points)
Hire someone to interview these folks regularly (e.g. a couple every month), find their cool insights, and write up their responses in an engaging way with some nice infographics & pictures from their org or something (e.g. not just interview style). I’ve discussed doing this with @Amber Dawn before, for the folks in my networks who I know have things to say.
If someone wants to fund this, reach out?
Also: Have someone go around at EAGs interviewing people and then writing up all their conversations
This could be a fun project for folks who want to just meet interesting people and learn about what EAs are working on
Do active outreach to these people and offer them 1-1 calls to brainstorm topics, and 2-3 rounds of feedback if they do write something (maybe something for the Forum team?)
More involved things:
Change the culture of the EA Forum so people feel less scared to post on it e.g. by creating “First Drafts”
Make it easy for people to record & upload podcasts or voice notes to the EA forum with autotranscription and maybe taking out the “ums” and “buts” (very low confidence, this could be terrible)
What are the low-hanging fruit or outliers of EA community building?
(where community building is defined as growing the number of engaged EAs who are likely to take medium-to-large sized actions in accordance with EA values and/or frameworks. It could include group activities, events, infrastructure building, resources)
The EA community talks a lot about low-hanging fruit and outlier interventions that are 100x or 1000x better than the next best intervention
It seems plausible that either of these exist for community building
Low hanging fruits
From working in the community building space for the last 2+ years, I have found what I believe are many low-hanging fruits (which are decently impactful) but no extreme outliers that are orders of magnitude more impactful than the next best thing
I think low hanging fruits are relatively neglected areas of community building
The biggest one I have observed is that careers advice outside of 80K’s general scope is very neglected, and within that area interventions are mostly of similar effectiveness (or at least not 100-1000x apart).
What other low-hanging fruit do you think there are?
Extreme Outliers
I would guess that any outlier interventions would fall into one of two categories (which obviously shouldn’t pose undue risk to the community):
An intervention that is moderately to very good at achieving X (where X can be recruitment, education, engagement or retention, see more), but also has the property of scaling very quickly (e.g. a web service, written resource or a skill that could be taught to many group organisers)
An intervention that is very good at recruiting a few extremely engaged, aligned & talented people (the hits-based model, where you have 99% failure and 1% success), or getting them engaged (I imagine there are fewer education or retention interventions here)
Do you know of clearly obvious outlier interventions?
I think introductory fellowships are extreme outlier interventions. EA Philippines’ 8-week Intro to EA Discussion Group (patterned after Stanford’s Arete fellowship) in May-July 2020 was by far our best activity yet. 31 signed up and 15 graduated, and out of the graduates, I believe we’ve created the following counterfactual impact:
One became the president of our student chapter EA Blue
Another became a core team member of EA Blue
Two have since taken the GWWC pledge
Three have become new volunteers (spending ~1-2 hrs/week) for EA Philippines (we actually got two more volunteers aside from these three, but those two I would say were not counterfactual ones)
Helped lead to a few career plan changes (I will write a separate impact report about EA PH’s 2020, and can talk about this more there).
EA Blue is now doing an Introductory Fellowship similar to ours with 26 participants, which I’m a facilitator for, and I think we’re having similarly good results!
I was going to post something for careers week but it was delayed for various reasons (including the mandatory last minute rewrite). I plan to post it in the next couple of weeks.
CGD launched a Global Skills Partnership program to reduce brain drain and improve migration (https://gsp.cgdev.org/)
It would be interesting to think about this from the perspective of EA groups, where brain drain is quite common. Part of their solution is to offer training and recognized certifications to a broader group of people in the home country to increase the overall pool of talent.
I will probably add more thoughts in the coming days when I have time to read the case studies in more depth.
My mistakes on the path to impact by Denise Melchin. Another highly upvoted post talking about the emphasis on working at EA organisations and direct EA work. There were 161 unique upvotes. Resonated comments (1,2,3,4,5)
Effective Altruism and Meaning in Life by extra_ordinary. A personal account of the talent gaps, and why the OP moved away from this because too much of their self-worth was associated with success in EA-related things. 4 comments in support of the post. Resonated comments (1,2). There were 55 unique upvotes.
Is anyone aware of/planning on doing any research related to the expected spike in interest for pandemic research due to COVID?
It would be interesting to see how much new interest is generated, and for which types of roles (e.g. doctors vs researchers). This could be useful to a) identify potential skilled biosecurity recruits b) find out what motivated them about COVID-19 c) figure out how neglected this will be in 5-10 years
I’d imagine doing a survey after the pandemic starts to die down might be more valuable than right now (maybe after the second wave) so that we’re tracking the longer-term impact rather than the immediate reactions.
An MVP version could be just looking at application rates across a variety of relevant fields.
Having done some research on post-graduate education in the past, it’s surprisingly difficult to access application rates for classes of programs. Some individual schools publish their application/admission rates, but usually as advertising, so there’s a fair bit of cherry picking. It’s somewhat more straightforward to access completion rates (at least in the US, universities report this to government). However, that MVP would still be interesting with just a few data points: if any EAs have relationships to a couple relevant programs (in say biosecurity, epidemiology), it may be worth reaching out directly in 6-12 months!
A more general point, which I’ve seen some discussion of here, is how near-miss catastrophes prepare society for a more severe version of the same catastrophe. This would be interesting to explore both theoretically (what’s the sweet spot for a near-miss to encourage further work, but not dissuade prevention policies) and empirically.
One historical example: does a civilization which has experienced a bad famine go on to experience fewer famines in the period that follows, and how long is that period? In particular, this makes me think of MichaelA’s excellent recent post Some history topics it might be very valuable to investigate.
Some thoughts on stage-wise development of moral circle
Status: Very rough, I mainly want to know if there’s already some research/thinking on this.
Jean Piaget, an early childhood psychologist active in the 1960s, suggested a stage-sequential model of childhood development. He suggested that we progress through different stages of development, and that each stage is necessary to reach the next.
Perhaps we can make a similar argument for moral circle expansion. In other words: you cannot run if you don’t know how to walk. If you ask someone to believe X, then X+1, then X+2, this makes some sense. If you jump from X to 10X to 10000X (they may even perceive 10000X as Y, an entirely different thing which makes no sense to them), it becomes a little more difficult for them to adjust over a short period of time.
Anecdotally this seems true of a number of EAs I’ve spoken to who’ve updated towards longtermism over time.
For most people, changing one’s beliefs and moral circles takes time. So we need to create a movement which can accommodate this. Peter Singer sums it up quite well: “there are people who come into the animal movement because of their concern for cats and dogs who later move on to understand that the number of farm animals suffering is vastly greater than the number of cats and dogs suffering and that typically the farm animals suffer more than the cats and dogs, and so they’ve added to the strength of the broader, and as I see more important, animal welfare organizations or animal rights organizations that are working for farm animals. So I think it’s possible that something similar can happen in the EA movement.”
A risk to the movement is that we lose people who could have become EAs because we turn them off the movement by making it too “weird”
Further research on this topic that could verify my hypothesis:
Studying changes in moral attitudes regarding other issues such as slavery, racism, LGBT rights etc. over time, and how long it took individuals/communities to change their attitudes (and behaviors)
My sense is that the idea of sequential stages for moral development is exceedingly likely to be false, and in the case of the most prominent theory of this kind, Kohlberg’s, completely debunked, in the sense that there was never any good evidence for it (I find the social intuitionist model much more plausible), so I don’t see much appeal in trying to understand cause selection in these terms.
That said, I’m sure there’s a rough sense in which people tend to adopt less weird beliefs before they adopt more weird ones and I think that thinking about this in terms of more/less weird beliefs is likely more informative than thinking about this in terms of more/less distant areas in a “moral circle”.
I don’t think there’s a clear non-subjective sense in which causes are more or less weird though. For example, there are many EAs who value the wellbeing of non-actual people in the distant future and not suffering wild animals and vice versa, so which is weirder or more distant from the centre of this posited circle? I hear people assume conflicting answers to this question from time to time (people tend to assume their area is less weird).
I would also agree that getting people to agree to beliefs which are less far from what they currently believe can make them more positively inclined to subsequently adopt beliefs related to that belief which are further from their current beliefs. It seems like there are a bunch of non-competing reasons why this could be the case though. For example:
Sometimes belief x1 itself gives a person epistemic reason to believe x2
Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA) making you more likely to believe x3 which is also associated with that group
Notably none of these require that we assume anything about moral circles or general sequences of belief.
Yeah I think you’re right. I didn’t need to actually reference Piaget (it just prompted the thought). To be clear, I wasn’t trying to imply that Piaget’s/Kohlberg’s theories were correct or sound, but rather applying the model to another issue. I didn’t make that very clear. I don’t think my argument really requires the empirical implications of the model (especially because I wasn’t trying to imply a moral judgement that one moral circle is necessarily better/worse). However I didn’t flag this. [meta note: I also posted this pretty quickly and didn’t think it through much, since it’s a shortform]
I broadly agree with all your points.
I think my general point of x, 10x, 100x makes more sense if you’re looking along one axis (e.g. a class of beings like future humans) rather than all the ways you can expand your moral circle—which I also think might be better thought of as a sphere or more complex shape, to account for different dimensions/axes.
I was thinking about the more concrete cases where you go from cats and dogs → pigs and cows, or people in my home country → people in other countries.
Re the other reasons you gave:
Sometimes belief x1 itself gives a person epistemic reason to believe x2
I think this is kind of what I was trying to say, where there can be some important incremental movement here. (Of course if x2 is very different from x1 then maybe not).
Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
This is an interesting point I haven’t thought much about.
Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA) making you more likely to believe x3 which is also associated with that group
I think this is probably the strongest non-step-wise reason.
Suggestion for EA Forum posts: First Draft
Create a new type of post—a “First Draft” post, with it’s own section “WIP”. Basically like the current collaborative draft mode editing, but public.
This could be a expansion / continuation of the “amnesty day” posts, but more ongoing and more focused on changing the culture of the post.
Looks like a google doc with easy commenting on specific sections, maybe more voting options that have to do with feedback (e.g. needs more structure etc.)
You can give suggestions on what people can post e.g. “Idea bunny” “post outline” “unpolished draft” “polished draft” and give people options on the kinds of feedback they could seek e.g. “copyediting / grammar” or “tone” or “structure” or “factchecking” etc.
Maybe Karma-free, or separate karma score so people don’t worry about how it’ll be taken
Maybe people who give comments and feedback can get some kind of “helper karma” and be automatically tagged when the post is published and get credit of some kind for contributing to common knowledge
Potentially have it be gated in some way or have people opt-in to see it (e.g. so more engaged people opt-in, so it becomes like the Facebook peer-editing group), with regular pushes to get high karma / high engagement forum users (especially lurkers who like and read a lot) to join
Private by default (not searchable on web) but very clear that it’s not private private (e.g. since in practice people can always screenshot and share things anyways)
Feature interesting stories about where first drafts start and the posts they become to encourage usage
Get a bunch of high-status people / active forum folks to post their drafts to get the ball rolling
Hi Vaidehi!
You’ve written us quite the feature spec there. I’m not opposed to ambitious suggestions (at all! for real! though it is true that they’re less likely to happen), but I would find this one if it were written in the style of a user problem. I am un-embarassed to ask you for this extra product work because I know you’re a product manager. (That said, I’d understand if you didn’t want to spend any time on it without a stronger signal from us of how likely we are to act on it.)
Broader statement / use case I could imagine
All claims you could disagree with.
Many EAs primarily experience EA online (both initially and as they progress on their EA journeys).
There are limited opportunities for people to practice EA principles online
The forum is visited by many people
The forum should be a place where people can actively practice EA principles
Specifically, it can be a place collaborative truthseeking happens, but it isn’t really a place for that. Instead, it’s more often a place to share the results of collaborative truthseeking
Truthseeking involves:
Being wrong
Saying dumb / naïve things
Making mistakes
Appearing less intelligent than you are
Asking questions of people
Challenging people (of higher status / position than you)
Saying you were wrong publicly
Not getting defensive and being open to criticism
The forum doesn’t feel like a place where people can do those things today without some degree of reputational / career harm (or unless they invest a lot of time in legibly explaining themselves / demonstrating they’ve updated)
There are poor incentives for people to help each other collaboratively truth-seek on the Forum today. The forum can sometimes feels competitive or critical, rather than collaborative and supportive
I’ve commented previously during the era of Forum competition posts, that it would be nice to recognize people helping each other
Edo makes the nice comment that the strategy session is one of the few forum posting events that’s not explicitly competitive
Nathan proposes Community posts: The Forum needs a way to work in public which is somewhat similar in terms of changing incentives towards collaborative truthseeking
This is a practice users already have (on the product question of whether you’re replacing an existing practice or introducing a new one: it’s easier to replace an existing practice, because the replacement is more likely to be used).
1) There is already a strong culture of sharing your 101 google docs with an ever-changing list of reviewers, commenters etc. I’m sure we’ve all seen at least 20-30 docs like these over time.
2) There are also some coordination attempts, like the Facebook group for editing & review
I think current solutions miss a lot of the value. I think the Forum could do it better:
Better information flow & sharing
Less networking-walling (like paywalling, but you’re limited by your network, not your ability to pay)
Lets more people see and give helpful comments
Lets more appropriate people give better comments (by incentivizing / rewarding help)
Explicitly build the forum as a place where people can feel comfortable exploring the world / being wrong together
Not OP but here are some “user problems” either I have or am pretty sure a bunch of people have:
Lots of latent, locked up insight/value in drafts
Implicitly high standards discourage posting these as normal posts, which is good for avg post quality and bad for total quality
Would want to collaborate on either an explicit idea or something tbd, but making this happen as is takes a bunch of effort
Reduces costs to getting and giving feedback
Currently afaik there’s no market where feedback buyers and sellers can meet—just ad hoc Google doc links
In principle you can imagine posts almost being written like a Wikipedia page: lots and lots of editors and commenters contributing a bit here and there
Here’s a post of mine that should be a public draft, for example. But as things stand I’d rather it be a shitty public post than a probably-perpetual private draft (if anyone wants to build on it, go to town!)
+1 to all of this also.
I’ve heard a new anecdote from someone who’s actively working on an AI research project and feels less connected to relevant people in their domain who could give feedback on it.
(also would love to hear what doubts & hesitations you have / to red-team this idea more—I think the devil’s definitely in the details and there are lots of interesting MVPs here)
hehe you know I like to give y’all some things to do! Would be interested to know how likely you’d be to act on it, but also happy to expand since it’s not a big lift. Not linking to all the stuff I mention, to save some time.
Here’s my hypothesis of the user problem:
Personal statement (my own case)
I often want to write a post, but struggle to get it over the line, so it remains in a half-baked state, not shared with people
I want to find the “early adopters” of my posts to give me lots of feedback when my personal circles may not have the right context and/or I don’t know who from my personal circles is available to look at a draft.
(There’s a big cognitive load / ugh field / general aversion here, e.g. whenever you have to curate a list of people for something like inviting them to an event or asking for favors.)
Sometimes it can be good to release things into the world even if they are low quality because then they’re out of your system and you can focus on other, more valuable ideas.
Personal Experiment: I’ve posted a draft on Twitter and got some of this effect (where people not on my radar read and engage with stuff). This is mostly really good.
But, as might be obvious, it’s really not a good forum for sharing thoughts longer than 2 paragraphs.
Sometimes I don’t know what posts or ideas will resonate with people, and it’s nice to find that out early. Also, I am better able to take feedback when I haven’t invested a bunch of time in polishing and editing a draft
I also just want to share a lot of thoughts that I don’t think are post-level quality but are also a bit more thought through than shortforms (without losing the benefit of in-line commenting, suggest mode—essentially, the UX of google docs)
Sadly I’ve been informed this is a pathological case for the pricing model of our collaborative editor SaaS tool.
:(
I found this post by Rob Bensinger of anonymous comments on EA from 2017, with the question prompt:
Many still resonate today. I recommend reading the whole list, but there are a lot—so I’ve chosen a few highlights and comment exchanges I thought were particularly interesting. I’ve shortened a few for brevity (indicated by ellipses).
I don’t agree with many of these comments, but it’s interesting to see how people perceived things back then.
Highlights
On supporting community members
Related: Should the EA community be cause-first or member-first?
#28 - on people dismissing those who start as “ineffective altruists” (top voted comment with 23 karma)
#40 - the community should be better at supporting its members
#22 - on a lack of mentorship and guidance
Reply from Daniel Eth:
On conformism / dogmatism
#1 - on conformism & CFAR committing to AI risk
#27
#39 - on EA having picked all the low-hanging fruit
#32 - on the limitations of single orgs fixing problems
#32b) - on EA losing existing and failing to gain new high-value people
On the Bay Area community
#18 - on improving status in the Bay Area community so people feel less insecure
#8 - move EA to somewhere that’s not the Bay Area
Related: Say “nay!” to the Bay (as the default)!
#34 - EA is unsophisticated regarding policy
See:
On newcomers
#3 - talking about MIRI to newcomers makes you seem biased
#31 - make EA more welcoming to newcomers
#5 - lack of good advice to newcomers beyond donate & advocate
#23 - on bait & switch, EA as principles, getting an elite team at meta orgs like CFAR / CEA
Rob Bensinger replied:
Nick Tarleton replied:
Anonymous #23 replied:
Julia Wise of CEA replied:
This meme about ‘being the ones who show up’ is not something I’d heard before, but it explains a lot.
Reflecting on the question of CEA’s mandate, I think it’s challenging that CEA has always tried to be both of the following, and this has not worked out well:
1) a community org
2) a talent recruitment org
When you’re 1) you need to think about the individual’s journey in the movement. You invest in things like community health and universal groups support. It’s important to have strong lines of communication and accountability to the community members you serve. You think about the individual’s journey and how to help address the issues they face. (Think your local Y, community center or church.)
When you’re 2) you care about finding and supporting only the top talent (and by extension actors that aid you in this mission). You care about having a healthy funnel of individuals who are at the top of their game. You care about fostering an environment that is attractive (potentially elite), prestigious and high status. (Think Y Combinator, Fulbright or Emergent Ventures fellows.)
I think these goals are often overlapping and self-reinforcing, but also at odds with each other.
It is really hard to thread that needle well—it requires a lot of nuanced, high-fidelity communication—which in turn requires a lot of capacity (something historically in short supply in this movement).
I don’t think this is a novel observation, but I can’t remember seeing it explicitly stated in conversation recently.
This has been discussed regarding intro fellowships:
I think the combination of 1 and 2 is such that you want the people who come through 1 to become people who are talented and noted down as 2. We should be empowering one another to be more ambitious. I don’t think I would have gotten my emergent ventures grant without EA.
(Pretty confident about the choice, but finding it hard to explain the rationale)
I have started using “member of the EA community” vs “EAs” when I write publicly.
Previously I cared a lot less about using these terms interchangeably, mainly because referring to myself as an EA didn’t seem inaccurate, it’s quicker, and I don’t really see it as tying my identity closely to EA, but over time I have changed my mind for a few reasons:
Many people I would consider “EA” in the sense that they work on high impact causes, socially engage with other community members etc. don’t consider themselves EA, but I think would likely consider themselves community members. I wonder if they read things about what “EAs” should do and don’t think it applies to them.
Using the term “an EA” contributes to the sense that there is one (monolithic?) identity that’s very core to a person’s being. E.g. if you leave the community do you lose a core part of your identity?
Finally, it also helps me be specific about the correct reference class. E.g. compare terms like “core EAs” with “leaders of EA-aligned organisations” or “decision makers at leading EA meta organisations” or “thought leaders of the EA community”. (There is also a class of people who don’t directly wield power but have influence over decision makers; I’m not sure what a good phrase to describe this role is.)
Interested in thoughts!
I started defaulting to saying “people trying to do EA”—less person-focused, more action-focused
This is reasonable, but I think the opposite applies as well. i.e. people can be EA (committed to the philosophy, taking EA actions) but not a member of the community. Personally, this seems a little more natural than the reverse, but YMMV (I have never really felt the intuitive appeal of believing in EA and engaging in EA activities but not describing oneself as “an EA”).
There are people who I would consider “EA” who I wouldn’t consider a “community member” (e.g. if they were not engaging much with other people in the community professionally or socially), but I’d be surprised if they label themselves “EA” (maybe they want to keep their identity small, or don’t like being associated with the EA community).
I think there’s actually one class of people I’ve forgotten—which is “EA professionals”—someone who might professionally collaborate or even work at an EA-aligned organization, but doesn’t see themselves as part of the community. So they would treat an EAG as a purely professional conference (vs. a community event).
Fwiw, I am broadly an example of this category, which is partly why I raised the example: I strongly believe in EA and engage in EA work, but mostly don’t interact with EAs outside professional contexts. So I would say “I am an EA”, but would be less inclined to say “I am a member of the EA community” except insofar as this just means believes in EA/does EA work.
I also try not to use “EA” as a noun. Alternatives I’ve used in different places:
“People in EA” (not much better, but hits the amorphous group of “community members plus other people who engage in some way” without claiming that they’d all use a particular label)
“People practicing EA” (for people who are actually taking clear actions)
“Community members”
“People” (for example, I think that posts like “things EAs [should/shouldn’t] do” are better as “things people [should/shouldn’t] do” — we aren’t some different species, we are just people with feelings and goals)
Resources for feeling good about prioritising
Note: This is collected from a number of people in an EA facebook group that I found in my google drive. I figured it was worth posting up as a shortform in case others find it valuable.
Tips
Delegation to those with more time/suitability
Don’t have un-prioritised tasks on your to-do list. Put them somewhere else out of sight—if they’re a good idea, into a ‘some day’ pile; if they’re not very important, bin them (or delegate)
I keep my actual “To Do” list v. small these days and don’t agree to do anything outside of it, but I do put a lot of stuff on a list I call “Ideas” (which is still roughly prioritised). So then I have achievable goals and anything else feels like a bonus.
On saying no but not wanting to offend: I think that saying I’m too busy for what they’ve requested and offering something much smaller instead has helped me. Seems to be a good deal in terms of time saved and how offended they are. (This was when I gave up on my plan of just caring much less about upsetting people...too difficult!)
For regular things, I see two angles to work on: saying no more before committing, and letting the things you’ve already got on go / be less sticky.
For saying no: I actually had a really bad case of this—getting excited about something in the moment, committing/investing in it, and regretting it later. It was really bad, and after one regretful experience I made myself a form which I had to fill out before I could take on any new project, asking various questions about the project and how it connected to my long-term (year-scale) goals. I can forward you this form if you like. I only used it a few times but I think it helped. Curiously, just the process of making the form helped _a lot_ in hindsight (probably some self-signalling thing; not sure if it works as well if you’re conscious of it).
For letting go… what makes it hard to let go of things for you? Here is a list of things that make it hard for me and some possible solutions:
- I want the thing to happen and I’m scared it won’t if I pull out (delegate, communicate and encourage others to take more ownership)
- I’m scared the chance will be gone later if I don’t take it now (journal about it, is it really true, is it really that bad if I miss it, can I achieve the same goal another way)
- That I’ll let someone down, or a feeling that that’s the case (check with the person)
For one-off things:
Are the sources of these often the same or different? If the same, maybe talking to the source so they can help you filter would be good. If different, it sounds like that happens by the nature of your role—is that the role you want to be in? Is there a way to triage/filter a bit first (again, maybe someone else can do that)?
As time goes on you might want to move the bar for how good an opportunity should be before you take it on. For example, maybe you’re happy to give talks for good causes but now you’re getting more requests. Assuming you don’t want to go above a certain amount of giving talks, you’ll have to say no to good causes in order to say yes to great ones, or otherwise you’ll end up eating more time than you wanted to on giving talks. You can also pass on the good but not great ones to others (although this also takes some time).
Books
“When I Say No, I Feel Guilty: How to Cope, Using the Skills of Systematic Assertive Therapy.”
Sustainable motivation | Helen Toner | EA Global: San Francisco 2019
Essentialism: The disciplined pursuit of less. (Short Summary)
A very short blog post by the author about declining without harming the relationship
Rise by Patty Azzarello. The purpose of the book is to talk about career planning and how to be a good manager, but to do that it also talks about HOW to work in Part 1. Many of the things she wrote about resonated with me because I have tried to just push through an unrealistic workload instead of thinking more strategically. It’s also written in a nice and short way without too much fluff around it. And I found the advice to be actionable. Not just “Prioritise better!” but instead: “Here is the process of figuring out your ruthless priorities”
Article: Learn When to Say No and podcast: The Subtle Art of Saying No
Thank you so much for this—I found it surprisingly comprehensive given its brevity. I especially appreciate you outlining the various ways in which you address the motivations behind something being hard to let go, which feel more concrete than some advice I’ve come across.
I would be really interested in taking a peek at the form. : ) Delegating more is something I’m working on and I feel like I’m slowly becoming better at it, but clearly still not good enough since I continue to burn out.
Hey Miranda! Actually this was a collection of other people’s (pretty cool) responses so sadly I don’t have the form :(. Agreed that delegating is hard!
Let me see if I can ask the original commenter—I definitely think it would be valuable!
Oh, my bad, I must’ve misread. Thank you!
Curious if people have tried to estimate the cost of burnout.
The things I think we should care about, but without numbers or estimates:
How much does burnout directly reduce productivity?
E.g. 6-12 months on average; the longer the grind, the longer the burnout
The long-term reduction in total max capacity over time, or something like that
E.g. let’s say you had 60K hours before burnout; after, you have more like 40K because you just can’t work as hard.
How much does burnout increase the likelihood that the person doesn’t pursue a high-impact career (i.e. leaves direct work roles)?
What effect does burnout of a person have on their EA network (e.g. their colleagues, friends etc.?)
E.g. if they’re on a team, it could marginally increase the chance other team members burn out because they now have more work (+ creating a negative associations to work)
e.g. their friends & local community might have a more negative view of the community as one where your friends burnout
others?
Collection of Constraints in EA
Dealing with Network Constraints (My Model of EA Careers) (2019) by Ray Arnold.
EA is vetting-constrained (2019) by Toon Alfrink
EA is risk-constrained (2020) by Edo Arad. The post makes the claim that EAs in general are risk-averse on an individual level, which can restrict movement-level impact.
The career coordination problem (2019) by Mathias Kirk Blonde. Short account of how the EA operations bottleneck was managed and a suggestion to expand career coaching capacity.
EA is talent constrained in specific skills (2018) by 80,000 Hours.
Improving the EA Network (2016) by Kerry Vaughn. Discusses coordination constraints and makes the case for working on network improvement.
Related posts:
The Case for the EA Hotel by Halffull. Kind of a summary of the above constraints, explaining how the EA hotel could fill the need for the lack of mobility in the middle (what OP calls a “chasm”), trying to explain the vetting and talent constraints in the EA community. The first part is especially useful for outlining this underlying model.
Which community building projects get funded? By AnonymousEAForumAccount. It raises an important question, but I (Vaidehi) think the analysis misses the important questions. I’ve built off the original spreadsheet with categories here.
(very) Quick Reflections on EAG 2021
Most of these are logistical / operational things on how I can improve my own experience at EAG
Too much talking / needing to talk too loudly
Carry an info card / lanyard which has blurbs on my role, organisation & the projects I want to talk to people about and ask them to read it before we start our 1-1. (This is probably a little suboptimal if you need to actively get people super pumped about your ideas)
More walking conversations in quiet areas. This year the EAG had a wonderful second location with an indoor conservatory that was peaceful, quiet and beautiful. Everyone I brought there really liked it because it was a refreshing break from the conference. If there isn’t something like this in future EAGs I’ll try to find a good ~25 minute walking route on the first day of the conference.
Shop talk vs socializing during 1-1s
I feel quite conflicted on this
On one hand, it makes sense especially when doing more open-ended networking (e.g. this person does climate stuff, I feel it’s useful to meet them but am not sure how exactly it would be useful). Hopefully, the info card saves some time. On the other hand, shop talk is absolutely exhausting and I feel that I’m not at full capacity for at least 50% of EAG.
It’s sometimes hard to know which people you should meet in person if there are say 20 all equally good people but you only have time to meet 5 in person. Don’t know if there’s a good decision-making heuristic there
If I had to guess, it’s more important to meet people less connected to the community / at the first EAG
And sometimes more engaged people you haven’t met just to establish an in-person connection
Maybe 5-10 minute connections can just be a good thing to do to create a touchpoint with people
I would basically have all conversations be socializing, and then reach out to the shop talk people after EAG to set longer and more intentional discussions that are probably going to be more productive.
Make new EAs feel more comfortable
I was very impressed with the EAG newbies I met—they were about 10x more organized and proactive than I was at my first EAG.
But because they were doing all this AND it was their first EAG, they were also (very!) nervous / anxious. While I tried to help them feel more at ease, I don’t think it helped much.
I feel like this is probably a very difficult issue to make traction on individually, but I want to reflect more on this.
An official conference length of 2 days isn’t enough time.
I don’t think anything can be done on this front, but thought I’d mention it anyways
Ideally I’d like to have 2 days of conference and 2-3 days before/after for more relaxed socializing
Mini Collection—Non-typical EA Movement Building
Basically, these are ways of spreading EA ideas, philosophies or furthering concrete EA goals in ways that are different from the typical community building models that local groups use.
EA for non-EA People: External Movement Building by Danny Lipsitz
Community vs Network by David Nash
Question: What values do EAs want to promote?
Focusing on career and cause movement building by David Nash
Better models for EA development: a network of communities, not a global community by Konrad. Proposes network builders vs community builders because EA cannot be sustainable as a community alone. Suggests dividing up these responsibilities amongst the different community building orgs. Clarifies some commonly used terms.
Suggestions welcome!
This quote from Kelsey Piper:
How valuable is building a high-quality (for-profit) event app for future EA conferences?
There are ~6 EAG(x) conferences a year. This number will probably increase over time, and more conferences will come up as EA grows—I’d expect somewhere between 80-200 EA-related conferences and related events in the next 10 years. This includes cause-area specific conferences, like Catalyst, and other large events.
A typical 2.5-day conference with on average ~300 attendees spending 30 hours each = 9,000 man-hours, which comes to a range of 720,000-1,800,000 man-hours over 10 years. Of this time, I’d expect ~90% to be taken up by meetings, attending events, eating etc., leaving ~72,000-180,000 hours that an app could plausibly affect. Saving 10% of that (i.e. ~1% of total conference time) would be roughly 7,200-18,000 hours, which seems pretty useful! (See the rough sketch below.)
For reference, 1 year of work (a 40-hour work week for 50 weeks) = 2,000 hours.
Pricing estimate if we pay for an event conferencing app: Swapcard, recently used by CEA for EAGx events costs approximately USD$7 per user.
Using my previous estimate, the total cost over 10 years would be between USD $168,000-420,000 without any discounting. Discounting 50% for technology becoming cheaper, and charity discounts, we could conservatively say $84,000-$210,000 total cost.
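To make the arithmetic above easy to check or tweak, here’s a minimal Python sketch reproducing it (all figures are the assumptions stated above, not new data; the “time saved” line assumes the app improves ~10% of the affectable 10% of time, i.e. ~1% of total conference time):

```python
# Rough sketch of the BOTEC above; every number is an assumption from the post.
LOW_CONFS, HIGH_CONFS = 80, 200   # EA-related conferences over 10 years
ATTENDEES = 300                   # average attendees per conference
HOURS_PER_ATTENDEE = 30           # hours each attendee spends at a conference
APP_AFFECTABLE_SHARE = 0.10       # share of time an app could plausibly affect
TIME_SAVED_SHARE = 0.10           # assumed savings within that share (~1% of total)
COST_PER_USER = 7                 # USD, Swapcard-style per-user pricing
DISCOUNT = 0.5                    # tech getting cheaper + charity discounts

for n_confs in (LOW_CONFS, HIGH_CONFS):
    total_hours = n_confs * ATTENDEES * HOURS_PER_ATTENDEE
    hours_saved = total_hours * APP_AFFECTABLE_SHARE * TIME_SAVED_SHARE
    cost = n_confs * ATTENDEES * COST_PER_USER
    print(f"{n_confs} conferences: {total_hours:,.0f} total hours, "
          f"~{hours_saved:,.0f} hours saved, "
          f"${cost:,.0f} full price / ${cost * DISCOUNT:,.0f} discounted")
```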
Not sure what to do with this information, or how to compute the value of this money saved (assuming our benevolent EA ally / app creator gives us access for a heavily discounted price, otherwise the savings are not that important).
Given the pandemic, I would actually upgrade the potential cost effectiveness of this, because we can now add Student Summits and EAGxVirtuals as potentially regular events, bringing the total in a non-COVID year to up to 8 events.
Hm I think Swapcard is good enough for now, and I like it more than the Grip app. I think this comes down to what specific features people want in the conference app and why this would make things easier or better.
Of course it would be good to centralize platforms in the future (i.e. maybe the EA Hub also becomes a Conference platform), but I don’t see that being a particularly good use of time.
+1 the math there. How does building an app compare to throwing more resources at finding better pre-existing apps?
I’ll just add I find it kind of annoying how the event app keeps getting switched up. I thought Grip was better than whatever was used recently for EAGxAsia_Pacific (Catalyst?).
I think CEA has looked at a number of apps—it would definitely be worth checking with them to see how many apps they’ve considered out of the total number available, and possibly following the 37% rule.
It seems plausible, though overall not that likely, to me that maybe the LessWrong team should just build our own conference platform into the forum. We might look into that next year as we are also looking to maybe organize some conferences.
That would be interesting! I’d be interested to see if that happens—I think there are probably a benefits from integration with the LW/EA Forum. In what scenario do you think this would be the most likely?
I think it’s most likely if the LessWrong team decides to run a conference, and then after looking into alternatives for a bit, decides that it’s best to just build our own thing.
I think it’s much more likely if LW runs a conference than if CEA runs another conference, not because I would want to prioritize a LW conference app over an EAG app, but because I expect the first version of it to be pretty janky, and I wouldn’t want to inflict that on the poor CEA team without being the people who built it directly and know in which ways it might break.
Quick BOTEC of person-hours spent on EA Job Applications per annum.
I created a Guesstimate model to estimate that a total of ~14,000 to 100,000 person-hours, or ~7 to 51 FTE, are spent per year (90% CI). This comes to an estimated USD $320,000 to $3,200,000 of unpaid labour time. (There is also a rough sketch of the structure of the estimate after this list.)
All assumptions for my calculations are in the Guesstimate
The distribution of effort spent by candidates is heavy-tailed; a small percentage of candidates may spend 3 to 10x more time than the median candidate.
I am not very good at interpreting the Guesstimate, so if someone can state this better / more accurately, that would be helpful.
Keen to get feedback on whether I’ve over/underestimated any variables.
I’d expect this to grow at a rate of ~5-10% per year at least.
Sources: My own experience as a recruiter, applying to EA jobs and interviewing staff at some EA orgs.
Edited the unpaid labour time to reflect Linch’s suggestions.
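For readers who don’t want to open the Guesstimate, here is a minimal Monte Carlo sketch of the *structure* of this kind of estimate. To be clear, the distribution parameters are illustrative placeholders I’ve made up (except the 75% unpaid share from the post and the rough $20-30/hour valuation discussed in the comments below), not the numbers in the actual model:

```python
# Illustrative re-creation of the structure of the estimate (Monte Carlo).
# Distribution parameters are placeholder guesses, NOT the actual model's values.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

applicants_per_year = rng.lognormal(np.log(2_000), 0.5, N)  # placeholder
hours_per_applicant = rng.lognormal(np.log(15), 0.7, N)     # placeholder (heavy-tailed)
hourly_value_usd = rng.normal(25, 3, N)                     # ~$20-30/hr, per the comments
unpaid_share = 0.75                                         # share of hours unpaid (from the post)

total_hours = applicants_per_year * hours_per_applicant
unpaid_value = total_hours * unpaid_share * hourly_value_usd

lo_h, hi_h = np.percentile(total_hours, [5, 95])
lo_v, hi_v = np.percentile(unpaid_value, [5, 95])
print(f"Person-hours/year (90% CI): ~{lo_h:,.0f} to {hi_h:,.0f} "
      f"(~{lo_h/2000:.0f} to {hi_h/2000:.0f} FTE)")
print(f"Unpaid labour value (90% CI): ~${lo_v:,.0f} to ${hi_v:,.0f}")
```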
I think valuing the time as a normal distribution between $20-30/hour is too low; many EA applicants counterfactually have upper-middle-class professional jobs in the US.
I also want to flag that you are assuming the time is unpaid, but many EA orgs do in fact pay for work trials. A “trial week” especially should almost always be paid.
Hi Linch, thanks for the input!
I’ll adjust the estimate a bit higher. In the Guesstimate I do discount the hours to say that 75% of the total hours are unpaid (trial week hours come to 5% of the total hours).
I did not review the model, but only 75% of hours being unpaid seems much too low based on my experience having gone through the job hiring process (including later stages) with 10-15 EA orgs.
Okay, so I used a different method to estimate the total man-hours and my new estimate is something like 60%. I basically assumed that 50% of rounds 2-4 in the application process are paid, along with 100% of the work trial.
I expect that established / longtermist orgs are disproportionately likely to pay for work tests, compared to new or animal / GH&D orgs.
I think Josh was claiming that 75% was “too low”, as in the total % of unpaid hours being more like 90% or something.
When I applied to a bunch of jobs, I was paid for ~30 of the ~80 hours I spent (not counting a long CEA work trial — if you include that, it’s more like 80 out of 130 hours). If you average Josh and I, maybe you get back to an average of 75%?
*****
This isn’t part of your calculation, but I wonder what fraction of unique applicants to EA jobs have any connection to the EA community beyond applying for one job?
In my experience trying to hire for one role with ~200 applicants, ~1/3 of them neither had any connection to EA in their resumes nor provided further information in their applications about what drew them to EA. This doesn’t mean there wasn’t some connection, but a lot of people just seemed to be looking for any job they could find. (The role was more generic than some and required no prior EA experience, so maybe drew a higher fraction of outside applicants.)
Someone having no other connection to the EA community doesn’t mean we should ignore the value of their time, and the people who apply to the most jobs are likely to have the strongest connections, so this factor may not be too important, but it could bear consideration for a more in-depth analysis.
PROPOSAL: EA Career Coaches—quick evaluation
Experimenting to see what kind of feedback this gets and whether it’s useful to share very early stage thoughts publicly. If anyone is interested or already exploring this topic, feel free to reach out—I have written up a (slightly more in-depth) proposal I can share.
Problem: There might be many people in EA that could benefit from career coaching.
Size: I estimate ~300 to 1,000 (60% CI) people might be underconfident or less ambitious than they could be. 80K frequently mentions underconfidence. These are people with basic intro knowledge who are unable to make career progress due to miscalibration, lack of knowledge, negative feelings associated with networking / applying, etc.
Tractability: Career coaches are very common & help people become confident and land dream jobs / improve their current career situation.
Neglectedness: Seems unlikely to me that existing coaches cover this need. I am also fairly confident that existing groups / CBs do not cover this need.
Proposal: An EA career coach who is:
Improving clients’ calibration on good-fit positions (both EA & non-EA)
Giving practical advice & guidance (e.g. resumes, interviews, long-term career planning)
Providing encouragement & boosting self-confidence
Helping clients maintain a healthy relationship to the job hunt / reduce stress, e.g. by providing unique insights into the EA job landscape
Key uncertainties (* = most uncertain):
Is there demand for this idea, and how big?*
Is that demand already being met?
Is someone working on this going to significantly improve the chances that an individual makes positive progress in their career path?*
Are there people or groups able and/or willing to provide this service?* (Sub-question if people are willing but not able: are the skills easily trainable?)
Would people reach out for such a service (and be willing to pay) on their own?*
Next steps (5-20 hrs):
Talk to people (have emailed some EA coaches)
Run experiments to gauge demand, e.g. an interest form for online career events (e.g. a resume writing workshop) or starting a peer-to-peer career support group
An incomplete list of movement building terms
I plan to keep adding and refining this list over time, I’ve just put my current list here in case anyone else has ideas.
Movement Building: Any work to build a movement including but not limited to: community infrastructure, community building, cause, field or discipline development and outreach. Movement building does not need to involve using or spreading the EA brand.
Movement building is a subset of “Meta EA” or “meta altruism”.
Types of Movement Building:
Community Infrastructure: The development of community-wide products and services that help develop the community. Online examples include building wikis, forums, tools and websites. Offline examples include conferences, community houses, and regional networks.
Note: Some community infrastructure may be limited to certain subgroups within the community, such as events and services for leaders or affiliated organisations. Such events might still provide benefits to the wider community, especially when they improve coordination and communication, and where relevant should be considered as infrastructure.
Community Building: Influencing individuals to take actions based on the ideas and principles of the EA movement. This is often accomplished through the development of groups (local & online) organised by geography, shared interests, careers, causes and more. Local groups are the most common, but certain locations (e.g. “hub” cities like London) may also have subgroups based on cause or career.
Field or Discipline Development: Developing new or influencing existing academic disciplines or fields through the creation of new organisations, advocacy or funding academics to work in this field. Closely related to Professionalization.
Network Building: Developing the EA network to include non-EA actors, organisations and communities. See Community vs Network by David Nash and EA for non-EA People: External Movement Building by Danny Lipsitz.
Professionalization: Giving an occupation, activity, or group professional qualities. This can be done by creating a career out of, increasing the status of, raising the qualifications required for, or improving the training given for an occupation, activity or group.
Other terms
CEA’s Funnel Model: A model of movement building which focuses on the different stages of involvement people have with EA, based off of corporate sales funnel models.
Community: A group of people connected by a set of shared values, norms, or beliefs.
Alternative definition: “A community is a group of humans with a shared identity who care for each other.”—Konrad
Ideology: A set of ideas and beliefs that represent a particular worldview.
Network: A group of people with varying degrees of connection to each other.
Organic Movement Growth: Movement growth that occurs organically, without explicit intentions (other than perhaps very broad actions like mass-targeted publications).
Social Movement: A group of people working to achieve a goal or set of goals through collective action. Differentiated from an intellectual movement because of the specification and emphasis on concrete actions.
Status: This was a post I’d drafted 4 years ago on climate change in EA. Not sure I stand by all of it, but I thought it might be worth sharing.
Let’s make room for climate change
What this post is NOT saying:
* depriortise other cause areas
* redirect significant resources from other cause areas
* the average EA should go into climate change or become a climate scientist/advocate
* the average EA has comparative advantage in climate change
What this post IS saying
* having an EA org or projects related to climate change will be beneficial to EA
* certain climate solutions will be more tractable
* climate change as a cause area is much less uncertain than other cause areas, and interventions are also less uncertain
* get funding for this through funds outside of or adjacent to EA—funds which, counterfactually, would not have gone elsewhere
* treat climate change seriously as a GCR/x-risk multiplier
* show that EA has done its homework on climate change (whatever the results of that homework may be)
* attract people who are experts in the field to work on these issues (not redirect EA talent towards climate change)
Summary
This post is calling for the EA movement to become more climate change friendly. That is, to create space for potentially new orgs or EA-aligned projects working on climate change. This does not mean redirecting major resources away, but rather facilitating the inclusion of climate change in order to redirect non-EA resources into EA. In the best case scenario, this helps broader norm change in the nonprofit landscape towards funding more evidence-backed and rigorous nonprofits. Potentially, this could also get more funding into different cause areas as donors are exposed to EA ideas (a low but not insignificant chance).
While the movement as a whole should focus on the best solution on the margin, there will likely never be enough jobs for everyone, so focusing on other cause areas will help multiply counterfactual impact. Rather than leaving the movement or working outside of it, these individuals would still be thinking in EA terms, and it is possible that people will understand that.
The main reasons for this are because:
· The current likelihood of climate change is
· Climate Change is an x-risk multiplier. It increases the chances of almost all known x-risks. This needs to be modeled by someone better at modeling, but here are some plausible scenarios:
* Increased chance of civil wars
* Increased chance of nuclear wars
* Increased chance of biological disasters
· Climate change itself may not cause human extinction, but it could non-negligibly set humanity back by hundreds, if not thousands, of years
* destabilizing current institutions and states
* in the event of a large part of the human population being killed, we would lose cultural knowledge and the benefits of a fully globalized economy
* it all depends on your time scale. if your time scale is millions of years
* it also depends on whether you care more about avoiding negative futures or more about 0 humans—the difference between centuries of low-quality lives vs 0 humans forever. Low quality definitely seems worse, even if humanity eventually recovers, depending on the intensity of the bad lives post-apocalypse
· Climate Change is not funding constrained. A climate change EA org would easily be able to find funding elsewhere, thus not diverting EA funds or changing the current EA climate. It’s possible that, just due to the sheer size of the funding landscape, good climate change organizations may be orders of magnitude more effective than other causes. Redirecting those resources will be relatively easy.
· Climate Change is not talent constrained. Current EAs who don’t have experience in climate change will not need to switch career paths. Those who do can make use of their comparative advantage. I predict that if we signal an acceptance of climate change, we will attract leading experts in the field due to EA’s track record and the lack of an effectiveness-focused climate change org
· Many climate change efforts are exceptionally ineffective—saying it’s not neglected doesn’t mean it’s not neglected in the right places
· EA does not have a thorough understanding of climate change; we have based our understanding off a handful of posts written by people with varying levels of familiarity with the cause. Ben’s post
· However there is a wealth of information and expertise which means that this is one of the few x-risk multipliers we actually have a good sense about
· We are more certain about the potential solutions for climate change than other cause areas, which means that direct work will be focused, quick and efficient.
· Climate change is analogous to EA because it’s
* scientific and evidence based
* considers generations beyond our own
· Climate Change is an easy cause to do experiments in—to practice leading projects, skill-building and so on—because it is well-known and widespread. Even if it is less effective than other interventions, it might be easier to complete and execute such projects
· Climate change has synergies with other EA cause areas like animal advocacy. Each can strengthen the case for the other. Farmed animals are a significant source of GHG emissions, and this number is projected to rise in developing nations
Meta-level thought:
When asking about resources, a good practice might be to mention resources you’ve already come across and why those sources weren’t helpful (if you found any), so that people don’t need to recommend the most common resources multiple times.
Also, once we have an EA-relevant search engine, it would be useful to refer people to that even before they ask a question in case that question has been asked or that resource already exists.
The primary goal of both suggestions would be to make questions more specific, in-depth and hopefully either expanding movement knowledge or identifying gaps in knowledge. The secondary goal would be to save time!
Some ideas for improving or reducing the costs of failure transparency
This is an open question. The following list is intended to be a starting point for conversation. Where possible, I’ve tried to make these examples as shovel-ready as possible. It would be great to hear more ideas, or examples of successfully implemented things.
Thanks to Abi Olvera, Nathan Young, Ben Millwood, Adam Gleave & Arjun Khandelwal for many of these suggestions.
Create a range of space(s) to discuss failure of any size.
I think the explicit intention of helping the community and providing relevant information is probably important to avoid goodharting.
Questions that could help you determine how valuable this mistake is to the wider community:
How generalizable was the failure?
(trying) to separate personal faults from external factors
What projects could you have done instead of this one?
Would you do the project again? (was it worth it)
Do you think your evaluation of the project would be the same as that of 1) someone working on it with you, 2) a funder, 3) recipients? How might they differ?
It would be especially valuable to have high-profile members of the EA community do this, since they have relatively less status to lose.
Note that these spaces don’t have to all be public!
At EA conferences (this is more for signalling / setting norms)
A regular / semi-regular “Failed Projects” or “Things I changed my mind on” or “evolution in my thinking” kind of panels at EA Globals and other conferences
Asking EA public figures questions about failure at talks
At EA conferences or local groups: Events, workshops or meet-ups for people to share their thinking, changes in their thinking and mistakes and reflect on them together, collaboratively
Create committee(s) to evaluate failed projects
For larger projects with bigger stakes, it seems valuable to invest more resources into learning from it
Interviews with stakeholders & reading relevant documents & outputs
Aim to create a neutral, fair report which creates an accurate map of the problem
It seems plausible the EAIF would fund something like this
Pay grantees to follow-up on their projects
Could funders offer to pay an additional X dollars to grantees to get them to write up reflections or takeaways from their projects, successful or not? (This is probably more valuable for people being funded to work on very different kinds of projects, and who wouldn’t otherwise write them—e.g. not established organisations who’d spend time writing an annual report anyways)
Anonymous Mistake Reporting
Have a call for anonymous reports of failures that people might not want to report publicly (either their own or others)
Idea: EA Library
Many colleges and universities have access via their libraries to a number of periodicals, papers, journals etc. But once you graduate, you lose access
With something like sci-hub we don’t really need access to many things on the academic side.
But it seems valuable for EAs not to be pay-walled from various journals or news outlets (e.g. Harvard Business Review or the Wall Street Journal) if they want to do research (if there’s a sci-hub for that kind of content, that could also work!)
We could probably develop this in a way where there are natural barriers to using it (e.g. core group of people are past EAG attendees, new members must be invited by at least 1 “core” member).
I have no clue what the cost for something like this would be, but it could be pretty easy to figure out by speaking to a university librarian or two! (I imagine probably in the ~$10,000-$100,000 range per year?)
How important is it to measure the medium term (5-50 years) impact of interventions?
I think that taking the medium-term impact into account is especially lacking in the meta space, since building out infrastructure is exactly the kind of project that could take several years to set up with little progress before gains are made.
I’d also be interested in how many /which organisations plan to measure their impact on this 5-50 year timescale. I think it would be very interesting to see the impact of various GH&D charities on a 5 or 10 year timescale.
A Typology of EA Careers Advice
The Local Career Advice Network recently completed a pilot workshop to help group organisers develop and implement robust career 1-1 strategies. During this process we compiled all existing EA careers advice & strategy, and found several open questions. This post provides an overview of the different kinds of careers research one could do. We will write more posts trying to explain the value of the different kinds of research.
Movement-level research
This research identifies bottlenecks in top causes and makes recommendations on specific actions individuals can take to address them.
Risks: The EA movement does not have as much impact as it could, unaddressed bottlenecks impede progress on certain causes, and community members don’t know what the top options are and settle for something less impactful.
EA examples : 80,000 Hours cause profiles, Animal Advocacy Careers skills profiles, Local Priorities Research (More on LPR)
Non-EA examples: Studies predicting which jobs will get automated
Individual-level research
This research identifies best practices, frameworks and tips on how to have a successful, fulfilling career. It could help individuals find a career that is the right choice for them: one that is aligned with their values, that they can excel at, and that they are motivated to stay in for the long term.
Risks: Causing harm by reducing an individual’s impact in the long-term, or pursuing a path where they don’t have a good personal fit. They might be turned away from the EA movement.
EA Examples: 80,000 Hours’ 2017 Career Guide and Career Profiles
Non-EA examples: So Good They Can’t Ignore You by Cal Newport
Advice intervention research
This research identifies interventions that can help deliver both movement-level and individual-level advice.
Risks: All of the above if it doesn’t balance between the two.
EA Examples: Animal Advocacy Careers is preregistering a study of their career 1-1 calls, Literature review on what works to promote charitable donations
Non-EA Examples: Research on the effectiveness of coaching/mentoring.
I think movement-level advice is most useful for setting movement-level strategy, rather than informing individual actions because personal fit considerations are quite important. However, I think this has the consequence that some paths are much more clearly defined than others, making it difficult for people who don’t have those interests to define a path.
Reasons for/against Facebook & plans to migrate the community out of there
Epistemic Status: My very rough thoughts. I am confident of the reasons for/against, but the last section is mostly speculation, so I won’t attempt to clarify my certainty levels.
Reasons for moving away from Facebook
Facebook promotes bad discussion norms (see Point 4 here)
Poor movement knowledge retention
Irritating to navigate: It’s easy to not be aware that certain groups exist (since there are dozens) and it’s annoying to filter through all the other stuff in Facebook to get to them
Reasons against
Extremely high switching costs
start-up costs (see Neels’ comment)
harder to pay attention to new platform
easier to integrate with existing social media
Offputting/intimidating to newer members
Past attempts haven’t taken off (e.g. the EA London Discussion Board, but that was also not promoted super hard)
Existing online space (the Forum) is a bit too formal/intimidating
How would we make the switch? In order of increasing speculativeness
One subcommunity at a time. It seems like most EA groups are already more active in their spaces other than Facebook, but it would be interesting to see this replicated on the cause area level by understanding what the community members’ needs are and seeing if there’s a way to have alternatives.
Moving certain services found on Facebook to other sites: having a good opportunities board so people go to another place for EA jobs & volunteer opportunities, moving the editing & review group to the forum (?), making it easier for people to reach out to each other (e.g. EA Hub Community directory). Then it may be easier to move whatever is left (e.g. discussions) to a new platform.
Encouraging ~100 active community members to not use Facebook for a week as an experiment and track the outcomes
Make the Forum less intimidating so people feel more comfortable posting (profile pictures? Heart reacts? Embedded discord server or other chat function? Permanent Walled Garden?)
Things I’ll be tracking that might update me towards how possible this is
LessWrong’s experience with the Walled Garden
The EA Hub is improving our Community Directory & introducing some other services in 2021 possibly including 1-1 Matching and an Opportunities Board.
Cause area Slacks
Effective Environmentalism Slack group (not very active right now, but we haven’t done a lot of active efforts to encourage people to use the Slack yet. Might do this later in the year).
IIDM & Progress Studies Slack
Changes in Forum culture over time
If there are any EA groups or subcommunities already moving away from Facebook, please let me know so I can track you :)
I want to emphasise this point, since I think it applies to both new and more experienced members. I personally find it quite a high mental load to actively pay attention to communities on a new platform. Some of these are start-up costs (learning a new interface etc.), but there are also ongoing costs of needing to check the new site, etc. And it is much easier to add something to an existing place I already check.
I don’t think the Forum is likely to serve as a good “group discussion platform” at any point in the near future. This isn’t about culture so much as form; we don’t have Slack’s “infinite continuous thread about one topic” feature, which is also present on Facebook and Discord, and that seems like the natural form for an ongoing discussion to take. You can configure many bits of the Forum to feel more discussion-like (e.g. setting all the comment threads you see to be “newest first”), but it feels like a round peg/square hole situation.
On the other hand, Slack seems reasonable for this!
There is also a quite active EA Discord server, which serves the function of “endless group discussions” fairly well, so another Slack workspace might have negligible benefits.
Another possible reason against might be:
In some countries there is a growing number of people who intentionally don’t use Facebook. Even if their reasons for their decision may be flawed, it might make recruiting more difficult. While I perceive this as quite common among German academics, Germany might also just be an outlier.
I think the EA Hub is in a good position to grow and replace some of the functions that Facebook is currently being used for in the community.
Could regular small donations to Facebook Fundraisers increase donations from non-EAs?
The day before Giving Tuesday, I made a donation to an EA charity’s Facebook fundraiser that had seen no donations in a few weeks. After I donated, about 3 other people donated within the next 2 hours (well before the Giving Tuesday start time). From what I remember, the total amount increased by more than the minimum amount and the individuals appeared not to be affiliated with EA, so it seems possible that this fundraiser had somehow been raised to their attention. (Of course, it’s possible that with Giving Tuesday approaching they would have donated anyway.)
However, it made me think that regularly donating to fundraisers could keep them on people’s feeds and inspire them to donate, and that this could be a pretty low-cost experiment to run. Since you can’t see amounts, you could donate the minimum amount on a regular basis (say every month or so—about $60 USD per year). The actual design of the experiment would be fairly straightforward as well: use the previous year as a baseline of activity for a group of EA organisations and then experiment with who donates, when they donate, and different donation amounts. If you want to get more in-depth you could also look at other factors of the individual who donates (i.e. how many FB friends they have).
Experimental design
EA Giving Tuesday had 28 charities that people could donate to. Of those, you could select 10 charities as your controls, and 10 similar charities (i.e. similar cause, intervention, size) as your experimental group, and recruit 5 volunteer donors per charity to donate once a month on a randomly selected day. They would make the donation without adding any explanation or endorsement.
Then you could use both the previous year’s data and the current year’s controlled charities to compare the effects. You would want to track whether non-volunteer donations or traffic was gained after the volunteer donations.
Caveats: This would be limited to countries where Facebook Fundraising is set up.
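To make the design above concrete, here is a minimal sketch of the assignment and scheduling step. The charity names are placeholders, and in a real version you would match control and experimental charities on cause, intervention and size rather than shuffling at random:

```python
# Minimal sketch of the assignment step described above (placeholder charity names).
import random

random.seed(42)
charities = [f"charity_{i}" for i in range(1, 29)]   # the ~28 EA Giving Tuesday charities
VOLUNTEERS_PER_CHARITY = 5

# In practice you'd pair charities by cause/intervention/size before splitting;
# here we just split a shuffled list for illustration.
random.shuffle(charities)
control, treatment = charities[:10], charities[10:20]

schedule = {}
for charity in treatment:
    # Each volunteer donates the minimum amount once a month, on a random day.
    schedule[charity] = [
        [(month, random.randint(1, 28)) for month in range(1, 13)]
        for _ in range(VOLUNTEERS_PER_CHARITY)
    ]

print("Control:", control)
print("Treatment:", treatment)
print("Example (month, day) schedule for", treatment[0], "volunteer 1:",
      schedule[treatment[0]][0][:3], "...")
```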
Reflections on committing to a specific career path
Imagine Alisha is making a decision whether to pursue job X or job Y. She is currently leaning in favor of job X 55% to 45%, so decides to pursue job X. Over the next couple years, Alisha gains knowledge and expertise as an Xer, and is passionate and excited by her job. She’s finding new opportunities and collaborations, and things are going well. But she often wonders if things would have gone even better if she went with job Y.
I believe that you get a lot more value from committing to one path / area and developing deep expertise & knowledge there, rather than hopping around for too long. There’s a lot of implicit knowledge you gain, and therefore a comparative advantage.
I think it’s hard to see the hidden uncertainties behind lots of (small and large) decisions when you make a decisive choice and ruthlessly prioritize. It’s easy to read more confidence into decisions than there actually is—partly because it’s just easier to process the world in black and white than in shades of grey.
And it can be really hard to live with those decisions, even once you’ve made them. I think you probably need (to some extent) to shut that part off for some time so you can actually double down and focus on one thing. I struggle with this a lot.
What I want to keep in mind, as a result of this:
Check in with the people whose careers I’ve subconsciously modeled my plans on, and ask how confident they were when they made their pivotal decisions (if they made a pivotal decision at all).
I should expect many people to be uncertain.
I expect many people didn’t have a master plan, but instead took advantage of interesting and good opportunities
I expect the best people are good at switching between exploring and exploiting systematically
I want to develop better ways of switching between explore & exploit, or not worrying that I’ll miss something and stay in the explore mode longer than I should
I want to introduce a periodic review to help feel better about exploiting (because I know I’ll have an opportunity to course correct)
I want to introduce periodic slack into my system to do exploration as needed
(H/T Steve Thompson for a good conversation that helped me crystalize some of this).
Project: More expert object-level observations & insights
Many subject matter experts with good object-level takes don’t post on the forum because they perceive the bar is too high
Other examples that I know personally: founders of impactful charities don’t post regular updates on the progress their organizations are making, lessons they are learning, theory of change updates, how others can help, etc.
People who aren’t naturally writers (e.g. they are doers, more on the ground building partnerships etc.)
People who don’t realise they could add value to the community (because they are too busy to spend time in it and notice its biases or weak points)
+ more examples & reasoning listed here
You can 80/20 it much more easily:
Hire someone to interview these folks regularly (e.g. a couple every month), find their cool insights, and write up their responses in an engaging way with some nice infographics & pictures from their org or something (i.e. not just interview style). I’ve discussed doing this with @Amber Dawn before for the folks in my networks who I know have things to say.
If someone wants to fund this, reach out?
Also: Have someone go around at EAGs interviewing people and then writing up all their conversations
This could be a fun project for folks who want to just meet interesting people and learn about what EAs are working on
Do active outreach to these people and offer them 1-1 calls to brainstorm topics, and 2-3 rounds of feedback if they do write something (maybe something for the Forum team?)
More involved things:
Change the culture of the EA Forum so people feel less scared to post on it e.g. by creating “First Drafts”
Make it easy for people to record & upload podcasts or voice notes to the EA forum with autotranscription and maybe taking out the “ums” and “buts” (very low confidence, this could be terrible)
What are the low-hanging fruit or outliers of EA community building?
(where community building is defined as growing the number of engaged EAs who are likely to take medium-to-large sized actions in accordance with EA values and/or frameworks; it could include group activities, events, infrastructure building, resources)
The EA community talks a lot about low-hanging fruit and outlier interventions that are 100x or 1000x better than the next best intervention.
It seems plausible that either of these exist for community building.
Low-hanging fruit
From working in the community building space for the last 2+ years, I have found what I believe are many low-hanging fruit (which are decently impactful) but no extreme outliers that are orders of magnitude more impactful than the next best thing.
I think low-hanging fruit are relatively neglected areas of community building.
The biggest one I’ve observed is that careers advice outside of 80K’s general scope is very neglected, and within that space interventions are mostly of similar effectiveness (or at least not 100-1000x apart).
What other low-hanging fruit do you think there are?
Extreme Outliers
I would guess that any outlier interventions could fall into 1 of two categories (which obviously don’t pose undue risk to the community):
Intervention that is moderately to very good at achieving X (where X can be either recruitment, education, engagement or retention, see more), but also have the property of scaling very quickly (e.g. a web service, written resource or a skill that could be taught to many group organisers )
Intervention is very good at recruiting a few extremely engaged, aligned & talented people (the hits based model, where you have 99% failure and 1% success), or getting them engaged (I imagine there’s fewer education or retention interventions)
Do you know of clearly obvious outlier interventions?
I think introductory fellowships are extreme outlier interventions. EA Philippines’ 8-week Intro to EA Discussion Group (patterned after Stanford’s Arete fellowship) in May-July 2020 was by far our best activity yet. 31 signed up and 15 graduated, and out of the graduates, I believe we’ve created the following counterfactual impact:
One became the president of our student chapter EA Blue
Another became a core team member of EA Blue
Two have since taken the GWWC pledge
Three have become new volunteers (spending ~1-2 hrs/week) for EA Philippines (we actually got two more volunteers aside from these three, but those two I would say were not counterfactual ones)
Helped lead to a few career plan changes (I will write a separate impact report about EA PH’s 2020, and can talk about this more there).
EA Blue is now doing an Introductory Fellowship similar to ours with 26 participants, which I’m a facilitator for, and I think we’re having similarly good results!
I don’t have an answer, but I’m curious—why don’t you publish it as a proper post?
This is a very rough post and I don’t know how much I would stick to this framing of the question if I spent more time thinking it over!
Makes sense, even though it feels alright to me as a post :)
I’d really like to see more answers to this question!
I was going to post something for careers week but it was delayed for various reasons (including the mandatory last minute rewrite). I plan to post it in the next couple of weeks.
Update: It’s posted! https://forum.effectivealtruism.org/posts/SWfwmqnCPid8PuTBo/monetary-and-social-incentives-in-longtermist-careers
CGD launched a Global Skills Partnership program to reduce brain drain and improve migration (https://gsp.cgdev.org/)
It would be interesting to think about this from the perspective of EA groups, where brain drain is quite common. Part of their solution is to offer training and recognized certifications to a broader group of people in the home country to increase the overall pool of talent.
I will probably add more thoughts in the coming days when I have time to read the case studies in more depth.
Collection of anecdotal evidence of EA career/impact frustrations
After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation by EA Applicant. Most upvoted post on the forum, sparked a lot of recent discussion on the topic. 8 commenters resonated with OP on the time investment and/or disappointment (1,2,3,4,5,6,7,8). There were 194 unique upvotes.
My mistakes on the path to impact by Denise Melchin. Another highly upvoted post talking about the emphasis on working at EA organisations and direct EA work. There were 161 unique upvotes. Resonated comments (1,2,3,4,5)
Effective Altruism and Meaning in Life by extra_ordinary. A personal account of the talent gaps, and why the OP moved away from this because too much of their self-worth was associated with success in EA-related things. 4 comments in support of the post. Resonated comments (1,2). There were 55 unique upvotes.
You could add this recent post to the list: https://forum.effectivealtruism.org/posts/ptFkbqksdPRNyzNBB/can-i-have-impact-if-i-m-average
EA’s Image Problem by Tom Davidson. 4 years old but the criticisms are still relevant. See also many comments.
I brainstormed a list of questions that might help evaluate how promising climate change adaptation efforts would be.
Would anyone have any additions/feedback or answers to these questions?
https://docs.google.com/document/d/19VryYtikXQEEOeXtjgApWWKoof3dRfQNVjza7HbnuHU/edit?usp=sharing
Is anyone aware of/planning on doing any research related to the expected spike in interest for pandemic research due to COVID?
It would be interesting to see how much new interest is generated, and for which types of roles (e.g. doctors vs researchers). This could be useful to a) identify potential skilled biosecurity recruits b) find out what motivated them about COVID-19 c) figure out how neglected this will be in 5-10 years
I’d imagine doing a survey after the pandemic starts to die down might be more valuable than right now (maybe after the second wave) so that we’re tracking the longer-term impact rather than the immediate reactions.
An MVP version could be just looking at application rates across a variety of relevant fields.
Having done some research on post-graduate education in the past, it’s surprisingly difficult to access application rates for classes of programs. Some individual schools publish their application/admission rates, but usually as advertising, so there’s a fair bit of cherry picking. It’s somewhat more straightforward to access completion rates (at least in the US, universities report this to government). However, that MVP would still be interesting with just a few data points: if any EAs have relationships to a couple relevant programs (in say biosecurity, epidemiology), it may be worth reaching out directly in 6-12 months!
A more general point, which I’ve seen some discussion of here, is how near-miss catastrophes prepare society for a more severe version of the same catastrophe. This would be interesting to explore both theoretically (what’s the sweet spot for a near-miss to encourage further work, but not dissuade prevention policies) and empirically.
One historical example might be: does a civilization which experienced a bad famine experience fewer famines in the period following that bad famine? How long is that period? In particular, that makes me think of MichaelA’s excellent recent post, Some history topics it might be very valuable to investigate.
In the UK could you access application numbers with a Freedom of Information request?
Some thoughts on stage-wise development of moral circle
Status: Very rough, I mainly want to know if there’s already some research/thinking on this.
Jean Piaget, an early childhood psychologist from the 1960s, suggested a stage-sequential model of childhood development. He suggested that we progress through different levels of development, and each stage is necessary to develop to the next.
Perhaps we can make a similar argument for moral circle expansion. In other words: you cannot run when you don’t know how to walk. If you ask someone to believe X, then X+1, then X+2, this makes some sense. If you jump from X to 10X to 10000X (they may even perceive 10000X as Y, an entirely different thing which makes no sense to them), it becomes a little more difficult for them to adjust over a short period of time.
Anecdotally seems true from a number of EAs I’ve spoken to who’ve updated to longtermism over time.
For most people, changing one’s beliefs and moral circles takes time. So we need to create a movement which can accommodate this. Peter Singer sums it up quite well: “there are people who come into the animal movement because of their concern for cats and dogs who later move on to understand that the number of farm animals suffering is vastly greater than the number of cats and dogs suffering and that typically the farm animals suffer more than the cats and dogs, and so they’ve added to the strength of the broader, and as I see more important, animal welfare organizations or animal rights organizations that are working for farm animals. So I think it’s possible that something similar can happen in the EA movement.”
Risk to the movement is that we lose people who could have become EAs because we turn them off the movement by making it too “weird”
Further research on this topic that could verify my hypothesis:
Studying changes in moral attitudes regarding other issues such as slavery, racism, LGBT rights etc. over time, and how long it took individuals/communities to change their attitudes (and behaviors)
My sense is that the idea of sequential stages for moral development is exceedingly likely to be false, and in the case of the most prominent theory of this kind, Kohlberg’s, completely debunked in the sense that there was never any good evidence for it (I find the social intuitionist model much more plausible), so I don’t see much appeal in trying to understand cause selection in these terms.
That said, I’m sure there’s a rough sense in which people tend to adopt less weird beliefs before they adopt more weird ones and I think that thinking about this in terms of more/less weird beliefs is likely more informative than thinking about this in terms of more/less distant areas in a “moral circle”.
I don’t think there’s a clear non-subjective sense in which causes are more or less weird though. For example, there are many EAs who value the wellbeing of non-actual people in the distant future and not suffering wild animals and vice versa, so which is weirder or more distant from the centre of this posited circle? I hear people assume conflicting answers to this question from time to time (people tend to assume their area is less weird).
I would also agree that getting people to agree to beliefs which are less far from what they currently believe can make them more positively inclined to subsequently adopt beliefs related to that belief which are further from their current beliefs. It seems like there are a bunch of non-competing reasons why this could be the case though. For example:
Sometimes belief x1 itself gives a person epistemic reason to believe x2
Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA) making you more likely to believe x3 which is also associated with that group
Notably none of these require that we assume anything about moral circles or general sequences of belief.
Yeah, I think you’re right. I didn’t need to actually reference Piaget (it just prompted the thought). To be clear, I wasn’t trying to imply that Piaget’s/Kohlberg’s theories were correct or sound, but rather applying the model to another issue; I didn’t make that very clear. I don’t think my argument really requires the empirical implications of the model (especially because I wasn’t trying to imply a moral judgement that one moral circle is necessarily better/worse). However, I didn’t flag this. [Meta note: I also posted this pretty quickly and didn’t think it through much, since it’s a shortform.]
I broadly agree with all your points.
I think my general point of x, 10x, 100x makes more sense if you’re looking along one axis (e.g. a class of beings like future humans) rather than all the ways you can expand your moral circle—which I also think might be better thought of as a sphere or more complex shape, to account for different dimensions/axes.
I was thinking about the more concrete cases where you go from cats and dogs → pigs and cows, or from people in my home country → people in other countries.
Re the other reasons you gave:
I think this is kind of what I was trying to say, where there can be some important incremental movement here. (Of course if x2 is very different from x1 then maybe not).
This is an interesting point I haven’t thought much about.
I think this is probably the strongest non-step-wise reason.
If longtermism is one of the latest stages of moral circle development, then your anecdotal data suffers from major selection effects.