I share the view that this seems potentially really valuable. Anecdotally, I know an EA who seems like they could do well in roles at EA orgs, or could potentially rise to fairly high positions in government roles in a country that's not a major EA hub. There are of course many considerations influencing their thinking about which path to pursue, but one notable one is that the latter just understandably sounds less fun, less satisfying over the long term, and more prone to value drift.
I think efforts to address this issue might ideally also try to address the issue that status, validation, etc. within the EA movement are easier to access by working at EA orgs than at other orgs, and probably especially hard to access by working at orgs outside the major EA hubs (e.g., a key department of a government agency in an Asian country rather than in the UK or US).
We tried to brainstorm some ideas for how EA in general could support people like this EA I know to happily pursue roles where (by default) there'd be no EAs in their orgs and maybe only a few in their city/country as well. Some (not necessarily good) ideas, from memory:
Have more EA conferences in these not-currently-EA-hubs, so that the people living there can sometimes get "booster shots" of EA interactions
Provide funding for these people to occasionally travel to EA conferences / EA hubs
Make the EA movement more geographically distributed, e.g. by some EA orgs moving to places that aren't currently hubs
Some (also not necessarily good) ideas that come to mind now:
Support more EA community building in these areas
Support the creation of organisations like HIPE in these areas
This could be seen as supporting community building that's more targeted in terms of sector/career, yet not necessarily explicitly EA-branded. It could build a network of people with similar values and a desire to help each other, even if few/none explicitly identify as part of the EA movement.
(I don't actually know much about HIPE)
Some sort of virtual community building stuff?
Things like the EA Anywhere group?
Things like online coworking spaces?
(There's obviously a lot that could be done in this broad bucket)
Efforts to just make EAs less concerned about status, validation, etc. within the EA movement (or more concerned about those things from outside the EA movement)
(No big ideas for this immediately come to my mind)
Efforts to make status, validation, etc. easier to access for people who work at non-EA orgs and outside of EA hubs
This could include EAs sharing info about a broader range of organisations, geographical areas, career paths, etc., so that more EAs can easily see why a wider range of things are impactful
Re: Making status more easily accessible
One idea that just came to me was making it easier to reap status benefits from the GWWC giving pledge; e.g., I feel kind of proud seeing my name on this huge numbered list and being among the first ten thousand people to sign. Relatedly, subreddits and Wikipedia projects seem to actively use badges of honor to acknowledge things like being a donor, having helped with some task, etc. Maybe we could have "Pledge" badges.
Another idea: getting access to people one holds in high regard could also be something to think about. One could promote speakers coming to local groups, or generally promote networking within the community more.
Another thought that came up: Not being chosen for 80,000 Hours' career coaching felt like it was a symptom of my relatively low value for the community (not commenting on whether there was room for improvement in how they communicated that; it was years ago). I imagine it feels similar for some others. Maybe having motivated volunteers take on the rejected applicants would be a cheap way to signal "there are people in the community that value you being here and trying to work out an EA career path"?
That resonates with me.
And the mention of Wikipedia is interesting. When I was a pretty active Wikipedia editor, I indeed felt proud of and motivated by badge-type things (mainly "barnstars", if I recall correctly), as well as by random people thanking me for contributions (either by clicking a button or by posting on my talk page).
I'd guess a lot of EAs have similar mindsets, motivational patterns, etc. to a lot of Wikipedia editors, so it does seem like it could be interesting to try to learn from how Wikipedia "recruits", motivates, and retains editors.
Could you expand on what you mean by "Maybe we could have 'Pledge' badges"? E.g., where are you envisioning those badges being displayed? Are you envisioning them just being for taking the pledge, or also for other actions (e.g., recording donations, hitting some milestone in donations, being in the first 10,000 members, a badge another pledger can give you to say you helped them decide where to give...)?
(Your other ideas also seem potentially interesting, but I don't have anything in particular to say about them :) )
I thought about people's forum accounts. There are also the EA Hub accounts, but I basically never open mine; not sure about others. I'd probably do it similarly to Wikipedia (e.g. here), just having a small icon for the pledge, and when you hover over it, "GivingWhatWeCan member since April 2nd, 2020". I didn't think about other ideas, e.g. being helpful for a person deciding on a donation! I like the idea. One worry that comes up is that it could get a bit cluttered. Also, something in me feels a bit awkward when proudly displaying something, like I could become the target of the bullies of my high school for feeling "too cool". The GWWC pledge is already so socially accepted as something cool that I don't feel this in that case.
Yeah, I think this idea (and other things in the same neighbourhood) is worth considering.
One thing worth mentioning is that GWWC already have badges you can display on websites, as well as Facebook photo frames. (This is where I found them.) So I think the intervention here wouldnāt be creating them, but rather:
getting the EA Forum (and maybe other sites) to have a clearly visible option for putting a badge there if one is a GWWC member
normalising using them
E.g., by directly talking to a few people about using them, and making a public statement to let people know about the idea
maybe creating variants
I think it could be worth talking to people like Luke Freeman (who's head of GWWC) and/or Aaron Gertler (the lead Forum moderator) about this.
See also the post EA jobs provide scarce non-monetary goods, which probably influenced the views I expressed here but which I'd forgotten about till recently.
I was just re-reading the transcript of the 80k interview with Ben Todd from November 2020 and saw that it includes a section that's relevant to what I was saying here, which I'll quote below in case it's of interest to any future readers:
(I'd heard that episode back in November 2020, so it may have been one of many influences informing my comment.)
I also made a tag this morning for posts relevant to Working at EA vs Non-EA Orgs (and tagged this post), so readers interested in this topic may be interested in those posts as well.
An impression after skimming this post (not well thought through; do point out what I missed):
Some of the tentative project ideas listed are oriented around extending EA's reach via new like-minded groups who will share our values and strategies.
Sentences that seemed to be supporting this line of thinking:
I'm unsure how much I misinterpreted specific project ideas listed in this post.
Leaving that aside, I generally worry about encouraging further outreach focused on creating like-minded groups of influential professionals (and even more about encouraging initiators to focus their efforts on making such groups look "prestigious"). I expect that will discourage efforts in outreach to integrate importantly diverse backgrounds, approaches, and views. I would expect EA field builders to involve fewer of the specialists who developed their expertise inside a dissimilar context, take alternative approaches to understanding and navigating their field, or have insightful but different views that complement views held in EA.
A field builder who simply aims to increase EA's influence over decisions made by professionals will tend to select for and socially reward members that line up with their values/cause prioritisation/strategy as a default tactic, I think. Inversely, taking the tactic of connecting EAs who like to talk with other EAs who are climbing similar career ladders leads to those gathered agreeing with and approving of each other more for exerting influence in stereotypically EA ways. Such group dynamics can lead to a kind of impoverished homogenisation of common knowledge and values.
I imagine a corporate, academic, or bureaucratic decision maker getting involved in an EA-aligned group and consulting their collaborators on how to make an impact. Given that they're surrounded by like-minded EAs, they may not become aware of shared blindspots in EA. Conversely, they'd less often reach out and listen attentively to outside stakeholders who could illuminate those blindspots.
Decision makers who lose touch with other important perspectives will no longer spot certain mistakes they might make, and may therefore become (even more) overconfident about certain ways of making an impact on the world. This could lead to more "superficially EA-good" large-scale decisions that actually negatively impact persons far removed from us.
In my opinion, it would be awesome if
1. along with existing field-building initiatives focused on expanding the influence of EA thought,
2. we encourage corresponding efforts to really get in touch and build shared understandings with specialised stakeholders (particularly those with skin in the game) who have taken up complementary approaches and views to doing good in their field.
Some reasons:
Dedicated EA field builders seem to naturally incline towards type 1 efforts. Therefore, it's extra important for strategic thinkers and leaders in the EA community to be deliberate and clear about encouraging type 2 efforts in the projects they advise.
Type 1 is challenging to implement, but EA field builders have been making steady progress in scaling up initiatives there (e.g. staff at Founders Pledge, Global Priorities Institute, Center for Human-Compatible AI).
Type 2 seems much more challenging intellectually. It requires us to build bridges that allow EA and non-EA-identifying organisations to complement each other: complex, nuanced perspectives that allow us to traverse between general EA principles and arguments, and the contextual awareness and domain-specific know-how (amongst others) of experienced specialists. I have difficulty recalling EA initiatives that were explicitly intended for coordinating type 2 efforts.
At this stage, I would honestly prefer that field builders start paying much deeper attention to type 2 before they go out changing other people's minds and the world. I'm not sure how much credence to put in this being a better course of action, though. I have little experience reaching out to influential professionals myself. It also feels like I'm speculating here on big implications in a way that may be unnecessary or exaggerated. I'd be curious to hear more nuanced arguments from an experienced field builder.
Yeah, I agree that there would be significant benefits to trying to set up another academic research institute at a university more focused on economics.
Same here.
The idea of "academic institutes set up by EAs in disciplines such as psychology and history" also sounds potentially exciting to me. And I wrote some semi-relevant thoughts in the post Some history topics it might be very valuable to investigate (and other posts tagged History may be relevant too).
Agreed. The University of Chicago, with its Becker Friedman Institute, Center for Decision Research, broad EA community, and generous economics funders, could be a promising option.
Definitely agree with this, as someone currently at UChicago! The Center for Radical Innovation for Social Change (RISC) recently put out a call for animal welfare proposals and Steve Levitt has connections to Schmidt Futures (an EA-adjacent philanthropic initiative), so that could be a promising place to start.
Thank you for sharing! I hadn't looked deeply into RISC's work before, and it's very helpful to know about Levitt's ties to Schmidt Futures.
This seems like a good idea to me. And the second idea seems to me like a potential Task Y, meaning something which has some or all of the properties:
"Task Y is something that can be performed usefully by people who are not currently able to choose their career path entirely based on EA concerns*.
Task Y is clearly effective, and doesn't become much less effective the more people who are doing it.
The positive effects of Task Y are obvious to the person doing the task."
Relatedly, that second idea also seems like something anyone could just start and provide value in right away: no need for permission, special resources, or unusual skills. (My local EA group actually discussed similar things previously in the context of climate change, and took some minor actions in this direction.)
I've just made a shortform post on Some ideas for projects to improve the long-term future. I brainstormed the ideas before seeing this post, but this post is part of what prompted me to share the ideas publicly. And the shortform is only moderately rather than massively long, so I'll copy the whole thing below rather than just linking to it. (Maybe that's a bit weird? If so, sorry!)
---
In January, I spent ~1 hour trying to brainstorm relatively concrete ideas for projects that might help improve the long-term future. I later spent another ~1 hour editing what I came up with for this shortform. This shortform includes basically everything I came up with, not just a top selection, so not all of these ideas will be great. I'm also sure that my commentary misses some important points. But I thought it was worth sharing this list anyway.
The ideas vary in the extent to which the bottleneck(s) to executing them are the right person/people, buy-in from the right existing organisation, or funding.
I'm not expecting to execute these ideas in the near-term future myself, so if you think one of these ideas sounds promising and relevant to your skills, interests, etc., please feel very free to explore the idea further, to comment here, and/or to reach out to me to discuss it! [If commenting, please comment on the shortform version of this, to centralise discussion there.]
Something along the lines of compiling a large set of potentially promising cause areas and interventions; doing rough Fermi estimates, cost-effectiveness analyses, and/or forecasts; thereby narrowing the list down; and then maybe gradually doing more extensive Fermi estimates, cost-effectiveness analyses, and/or forecasts
This is somewhat similar to things that Ozzie Gooen, Nuño Sempere, and Charity Entrepreneurship have done or are doing
Ozzie also discusses some similar ideas here
So it'd probably be worth talking to them about this
Something like a team of part-time paid forecasters, both to forecast on various important questions and to be "on call" when it looks like a catastrophe or window of opportunity might be looming
I think I got this idea from Linch Zhang, and it might be worth talking to him about it
80,000 Hours-style career reviews on things like diplomacy, arms control, international organisations, becoming a Russia/India/etc. specialist
Some discussion here
Could see if 80k would be happy to supervise someone else to do this
Could seek out EAs or EA-aligned people who are working full-time in related areas
Organisations like HIPE, CSET, and EA Russia might have useful connections
I might be open to collaborating with someone on this
Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI
This might allow them to complete additional valuable projects
This also might help the research or writing assistants build career capital and test fit for valuable roles
Maybe BERI can already provide this?
It's possible it's not worth being proactive about this, and instead waiting for people to decide they want an assistant and create a job ad for one. But I'd guess that some proactiveness would be useful (i.e., that there are cases where someone would benefit from such an assistant but hasn't thought of it, or doesn't think the overhead of a long search for one is worthwhile)
See also this comment from someone who did this sort of role for Toby Ord
Research or writing assistance for certain independent researchers?
Ops assistance for orgs like FHI?
But I think orgs like BERI and the Future of Humanity Foundation are already in this space
Additional "Research Training Programs" like summer research fellowships, "Early Career Conference Programmes", internships, or similar
Probably best if this is at existing orgs
Could perhaps find an org that isn't doing this yet but has researchers who would be capable of providing valuable mentorship, suggest the idea to them, and be or find someone who can handle the organisational aspects
Something like the Open Phil AI fellowship, but for another topic
In particular, something that captures the good effects a "fellowship" can have, beyond the provision of funding (since there are already some sources of funding alone, such as the Long-Term Future Fund)
A hub for longtermism-relevant research (or a narrower area, e.g. AI) outside of the US and UK
Perhaps ideally a non-Anglophone country? Perhaps ideally in Asia?
Could be a new organisation or a branch/affiliate of an existing one
There's some relevant discussion here, here, here, and I think here (though I haven't properly read that post)
Found an organization/community similar to HIPE and/or APPGFG, but in countries other than the UK
I'd guess it'd probably be easiest in countries where there is a substantial EA presence, and perhaps easier in smaller countries like Switzerland rather than in the US
Why this might/might not be good:
I don't know a huge amount about HIPE or APPGFG, but from my limited info on those orgs they seem valuable
I'd guess that there's no major reason something similar to HIPE couldn't be successfully replicated in other countries, if we could find the right person/people
In contrast, I'd guess that there might be more barriers to successfully replicating something like APPGFG
E.g., most countries probably don't have an institution very similar to APPGs
But I imagine something broadly similar could be replicated elsewhere
Potential next steps:
Talk to people involved in HIPE and APPGFG about whether they think these things could be replicated, how valuable they think that'd be, how they'd suggest it be done, what countries they'd suggest, and who they'd suggest talking to
Talk to other EAs, especially outside of the UK, who are involved in politics, policy, and improving institutional decision-making
Ask them for their thoughts, who they'd suggest reaching out to, and (in some cases) whether they might be interested in collaborating on this
I also had some ideas for specific research or writing projects, but I'm not including them in this list
That's partly because I might publish something more polished on that later
It's mostly because people can check out A central directory for open research questions for a broader set of research project ideas
See also Why you (yes, you) should post on the EA Forum
See also:
Possible gaps in the EA community - EA Forum
Get Involved - EA Forum
The views I expressed here are my own, and do not necessarily reflect the views of my employers.
Thanks for this post! I think I basically share the view that all of those prompts are useful and all of those "gaps" are worth seriously considering. I'll share some thoughts in separate comments.
(FWIW, considering what other activities are already being done, I think maybe the idea I feel least confident is worth having an additional person focus on ~full-time is creating "some easy way for someone who's about to make their yearly donation to chat to another person about it.")
Regarding influencing future decision-makers
Both of those claims match my independent impression.
On the first claim: This post using neoliberalism as a case study seems relevant (I highlight that mainly for readers, not as new evidence, as I imagine that article probably already influenced your thinking here).
On the second claim: When I was a high school teacher and first learned of EA, two of the main next career steps I initially considered were:
Try to write a sort of EA textbook
Try to become a university lecturer who doesn't do much research, and basically just takes on lots of teaching duties
My thinking was that:
I'd seen various people argue that it's a shame that so many world-class researchers have to spend much of their time teaching when that wasn't their comparative advantage (and in some cases they were outright bad at it)
And I'd also heard various people argue that a major point of leverage over future leaders may be influencing what ideas students at top unis are exposed to
So it seemed like it might be worth considering trying to find a way to specialise in taking teaching load off top researchers' plates while also influencing future generations of leaders
I didn't actually look into whether jobs along those lines exist. I considered that maybe, even if they don't exist, one could be entrepreneurial and convince a uni to create one, or adapt another role into that.
Though an obstacle would probably be the rigidity of many universities.
I ultimately decided on other paths, partly due to reading more of 80k's articles. And I do think the decisions I made make more sense for me. But reading this post has reminded me of those ideas and updated me towards thinking it could be worth some people considering the second one in particular.
I feel quite good about the ideas in this section; I'd definitely be excited for one or more things along those lines to be done by one or more people who are good fits for that.
Some of those activities sound like they might be sort-of similar to some of the roles people involved in other EA education efforts (e.g., Students for High-Impact Charity, SPARC) and Effective Thesis have played. So maybe it'd be valuable to talk to such people, learn about their experiences and their perspectives on these ideas, etc.
Misc small comments
This does seem like a good idea to me, but I think Generation Pledge might already be doing something like that? (That said, I don't know much about them, and I don't necessarily think that one org doing ~X means no other org should do ~X.)
Also, for people thinking about this broader idea of potentially setting up pledges (or whatever) that cover things GWWC isn't designed for, it may be useful to check out A List of EA Donation Pledges (GWWC, etc).
I know very little about Animal Advocacy Careers, but this sounds like the sort of thing they might do? And if they don't do it, then maybe they could start doing so for the animal space (which could be useful directly and also could provide a model others could learn from)? And if they raise strong specific reasons to be inclined against doing that (rather than just reasons why it's not currently their top priority), that could be useful to learn from as well.
Yeah, I think it'd be pretty terrible if people took EA's focus on prioritisation, critical thinking, etc. as a reason to not raise ideas that might turn out to be uninteresting, low-quality, low-priority, or whatever. It seems best to have a relatively low bar for raising an idea (along with appropriate caveats, expressions of uncertainty, etc.), even if we want to keep the bar for things we spend lots of resources on quite high. We'll find better priorities if we start with a broad pool of options.
(See also babble and prune [full disclosure: I don't know if I've actually read any of those posts].)
(Obviously some screening is needed before even raising an idea: we won't literally say any random sequence of syllables, and we should probably not bother writing about every idea that seemed potentially promising for a moment but not after a minute of thought. But the basic point stands: the bar for raising an idea should be fairly low.)
I also think Charity Science might have tried getting people to pledge in their wills.
A long quibbly tangent
I'd say there's a >50% chance that this would indeed be good, and that it's plausible it'd be very good. But it also seems to me plausible that this would be bad or very bad. This is for a few reasons:
You didn't say what you meant by wellbeing. A decision maker might say "wellbeing" and mean only the wellbeing of humans, or of people in countries like theirs (e.g., predominantly English-speaking liberal democracies), or of people in their country, or of an in-group of theirs within their country (e.g., people with the same political leaning or race as them).
This could be because they explicitly believe that only those people are moral patients, or just because that's who they implicitly focus on.
If the decision makers do have a narrow subset of all moral patients in mind when they think about increasing wellbeing, that would probably at least reduce the benefits of decision makers having that as their main criterion. It might also lead to that criterion being net harmful, if it means people are consequentialist altruists for one group only, having stripped away the norms and deontological constraints that often help prevent certain bad behaviours.
Maybe this is just a nitpick, as you could just edit your statement to incorporate some sort of impartiality. But then you'd have to grapple with exactly how to do that: do we want the criteria decision makers use to come pre-loaded with our current best guesses about moral patienthood and weights? Or with some particular way of handling moral uncertainty? Or with some general principles for thinking about how to handle moral uncertainty?
I have an intuition that just making people more consequentialist and more altruistic-in-some-sense, without also making them more rational, reflective, cautious, etc., has a decent chance of being harmful. I think the (overlapping) drivers of this intuition are:
The fact that doing that would move a seemingly important variable into somewhat uncharted territory, so we should start out pretty uncertain about what outcomes it would have, and thus predict a nontrivial chance of fairly bad outcomes
The various potential ways people have suggested naive consequentialism could cause harms (even from a consequentialist perspective)
There seeming to have been some historical cases where people have been mobilised to do bad things by consequentialist and altruistic-in-some-sense arguments ("for the greater good")
A sort of Chesterton's fence / Secrets of Our Success-style argument for thinking very carefully before substantially changing anything that currently seems like a major part of how the world runs (even if it seems at first glance like the consequences of the change would be good)
[The above statements of mine are pretty vague, and I can try to elaborate if that'd be useful.]
So I'd favour thinking more about precisely what sort of changes we want to make to future decision-makers' values, reasoning, and criteria for decision-making, and doing so before we make any major pushes on those fronts.
And beyond that generic "more research needed" statement, I'd favour trying to package increases in consequentialism and generic altruism with more reflection on moral circles, more reflectiveness in general, various rationality skills and ideas, and probably some other things like that.
The following posts and their comment sections contain some relevant prior discussion:
Everyday Longtermism
Especially the section Safeguarding against naive utilitarianism, which presents a model/graph that I think is very interesting and helpful
Improving the future by influencing actors' benevolence, intelligence, and power
...but I think all of this might be pretty much just a tangent. That's because I think we could just change the sentence of yours that I quoted at the start of this comment to make it reflect a broader package of attributes we want to change in future leaders, and your other points would still stand. E.g., teaching at universities could try to inculcate not just consequentialism and generic altruism but also more reflection on moral circles, more reflectiveness in general, various rationality skills and ideas, etc.