I recently graduated with a master's in Information Science. Before making a degree switch, I was a Ph.D. student in Planetary Science, where I used optimization models to physically characterize asteroids (including potentially hazardous ones).
Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism, the EA university group at the University of Arizona. If you are a movement builder, let's get in touch!
Career-wise, I am broadly interested in x-risk/s-risk reduction and earning-to-give for animal welfare. Always happy to chat about anything EA!
I think it would have been better if you distilled your responses; much of the 80K career sheet is trying to guide you towards next steps and clarify your priorities and preferences, so the initial set of questions may be kind of redundant. The post right now is kind of hard to parse.
If I had to guess, this may be the reason behind the downvotes, although I am unsure.
I see somewhere around 4-6 career directions right now. Since you have a few years of financial runway and since you stated that "Exploration. I don't know what I'm going to do as a career," it might be worth meticulously planning out the next 6-12 months to explore the different options you are considering.
SWE: Do you have prior coding experience? If yes, how did you like programming, and how good were you at it? If not, have you checked out short programs/courses that will help you learn the basics of programming quickly and also gauge whether you enjoy it and are adept at it?
Being a SWE is more than being a programmer, but programming is a necessary first step (see the sketch after this list for the kind of exercise I mean).
Safety: Are you interested in technical AI safety? If yes, do you enjoy programming, math, and research to a considerable degree? Are you also open to policy/governance roles? What about working in operations at a safety org?
Journalism: Do you have prior experience with research and writing, and do you enjoy them? If not, maybe writing some sample pieces and getting feedback from friends/strangers who will be blunt about the quality and depth of your writing would help.
Landlord/personal trainer/psychology: These might be the easiest for you given your financial situation and because you already have relevant work experience. That said, since effective giving will be your primary pathway to impact in this case:
It would be worth spending lots of time learning about effective giving,
Choosing which cause/âinterventions you want to donate to, and
Maximizing the amount of money you can donate.
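On the SWE point above: as a purely hypothetical illustration (the function and sample text are my own invention, not from any particular course), a beginner exercise often looks something like the snippet below. If tinkering with and extending this kind of thing feels engaging rather than tedious, that is weak evidence the SWE path is worth exploring further.

```python
# A toy word-counting exercise, typical of first-week programming practice.
def count_words(text: str) -> dict[str, int]:
    """Count how often each word appears in a piece of text."""
    counts: dict[str, int] = {}
    for word in text.lower().split():
        word = word.strip(".,!?")  # ignore trailing punctuation
        counts[word] = counts.get(word, 0) + 1
    return counts

if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog. The dog sleeps."
    # Print words from most to least frequent.
    for word, n in sorted(count_words(sample).items(), key=lambda kv: -kv[1]):
        print(f"{word}: {n}")
```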
How did I miss this update? Either way, thank you for sharing!
What happened to US Policy Careers?
They had several in-depth, informative articles. It would be a shame if they are off the Forum with no way to access them.
akash's Quick takes
Minor nitpick: "NEOs (objects smaller than asteroids)"
The definition of NEOs here seems wrong. Wouldn't it be more accurate to call them "tiny NEOs"? The current definition makes it sound as if asteroids aren't NEOs, but most NEOs are asteroids.
"Hold fire on making projections" is the correct read, and I agree with everything else you mention in point 2.
About point 1: I think sharing negative thoughts is absolutely a-ok and important. I take issue with airing bold projections when basic facts of the matter aren't even clear. I thought you were stating something akin to "xyz are going to happen," but re-reading your initial post, I believe I misjudged.
I am unsure how I feel about takes like this. On one hand, I want EAs and the EA community to be a supportive bunch. So, expressing how you are feeling and receiving productive/helpful/etc. comments is great. The SBF fiasco was mentally strenuous for many, so it is understandable why anything seemingly negative for EA elicits some of the same emotions, especially if you deeply care about this band of people genuinely aiming to do the most good they can.
On the other hand, I think such takes could also contribute to something I would call a "negative memetic spiral." In this particular case, several speculative projections are expressed together, and despite the qualifying statement at the beginning, I can't help but feel that several or all of these things will manifest IRL. And when you kind of start believing in such forecasts, you might start saying similar things or expressing similar sentiments. In the worst case, the negative sentiment chain grows rapidly.
It is possible that nothing consequential happens. People's moods during moments of panic are highly volatile, so five years in, maybe no one will even care about this episode. But in the present, it becomes a thing against the movement/community. (I think a particular individual may have picked up one such comment from the Forum and posted it online to appease their audience and elevate negative sentiments around EA?)
Taking a step back, gathering more information, and thinking independently, I was able to reason myself out of many of your projections. We are two days in, and there is still an acute lack of clarity about what happened. Emmett Shear, the interim CEO of OpenAI, stated that the board's decision wasn't over some safety vs. product disagreement. Several safety-aligned people at OpenAI signed the letter demanding that the board resign, and they seem to be equally disappointed over recent events; this is more evidence that a safety vs. product disagreement likely didn't lead to Altman's ousting. There is also somewhat of a shift back to the "center," at least on Twitter, as there are quite a few reasonable, level-headed takes on what happened and also on EA. I don't know about the mood in the Bay, though, since I don't live there.
I am unsure if I am expressing my point well, but this is my off-the-cuff take on your off-the-cuff take.
I like the LW emoji palette, but it is too much. Reading forum posts and parsing through comments can be mentally taxing. I don't want to spend additional effort going through a list of forty-something emojis and buttons to react to something, especially comments. I am often pressed for time, so I would almost always avoid the LW emoji palette entirely. Maybe a few other important reactions could be added instead of all of them? Or maybe there could be a setting that allows people to choose between a "condensed" and an "extended" emoji palette? Either way, just my two cents.
Couldn't the comment section under the episode announcement posts (like this one) serve the same purpose? Or are you imagining a different kind of discussion thread here?
The closest would be CEA's communication team, but as you point out: "it's not desirable to have a big comms. function that speaks for EA and makes the community more formal than it is."
I think it'd be challenging (and not in good taste) for CEA to craft responses on behalf of the entire EA community; it is better if individual EAs critique articles that they think misrepresent ideas within the movement.
I see the same recycled and often wrong impressions of EA far too often, so I appreciate you taking the time to do this!
Thank you for sharing your impressions! Some comments and questions:
Does longtermist institutional reform count as systemic change?
Meta-question: What is systemic change? How do you define it?
I think this is a term that has become memetically dominant on the Left and has lost its meaning because it is used far too often and too casually. So now, whenever people mention the term, I am not quite sure I know what they mean by it.
I think one speculative reason why longtermist circles don't discuss concerns like the ones you raise is a somewhat prevalent belief that the post-scarcity utopia will happen soon after AGI. In a nutshell: AGI will happen very soon, the creation of AGI will lead to ASI (or AGI+) fairly quickly, and if this whatchamacallit is sufficiently aligned, it will solve all our problems.
Even if an individual only somewhat subscribes to this notion, they may not think about most present concerns, as those would all seem trivial. After all, they will soon be "solved" in the post-AGI world.[1]
[1] I don't think professional longtermist organizations operate on this belief or even entertain it.
I wholeheartedly agree with points 2 and 3, but I donât understand point 1.
I don't know much about Benjamin Lay, but casually glancing through his Wikipedia page, it seems that his actions were morally commendable and supererogatory. Is the charge that he could have picked his fights/approach to advocacy more tactfully?
...we're not hosting any discussions where a group organiser could convince people to work on AI safety over all else.
I feel it is important to mention that this isn't supposed to happen during introductory fellowship discussions. CEA and other group organizers have compiled recommendations for facilitators (here is one, for example), and all the ones I have read quite clearly state that the role of the facilitator is to help guide the conversation, not overly opine or convince participants to believe in x over y.
...seeing that the Columbia EA club pays its executives so much...
To the best of my knowledge, Columbia EA does not give out salaries to its "executives." University group organizers who meet specific requirements (for instance, time invested per week) can independently apply for funding and have to undergo an application and interview process. So, the dynamics you describe in the beginning would be somewhat different because of self-selection effects; there isn't a bulletin board or a LinkedIn post where these positions are advertised. I say somewhat because I can imagine a situation where a solely money-driven individual gets highly engaged in the club, learns about the Group Organizer Fellowship, applies, and manages to secure funding. However, I don't expect this to be very likely.
...you are constantly being nudged by your corrupted hardware to justify spending money on luxuries and conveniences.
For group funding, at least, there are strict requirements for what money can and cannot be spent on. This is true for most university EA clubs unless they have an independent funding source.
All that said, I agree that "notably large amount[s] of money" for university organizers is not ideal.
I disagree-voted and briefly wanted to explain why.
"some people may want to do good as much as possible but don't buy longtermism. We might lose these people who could do amazing good."
I agree that university groups should feel welcoming to those interested in non-longtermist causes, but it is perfectly possible to create this atmosphere without nixing key parts of the syllabus; I don't think the syllabus has much to do with creating it. Rockwell and freedomandutility (and others) have listed some great points on this, and I think the conversations you have (and how you have them) and the opportunities you share with your group could help folks be more cause-neutral.
One idea I liked was the "local expert" model, where you have members deeply exploring various cause areas. When a new member is interested in cause X, you can simply redirect them to the member who has studied it or done internships related to that cause. If you have different "experts" spanning different areas, this could help the club maintain a broad range of interests and feel welcoming to a broader range of newcomers.
"And if we give this content of weirdness plus the 'most important century' narrative to the wanna-be EAs we might lose people who could be EA if they had encountered the ideas with a time for digestion."
I think this assumes that people won't be put off by the weirdness by, let's say, week 1 or week 3. I could see situations where people would find caring about animals weirder than caring about future humans, or both of these weirder than pandemic prevention or global poverty reduction. I don't know what the solution is, except reminding people to be open-minded and critical as they go through the readings, and cultivating an environment where people understand that they don't have to agree with everything to be a part of the club.
A host of other reasons that I will quickly mention:
I don't think the three weeks of the syllabus you mention disproportionately represent a single framework: one can care about x-risk without caring about longtermism, or vice versa, or both. There are other non-AI x-risks and longtermist causes that folks might be interested in, so I don't think that content is there just to generate more interest in AI safety.
Internally, we (group organizers at my university) did feel the AI week was a bit much, so we made the career-related readings on AI optional. The logic was that people should learn about, for instance, why AI alignment could be hard with modern deep learning, but they don't need to read the 80K career profile on safety if they don't want to. We added readings on s-risks and are considering adding pieces on AI welfare (undecided right now).
It is more honest to have those readings in the introductory syllabus: new members could be weirded out to see x-risk/longtermist/AI jobs on 80K or the EA Opportunity board and question why those topics weren't introduced in the Introductory Program.
I was also primarily interested in animal advocacy prior to EA, and now I am interested in a broader range of issues while maintaining (and refining) my interest in animal advocacy. I am also no longer interested in some causes I initially thought were just as important. I think having an introductory syllabus with a broad range of ideas is important for such cross-pollination/updating and for a more robust career planning process down the line.
Anecdote: One of the comments that comes up in our group sometimes is that we focus too much on charities as a way of doing good (the first few weeks on cost-effectiveness, global health, donations, etc.). So, having a week on x-risk and sharing the message that "hey, you can also work for the government, help shape policy on bio-risks, and have a huge impact" is an important one not to leave out.
I agree. I was imagining too rigorous (and narrow) of a cause prioritization exercise when commenting.
I don't! I meant to say that students who have mental health concerns may find it harder to do cause prioritization while balancing everything else.
I gather the OP wants something that's more just an extension of "developing better ways of thinking and forming opinions" about causes, and not quashing people's organic critical reflections about the ideas they encounter.
I was unsure if this is what OP meant; if yes, then I fully agree.
First, I am sorry to hear about your experience. I am sympathetic to the idea that a high level of deference and a lack of rigorous thinking are likely rampant amongst the university EA crowd, and I hope this is remedied. That said, I strongly disagree with your takeaways about funding and have some other reflections as well:
"Being paid to run a college club is weird. All other college students volunteer to run their clubs."
This seems incorrect. I used to feel this way, but I changed my mind when I noticed that every "serious" club on my campus (i.e., any club wanting to achieve its goals reliably) pays students or hires paid interns. For instance, my university has a well-established environmental science ecosystem, and at least two of the associated clubs are supported via a university funding mechanism (this is now so advanced that they also do grantmaking for student projects ranging from a couple thousand dollars to a maximum of $100,000). I can also think of a few larger Christian groups on campus that do the same. Some computer science/data-related clubs also do this, but I might be wrong.
Most college clubs are indeed run on a volunteer basis, but most are also run quite casually. There is nothing wrong with this; most of them are hobby-based clubs where students simply want to create a socially welcoming atmosphere for anyone who might be interested. They don't have weekly discussions, TA-like facilitation, socials/retreats, or, in some cases, research/internship programs. In this way, EA clubs are different because they aren't trying to be the "let's get together and have fun" club. I almost see university EA clubs as a prototype non-profit or a modestly funded university department trying to run a few courses.
In passing I should also mention that it is far more common for clubs to get funding for hosting events, outreach, buying materials, etc. My guess is that in these cases if more funding were available, then students running those clubs would also get stipends.
"Getting paid to organize did not make me take my role more seriously, and I suspect that other organizers did not take their roles much more seriously because of being paid."
My experience has been the opposite of yours. Before getting paid, organizing felt like a distraction from more important things; there was always this rush to wrap up tasks, and I enjoyed organizing but always felt somewhat guilty for spending time on it. These feelings vanished after getting funded. I (at least) doubled the amount of time I spent on the club, got more exposed to EA, got more organized with meetings/deadlines, and I now feel a sense of responsibility to run this project as well as I can.
Turn the University Group Organizer Fellowship into a need-based fellowship.
I am uncertain about this. I think a better and simpler heuristic is that if people are working diligently for x hours a week, then they should be funded for their labor.
"If the University Group Organizer Fellowship exit survey indicates that funding was somewhat helpful in increasing people's commitment to quality community building, then reduce funding..."
I agree with this. The funding being given out could be somewhat reduced while remaining roughly as impactful as it is now, but I am keen to see the results of the survey.
"I am very concerned with just how little cause prioritization seems to be happening at my university group."
At least for university groups, maybe this is the wrong thing to be concerned about. It would be better if students could do rigorous cause-prioritization, but I think for most, this would be quite challenging, if not virtually impossible.
The way I see it, most university students are still in the formative stages of figuring out what they believe in and their reasons for doing so. Almost all are in the active process of developing their identity and goals. Some have certain behavioral traits that prevent them from exploring all options (think of the shy person who later went on to become a communicator of some sort). All this is sometimes exacerbated by mental health problems or practical concerns (familial duties, the need to be financially stable, etc.).
Expecting folks from this age group to perform cause prioritization is a high bar. I am sure some can do it, but I wouldn't have been able to. Instead, I think it'd be better if university EA groups helped their members understand how to make the best possible bet at the moment to have a pathway to impact. For instance, I hope that most students who go through the fellowship:
- Develop better ways of thinking and forming opinions
- Be more open-minded / have a broad sphere of concern
- Take ideas seriously and act on them (usually by building career capital)
- Play the long game of having a high-impact career
Now, this likely doesnât happen to the best possible degree. But I think that all this and more, in combination, would help most in refining their cause prioritization over the years and setting themselves up to have a rewarding and impactful career.
Maybe this is what you meant when you were expressing your concerns, in which case, sorry for the TED talk and I wholeheartedly agree.
What do you think is the reason behind such major growth? What are they doing differently that GWWC or other EA orgs could adopt?