I am a third-year grad student, now studying Information Science, and I am hoping to pursue full-time roles in technical AI Safety from June '25 onwards. I am spending my last semester at school working on an AI evaluations project and pair programming through the ARENA curriculum with a few others from my university. Before making a degree switch, I was a Ph.D. student in Planetary Science, where I used optimization models to physically characterize asteroids (including potentially hazardous ones).
Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism, the EA university group at the University of Arizona. If you are a movement builder, let's get in touch!
Career-wise, I am broadly interested in x/s-risk reduction and earning-to-give for animal welfare. Always happy to chat about anything EA!
akash 🔸
"Hold fire on making projections" is the correct read, and I agree with everything else you mention in point 2.
About point 1: I think sharing negative thoughts is absolutely a-ok and important. I take issue with airing bold projections when basic facts of the matter aren't even clear. I thought you were stating something akin to "xyz are going to happen," but re-reading your initial post, I believe I misjudged.
I am unsure how I feel about takes like this. On one hand, I want EAs and the EA community to be a supportive bunch. So, expressing how you are feeling and receiving productive/helpful/etc. comments is great. The SBF fiasco was mentally strenuous for many, so it is understandable why anything seemingly negative for EA elicits some of the same emotions, especially if you deeply care about this band of people genuinely aiming to do the most good they can.
On the other hand, I think such takes could also contribute to something I would call a "negative memetic spiral." In this particular case, several speculative projections are expressed together, and despite the qualifying statement at the beginning, I can't help but feel that several or all of these things will manifest IRL. And once you start believing in such forecasts, you might start saying similar things or expressing similar sentiments. In the worst case, the negative sentiment chain grows rapidly.
It is possible that nothing consequential happens. People's moods during moments of panic are highly volatile, so five years in, maybe no one will even care about this episode. But in the present, it becomes a thing against the movement/community. (I think a particular individual may have picked up one such comment from the Forum and posted it online to appeal to their audience and elevate negative sentiments around EA?)
Taking a step back, gathering more information, and thinking independently, I was able to reason myself out of many of your projections. We are two days in, and there is still an acute lack of clarity about what happened. Emmett Shear, the interim CEO of OpenAI, stated that the board's decision wasn't over some safety vs. product disagreement. Several safety-aligned people at OpenAI signed the letter demanding that the board resign, and they seem to be equally disappointed over recent events; this is more evidence that a safety vs. product disagreement likely didn't lead to Altman's ousting. There is also somewhat of a shift back to the "center," at least on Twitter, as there are quite a few reasonable, level-headed takes on what happened and also on EA. I don't know about the mood in the Bay, though, since I don't live there.
I am unsure if I am expressing my point well, but this is my off-the-cuff take on your off-the-cuff take.
I like the LW emoji palette, but it is too much. Reading forum posts and parsing through comments can be mentally taxing. I don't want to spend additional effort going through a list of forty-something emojis and buttons to react to something, especially comments. I am often pressed for time, so I would almost always avoid the LW emoji palette entirely. Maybe a few other important reactions could be added instead of all of them? Or maybe there could be a setting that lets people choose between a "condensed" and an "extended" emoji palette? Either way, just my two cents.
Couldn't the comment section under the episode announcement posts (like this one) serve the same purpose? Or are you imagining a different kind of discussion thread here?
The closest would be CEA's communication team, but as you point out: "it's not desirable to have a big comms. function that speaks for EA and makes the community more formal than it is."
I think it'd be challenging (and not in good taste) for CEA to craft responses on behalf of the entire EA community; it is better if individual EAs critique articles which they think misrepresent ideas within the movement.
I see the same recycled and often wrong impressions of EA far too often, so I appreciate you taking the time and doing this!
Thank you for sharing your impressions! Some comments and questions:
Does longtermist institutional reform count as systemic change?
Meta-question: What is systemic change? How do you define it?
I think this is a term that has become memetically dominant on the Left and has lost its meaning because it is used far too often and too casually. So now, whenever people mention the term, I am not quite sure I know what they mean by it.
I think one speculative reason why longtermist circles don't discuss concerns like the ones you raise is a somewhat prevalent belief that a post-scarcity utopia will arrive soon after AGI. In a nutshell: AGI will happen very soon, the creation of AGI will lead to ASI (or AGI+) fairly quickly, and if this whatchamacallit is sufficiently aligned, it will solve all our problems.
Even if an individual somewhat subscribed to this notion, they may not think about most present concerns as they would all seem trivial. After all, they will soon be "solved" in the post-AGI world.[1]
[1] I don't think professional longtermist organizations operate on this belief or even entertain it.
I wholeheartedly agree with points 2 and 3, but I don't understand point 1.
I don't know much about Benjamin Lay, but casually glancing through his Wikipedia page, it seems that his actions were morally commendable and supererogatory. Is the charge that he could have picked his fights/approach to advocacy more tactfully?
...we're not hosting any discussions where a group organiser could convince people to work on AI safety over all else.
I feel it is important to mention that this isn't supposed to happen during introductory fellowship discussions. CEA and other group organizers have compiled recommendations for facilitators (here is one, for example), and all the ones I have read quite clearly state that the role of the facilitator is to help guide the conversation, not overly opine or convince participants to believe in x over y.
...seeing that the Columbia EA club pays its executives so much...
To the best of my knowledge, Columbia EA does not give out salaries to its "executives." University group organizers who meet specific requirements (for instance, time invested per week) can independently apply for funding and have to undergo an application and interview process. So, the dynamics you describe in the beginning would be somewhat different because of self-selection effects; there isn't a bulletin board or a LinkedIn post where these positions are advertised. I say somewhat because I can imagine a situation where a solely money-driven individual gets highly engaged in the club, learns about the Group Organizer Fellowship, applies, and manages to secure funding. However, I don't expect this to be that likely.
...you are constantly being nudged by your corrupted hardware to justify spending money on luxuries and conveniences.
For group funding, at least, there are strict requirements for what money can and cannot be spent on. This is true for most university EA clubs unless they have an independent funding source.
All that said, I agree that a "notably large amount[s] of money" for university organizers is not ideal.
I disagree-voted and briefly wanted to explain why.
"some people may want to do good as much as possible but don't buy longtermism. We might lose these people who could do amazing good."
I agree that university groups should feel welcoming to those interested in non-longtermist causes, but it is perfectly possible to create this atmosphere without nixing key parts of the syllabus; I don't think the syllabus has much to do with creating it. Rockwell and freedomandutility (and others) have listed some great points on this, and I think the conversations you have (and how you have them) and the opportunities you share with your group could help folks be more cause-neutral.
One idea I liked was the "local expert" model, where you have members deeply exploring various cause areas. When there is a new member interested in cause X, you can simply redirect them to the member who has studied it or done internships related to that cause. If you have different "experts" spanning different areas, this could help maintain a broad range of interests in the club and feel welcoming to a broader range of newcomers.
"And if we give this content of weirdness plus the 'most important century' narrative to the wanna-be EAs we might lose people who could be EA if they had encountered the ideas with a time for digestion."
I think this assumes that people won't be put off by the weirdness by, let's say, week 1 or week 3. I could see situations where people would find caring about animals weirder than caring about future humans, or both of these weirder than pandemic prevention or global poverty reduction. I don't know what the solution is, except reminding people to be open-minded + critical as they go through the readings, and cultivating an environment where people understand that they don't have to agree with everything to be a part of the club.
A host of other reasons that I will quickly mention:
I don't think those three weeks of the syllabus you mention disproportionately represent a single framework: one can care about x-risk without caring about longtermism, or vice-versa, or both. There are other non-AI x-risks and longtermist causes that folks might be interested in, so I don't think that content is there just to generate more interest in AI Safety.
Internally, we (group organizers at my university) did feel the AI week was a bit much, so we made the career-related readings on AI optional. The logic was that people should learn about, for instance, why AI alignment could be hard with modern deep learning, but they don't need to read the 80K career profile on Safety if they don't want to. We added readings on s-risks and are considering adding pieces on AI welfare (undecided right now).
It is more honest to have those readings in the introductory syllabus: New members could be weirded out to see x-risk/longtermist/AI jobs on 80K or the EA Opportunity board and question why those topics weren't introduced in the Introductory Program.
I was also primarily interested in animal advocacy prior to EA, and now I am interested in a broader range of issues while maintaining (and refining) my interest in animal advocacy. I have also lost interest in some causes I initially thought were important. I think having an introductory syllabus with a broad range of ideas is important for such cross-pollination/updating and for a more robust career planning process down the line.
Anecdote: One of the comments that comes up in our group sometimes is that we focus too much on charities as a way of doing good (the first few weeks on cost-effectiveness, global health, donations, etc.). So, having a week on x-risk and sharing the message that "hey, you can also work for the government, help shape policy on bio-risks, and have a huge impact" is an important one not to leave out.
I agree. I was imagining too rigorous (and narrow) of a cause prioritization exercise when commenting.
I don't! I meant to say that students who have mental health concerns may find it harder to do cause prioritization while balancing everything else.
I gather the OP wants something that's more just an extension of "developing better ways of thinking and forming opinions" about causes, and not quashing people's organic critical reflections about the ideas they encounter.
I was unsure if this is what OP meant; if yes, then I fully agree.
First, I am sorry to hear about your experience. I am sympathetic to the idea that a high level of deference and lack of rigorous thinking is likely rampant amongst the university EA crowd, and I hope this is remedied. That said, I strongly disagree with your takeaways about funding and have some other reflections as well:
"Being paid to run a college club is weird. All other college students volunteer to run their clubs."
This seems incorrect. I used to feel this way, but I changed my mind after noticing that every "serious" club (i.e., any club wanting to achieve its goals reliably) on my campus pays students or hires paid interns. For instance, my university has a well-established environmental science ecosystem, and at least two of the associated clubs are supported via some university funding mechanism (this is now so advanced that they also do grantmaking for student projects ranging from a couple thousand dollars to a maximum of $100,000). I can also think of a few larger Christian groups on campus that do the same. Some computer science/data-related clubs also do this, but I might be wrong.
Most college clubs are indeed run on a volunteer basis. But most are run quite casually. There is nothing wrong with this; most of them are hobby-based clubs where students simply want to create a socially welcoming atmosphere for anyone who might be interested. They don't have weekly discussions, TA-like facilitation, socials/retreats, or, in some cases, research/internship programs. In this way, EA clubs are different because they aren't trying to be the "let's get together and have fun" club. I almost see university EA clubs as a prototype non-profit or a so-so-funded university department trying to run a few courses.
In passing, I should also mention that it is far more common for clubs to get funding for hosting events, outreach, buying materials, etc. My guess is that, in these cases, if more funding were available, students running those clubs would also get stipends.
"Getting paid to organize did not make me take my role more seriously, and I suspect that other organizers did not take their roles much more seriously because of being paid."
My experience has been the opposite of yours. Before getting paid, organizing felt like a distraction from more important things; there was always this rush to wrap up tasks; I enjoyed organizing but always felt somewhat guilty for spending time on it. These feelings vanished after getting funded. I (at least) doubled the amount of time I spent on the club, got more exposed to EA, got more organized with meetings/deadlines, and now feel a sense of responsibility to run this project the best I can.
Turn the University Group Organizer Fellowship into a need-based fellowship.
I am uncertain about this. I think a better and simpler heuristic is that if people are working diligently for x hours a week, then they should be funded for their labor.
"If the University Group Organizer Fellowship exit survey indicates that funding was somewhat helpful in increasing people's commitment to quality community building, then reduce funding..."
I agree with this. The funding given out could be somewhat reduced while remaining just as impactful as it is now, but I am keen to see the results of the survey.
"I am very concerned with just how little cause prioritization seems to be happening at my university group."
At least for university groups, maybe this is the wrong thing to be concerned about. It would be better if students could do rigorous cause prioritization, but I think for most, this would be quite challenging, if not virtually impossible.
The way I see it, most university students are still in the formative stages of figuring out what they believe in and their reasons for doing so. Almost all are in the active process of developing their identity and goals. Some have certain behavioral traits that prevent them from exploring all options (think of the shy person who later went on to become a communicator of some sort). All this is sometimes exacerbated by mental health problems or practical concerns (familial duties, the need to be financially stable, etc.).
Expecting folks from this age group to perform cause prioritization is a high bar. I am sure some can do it, but I wouldn't have been able to. Instead, I think it'd be better if university EA groups helped their members understand how to make the best possible bet at the moment to have a pathway to impact. For instance, I hope that most students who go through the fellowship:
- Develop better ways of thinking and forming opinions
- Be more open-minded / have a broad sphere of concern
- Take ideas seriously and act on them (usually by building career capital)
- Play the long game of having a high-impact career
Now, this likely doesn't happen to the best possible degree. But I think that all this and more, in combination, would help most in refining their cause prioritization over the years and setting themselves up to have a rewarding and impactful career.
Maybe this is what you meant when you were expressing your concerns, in which case, sorry for the TED talk and I wholeheartedly agree.
About point 4: While commenting, I presumed the controversial bit was "let's build bunkers only for EAs." Reading other comments, however, it seems that maybe I misunderstood something, because there is more focus on the "let's build bunkers" part and not as much on the latter.
The idea of building bunkers is somewhat out there but not uncommon; governments have done it nationally at least once, and an active community of preppers does it now. In the event of a catastrophe, I would appreciate having access to a bunker, and I am sure so would others.
Making them only for EAs implies the (utterly wrong) idea that, in the event of a catastrophe, EAs are somehow more valuable and worthy of saving than non-EAs. This goes against some core ideas that we aim to cultivate.
...whether this is a healthy line of thinking...
Absolutely not healthy!
...and something we're glad the public knows about us now.
Forget the public! This is something I didn't know about "us" until now (and plausibly, 99% of the EA community didn't either).
The memo is bad because would-have-been top funders were floating the idea of preferentially helping the in-group (and "helping" is an understatement here). At the same time, I expect plenty of guilt-by-association critiques to spring from this that will place blame on the entire community :(
I skimmed through the article; thanks for sharing!
Some quick thoughts:
community-members are fully aware that EA is not actually an open-ended question but a set of conclusions and specific cause areas
The cited evidence here is one user claiming this is the case; I think they are wrong. For example, if there were a dental hygiene intervention that could help, let's say, a hundred million individuals, and government / other philanthropic aid were not addressing this, I would expect a CE-incubated charity to jump on it immediately.
There are other places where the author makes what I would consider sweeping generalizations or erroneous inferences. For instance:
"...given the high level of control leading organizations like the Centre for Effective Altruism (CEA) exercise over how EA is presented to outsiders": The evidence cited here is mostly all the guides that CEA has made, but I don't see how this translates to a "high level of control." EAs and EA organizations don't have to adhere to what CEA suggests.
"The general consensus seems to be that re-emphasizing a norm of donating to global poverty and animal welfare charities provides reputational benefits...": upvotes to a comment ≠ general consensus.
Table 1, especially the Cause neutrality section, seems to draw a dividing line where one doesn't exist.
The author acknowledges in the Methodology section that they didnât participate in EA events or groups and mainly used internet forums to guide their qualitative study. I think this is the critical drawback of this study. Some of the most exciting things happen in EA groups and conferences, and I think the conclusion presented would be vastly different if the qualitative study included this data point.
I don't know what convinces the article's author to imply that there is some highly coordinated approach to funnel people into the "real parts of EA." If this is true (and here is my tongue-in-cheek remark), I would suggest these core people not spend >50% of the money on global health, as there could be cheaper ways of maintaining this supposed illusion.
Overall, I like the background research done by the author, but I think the author's takeaways are inaccurate and seem too forced. At least to me, the conclusion is reminiscent of the discourse around conspiracies such as the deep state or the "plandemic," where there is always a secret group, a "they," advancing their agenda while puppeteering tens of thousands of others.
Much more straightforward explanations exist, which aren't entertained in this study.
EA is more centralized than most other movements, and it would be ideal to have several big donors with different priorities and worldviews. However, EA is also more functionally diverse and consists of some ten thousand folks (and growing), each of whom is a stakeholder in this endeavor and will collectively define the movementâs future.
Thanks for writing this piece! This motivates me to rescue a draft about "how to eat more plants and do it successfully" that has been in the works for too long. Hopefully, I will complete it soon-ish; fingers crossed!
But briefly: His argument, as I understand it, boils down to the idea that he needs to eat animals in order to be fit, strong, and healthy.
I had similar concerns before going vegan. It didn't take me that long to realize that killing, consuming, and using animals the way we do is morally abhorrent. The environmental and public health arguments against intensive farming were easier to buy into. But I was unsure if I could sustain a healthy life and build muscle without eating non-humans.
I was getting into strength training back then, and I really wanted to build muscle and not have a scrawny figure anymore. Nearly all the jacked influencers on social media/YT promoted a meat-heavy diet; chicken breast and whey protein seemed like the necessary ingredients for getting lean and building muscle; vegan food was often labeled as rabbit food and thoroughly dismissed. Another subset of folks attracted my attention: people who stopped being vegan. The severity of the health problems they claimed to have experienced while eating plants was alarming.
All this made me pretty hesitant to adopt a plant-only diet. I won't spend much space in this comment elaborating on how I escaped the jacked-influencer memeplex or what made me skeptical of the alleged severe harms of a plant-based diet, but I am glad I did. In one line: I realized that being buff had little to do with eating or not eating a plant-based diet.
I have been vegan for three years now, and I have been able to:
Build muscles and strength and gain weight
Retain muscles and most of my strength and lose weight
Retain most of my muscles while not exercising at all for months
I am not as jacked as you, but I am in good shape and health and pretty happy about it! At my best, I made tracking calories, nutrient intake, and strength training progress a habit. It seemed like a simple math problem, and the results were pretty deterministic. I think I would have had similar success with a plant-predominant or meat-focused diet.
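To illustrate what I mean by a "simple math problem": below is a minimal sketch of the calorie-balance arithmetic, assuming the common rule of thumb of roughly 7700 kcal per kg of body mass; the maintenance and intake numbers are made up for illustration, not dietary advice.

```python
# Minimal sketch of calorie-balance arithmetic. Assumes a constant daily
# surplus/deficit and the rough ~7700 kcal per kg rule of thumb; numbers
# below are illustrative, not recommendations.

KCAL_PER_KG = 7700  # approximate energy content of 1 kg of body mass

def weekly_weight_change_kg(daily_intake_kcal: float,
                            maintenance_kcal: float) -> float:
    """Estimate weekly weight change from a constant daily surplus/deficit."""
    daily_surplus = daily_intake_kcal - maintenance_kcal
    return daily_surplus * 7 / KCAL_PER_KG

# Example: eating 2800 kcal/day against a 2500 kcal/day maintenance
# (a +300 kcal/day surplus) predicts roughly +0.27 kg/week; flip the
# sign of the surplus for a cut.
print(f"{weekly_weight_change_kg(2800, 2500):+.2f} kg/week")
```

The point of the sketch is just that, once you track intake and progress, bulking and cutting reduce to adjusting one number, regardless of whether the calories come from plants or meat.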
Overall, I would say my experience has been "normal," and I would recommend a plant-based diet to the vast majority of people who want to get bigger or be in better shape.
(N=3 now!)
Hello from another group organizer in the Southwest! We are in Tucson, AZ, just a six-hour drive away. Hopefully, someday in the not-so-far future, organizing a southwestern meetup / retreat / something will be feasible and super cool!
Minor nitpick: "NEOs (objects smaller than asteroids)"
The definition of NEOs here seems wrong. Wouldn't it be more accurate to call them "tiny NEOs"? The current definition makes it sound as if asteroids aren't NEOs, when in fact most NEOs are asteroids.