I am a third-year grad student, now studying Information Science, and I am hoping to pursue full-time roles in technical AI Safety from June '25 onwards. I am spending my last semester at school working on an AI evaluations project and pair programming through the ARENA curriculum with a few others from my university. Before making a degree switch, I was a Ph.D. student in Planetary Science, where I used optimization models to physically characterize asteroids (including potentially hazardous ones).
Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism, the EA university group at the University of Arizona. If you are a movement builder, let's get in touch!
Career-wise, I am broadly interested in x/s-risk reduction and earning-to-give for animal welfare. Always happy to chat about anything EA!
Another place people could be directed for career advice: https://probablygood.org/
Since last semester, we have made career 1-on-1s a mandatory part of our introductory program.
This semester, we will have two 1-on-1s:
- The first one will be a casual conversation where the mentee and mentor get to know each other.
- The second one will be more in-depth: we share this 1-on-1 sheet (shamelessly poached from 80K), the mentee fills it out before the meeting and has a ≤1-hour conversation with a mentor of their choice, and post-meeting, the mentor adds further resources to the sheet that may be helpful.
The advice we give during these sessions ends up being broader than just the top EA causes, although we are most helpful in cases where:
- someone is curious about EA/adjacent causes
- someone has graduate-school-related questions
- someone wants general "how to best navigate college, plan for internships, etc." advice
Do y'all have something similar set up?
Makes sense. Just want to flag that tensions like these emerge because 80K is simultaneously a core part of the movement and also an independent organization with its own goals and priorities.
Upvoted and I endorse everything in the article barring the following:
> If you are reasonably confident that what you are doing is the most effective thing you can do, then it doesn't matter if it fully solves any problem
I think most people in PlayPump-like non-profits, and most individuals who are doing something, feel reasonably confident that their actions are as effective as they could be. Prioritization is not taken seriously, likely because most haven't entertained the idea that the difference in impact between the median and the most impactful interventions might be huge. On a personal level, I think it is more likely than not that people often underestimate their potential, are too risk-averse, and do not sufficiently explore all the actions they could take and all the ways their beliefs may be wrong. IMO, even if you are "reasonably confident that what you are doing is the most effective thing you can do," it is still worth exploring and entertaining alternative actions you could take.
From the perspective of someone who thinks AI progress is real and might happen quickly over the next decade, I am happy about this update. Barring Ezra Klein and the Kevin guy from NYT, the majority of mainstream media publications are not taking AI progress seriously, so hopefully this brings some balance to the information ecosystem.
From the perspective of "what does this mean for the future of the EA movement," I feel somewhat negative about this update. Non-AIS people within EA are already dissatisfied with the amount of attention, talent, and resources dedicated to AIS, and I believe this will only heighten that feeling.
I love this write-up. Re point 2: I sincerely think we are in the golden age of media, at least in ~developed nations. There has never before been a time when any random person could make music, write up their ideas, or shoot an independent film and make a living out of it! The barrier to entry is so much lower, and there are typically no unreasonable restrictions on the type of media we can create (I am sure medieval churches wouldn't have been fans of heavy metal). If we don't mess up our shared future, all this will only get better.
Also, I feel this should have been a full post and not a quick note.
> At Anthropic's new valuation, each of its seven founders – CEO Dario Amodei, president Daniela Amodei and cofounders Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish and Christopher Olah – are set to become billionaires. Forbes estimates that each cofounder will continue to hold more than 2% of Anthropic's equity each, meaning their net worths are at least $1.2 billion.
I don't know if any of the seven co-founders practice effective giving, but if they do, this is welcome news!
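As a quick sanity check of the quoted figure, here is a minimal sketch of the arithmetic; the implied valuation below is derived from the quote's own numbers, not taken from an independent source:

```python
# Back-of-the-envelope check of the quoted Forbes figure.
# Both inputs come straight from the quote above; the valuation is implied, not sourced.

min_net_worth = 1.2e9  # quoted floor on each cofounder's net worth ($1.2B)
min_stake = 0.02       # "more than 2% of Anthropic's equity each"

implied_valuation = min_net_worth / min_stake
print(f"Implied valuation: ${implied_valuation / 1e9:.0f}B")  # ≈ $60B
```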
(Tangential but related) There is probably a strong case to be made for recruiting the help of EA-sympathetic celebrities to promote effective giving, and maybe even raise funds. I am a bit hesitant about "cause promotion" by celebrities, but maybe some version of that idea is also defensible. Turns out, someone wrote about it on the Forum a few years ago, but I don't know how much subsequent discussion there has been on this topic since then.
I don't disagree. I was simply airing my suspicion that most group organizers who applied for the OP fellowship did so because they thought something akin to "I will be organizing for 8-20 hours a week and I want to be incentivized for doing so" (which is a perfectly okay and valid reason) rather than "I am applying to the fellowship because I will not be able to sustain myself without the funding."
In cases where people need to make trade-offs between taking some random university job vs. organizing part time, assuming that they are genuinely interested in organizing and that the university has potential, I think it would be valuable for them to get funding.
Random idea: a yearly community retreat or a mini-conference for EtG folks?
I would be interested to see what proportion of group organizers request funding primarily due to difficult financial situations. My guess would be that this number is fairly small, but I could be wrong.
I agree with so much here.
Here are my responses to the question you raised: "So why do I feel inclined to double down on effective altruism rather than move on to other endeavours?"
I have doubled down a lot over the last ~1.5 years. I am not at all shy about being an EA; it is even on my LinkedIn!
This is partly for integrity and honesty reasons. Yes, I care about animals and AI and like math and rationality and whatnot. All this is a part of who I am.
Funnily enough, a non-negligible reason why I have doubled down (and am more pro-EA than before) is the sheer quantity of not-so-good critiques. And critics keep publishing them.
Another reason is that there are bizarre caricatures of EAs out there. No, we are not robotic utility maximizers. In my personal interactions, people hopefully realize, "okay, this is just another feel-y human with a bunch of interests who happens to be vegan and feels strongly about donations."
Re "I have personally benefited massively in achieving my own goals": I hope this experience is more common!
I feel EA/adjacent community epistemics have enormously improved my mental health and decision-making; being in the larger EA-sphere has improved my view of life; I have more agency; I am much more open to new ideas, even those I vehemently disagree with; and I am much more sympathetic to value and normative pluralism than before!
I wish more everyday EAs were louder about their EA-ness.
Related Q: is there a list of EA media projects that you would like to see but that currently do not exist?
I honestly don't know. When I think of an arms race, I typically think of rapid manufacturing and accumulation of "weapons."
Do you think export controls between two countries are a sufficient condition for an arms race?
I don't disagree with this at all. But does this mean that blame can be attributed to the entire EA community? I think not.
Re mentorship/funding: I doubt that his mentors were hoping that he would accelerate the chances of an arms race conflict. As a corollary: would nukes not have been developed if the physics community in the 1930s didn't exist, had mentored different people, or had adopted better ethical norms? Even if they had done the latter, it is unclear whether that would have prevented the creation of the bomb.
(I found your comments under Ben West's posts insightful; if true, they highlight a divergence between the beliefs of the broader EA community and those of certain influential EAs in DC and AI policy circles.)
Currently, it is just a report, and I hope it stays that way.
> And we contributed to this.
What makes you say this? I agree that it is likely that Aschenbrenner's report was influential here, but did we make Aschenbrenner write chapter IIId of Situational Awareness the way he did?
> But the background work predates Leopold's involvement.
Is there some background EA/aligned work that argues for an arms race? Because the consensus seems to be against starting a great power war.
Which software/application did you use to create these visualizations?
> but could be significant if the average American were to replace the majority of their meat consumption with soy-based products.
Could you elaborate on how you conclude that the effects of soy isoflavones could be significant if consumption were higher?
I read this summary article from the Linus Pauling Institute a while ago and concluded, "okay, isoflavones don't seem like an issue at all, and in some cases might have health benefits" (and this matches my experience so far).[1] The relevant section from the article:
> Male reproductive health
> Claims that soy food/isoflavone consumption can have adverse effects on male reproductive function, including feminization, erectile dysfunction, and infertility, are primarily based on animal studies and case reports (181). Exposure to isoflavones (including at levels above typical Asian dietary intakes) has not been shown to affect either the concentrations of estrogen and testosterone, or the quality of sperm and semen (181, 182). Thorough reviews of the literature found no basis for concern but emphasized the need for long-term, large scale comprehensive human studies (181, 183).
Unless there is some new piece of information that moderately to strongly suggests that isoflavones do have feminizing effects, this seems like a non-issue.
[1] A personal anecdote, not that it bears much weight: I have been consuming >15 ounces of tofu and >250 ml of soy milk nearly every day for the last four years, and how "feminine" or "masculine" my body looks has been almost entirely dependent on how much weight I lift in a week and on my nutritional intake, rather than on my soy intake.
A few quick pushbacks/questions:
I don't think the perceived epistemic strength of the animal welfare folks in EA should have any bearing on this debate unless you think that nearly everyone running prominent organizations like the Good Food Institute, Faunalytics, the Humane League, and others is not truth-seeking (i.e., animal welfare organizations are culturally not truth-seeking and consequently have shoddy interventions and goals).
To what extent do you think EA funding should be allocated based on broader social perception? I think we should near-completely discount broader social perceptions in most cases.
The social perception point, which has been brought up by others, is confusing because animal welfare has broad social support. The public is negatively primed towards veganism but overwhelmingly positively so towards the general idea of not being unkind to (euphemism) farm animals.
Re "Going all-in on animal welfare at the expense of global development seems bad for the movement": I don't think this is what is being debated here, though. Could you elaborate on why you think that if an additional $100 million were allocated to animal welfare, it would come at the expense of Global Health & Development (GHD)? Isn't $100 million a mere fraction of the yearly GHD budget?
Causing unnecessary suffering is morally bad. Causing intense unnecessary suffering is morally worse.
Non-humans have the capacity to suffer physically and psychologically. The intensity of suffering they can experience is non-negligible and plausibly not that far off from that of humans. Non-humans have a dispreference for being in such states of agony.
Non-human individuals are in constant and often intense states of agony in farmed settings. They also live short lives, sometimes less than 1/10th of their natural lifespan, which means they lose the welfare they would have experienced had they been allowed to live to old age.
The scale of farmed animal suffering is enormous beyond comprehension: counting only land animals, around 100 billion are farmed; including crustaceans and fish, the number is close to 1,000 billion; and accounting for insects, it reaches several thousand billion. Nearly all of these animals have lives not worth living.
The total dollars spent per unit of suffering experienced is arguably more than a thousand times lower for non-humans than for humans. This seems unreasonable given the vast number of individuals who suffer in farmed settings. Doing a quick and dirty calculation, and only considering OpenPhil funding, we get ~$1 spent per human and ~$0.0003 spent per non-human individual (see the sketch after this list). Including non-EA funding in this estimate would make the discrepancy even worse.
We are nowhere close to reducing the number of non-humans in farmed settings. Meat consumption is predicted to rise by 50% in the next three decades, which would drastically increase the number of farmed animals living short, agony-filled lives. We also haven't yet had a breakthrough in cultivated meat, and if the Humbird report is to be believed, we should be skeptical of any such breakthroughs in the near future (if anything, we are seeing the first wave of cultivated meat bans, which may delay the transition to animal-free products).
Reducing farm animal suffering, via policy, advocacy, and the development of alternative proteins, is tractable and solvable (for the last item on the list, we may need moonshot projects, which may imply raising even more funding).
Therefore, the additional $100 million is better spent on animal welfare than global health.
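To make the quick and dirty calculation above explicit, here is a minimal sketch; the funding totals are illustrative assumptions chosen to be consistent with the ~$1 and ~$0.0003 figures, not audited OpenPhil numbers:

```python
# Rough per-individual funding comparison (all totals are illustrative assumptions).

ghd_funding_usd = 8e9   # assumed global health & development funding (~$8B)
humans = 8e9            # ~8 billion humans alive today

faw_funding_usd = 3e8   # assumed farm animal welfare funding (~$300M)
farmed_animals = 1e12   # ~1,000 billion farmed animals, incl. fish and crustaceans

per_human = ghd_funding_usd / humans           # ≈ $1 per human
per_animal = faw_funding_usd / farmed_animals  # ≈ $0.0003 per animal

print(f"Per human:  ${per_human:.2f}")
print(f"Per animal: ${per_animal:.4f}")
print(f"Discrepancy: {per_human / per_animal:,.0f}x")  # ≈ 3,333x
```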
From the update, it seems that:
- 80K's career guide will remain unchanged
  - I feel especially good about this, because the guide does a really good job of emphasizing the many approaches to pursuing an impactful career
  - n = 1 anecdotal point: during tabling early this semester, a passerby mentioned that they knew about 80K because a professor had prescribed one of the readings from the career guide in their course. The professor in question and the class they were teaching had no connection with EA, AI Safety, or our local EA group.
  - If non-EAs also find 80K's career guide useful, that is a strong signal that it is well-written, practical, and not biased toward any particular cause
  - I expect and hope that this remains unchanged, because we prescribe most of the career readings from that guide in our introductory program
- Existing write-ups on non-AI problem profiles will also remain unchanged
- There will be a separate AGI career guide
- But the job board will be more AI-focused
Overall, this tells me that groups should still feel comfortable sharing readings from the career guide and the other problem profiles, but should recommend the job board selectively, primarily to those interested in "making AI go well" or to mid/senior non-AI people. Probably Good has compiled a list of impact-focused job boards here, so that resource could be highlighted more often.