I am a third-year grad student, now studying Information Science, and I am hoping to pursue full-time roles in technical AI Safety from June '25 onwards. I am spending my last semester at school working on an AI evaluations project and pair programming through the ARENA curriculum with a few others from my university. Before making a degree switch, I was a Ph.D. student in Planetary Science where I used optimization models to physically characterize asteroids (including potentially hazardous ones).
Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism, the EA university group at the University of Arizona. If you are a movement builder, let's get in touch!
Career-wise, I am broadly interested in x/s-risk reduction and earning-to-give for animal welfare. Always happy to chat about anything EA!
akash
I don't disagree with this at all. But does this mean that blame can be attributed to the entire EA community? I think not.
Re mentorship/funding: I doubt that his mentors were hoping that he would accelerate the chances of an arms race conflict. As a corollary, I am sure nukes wouldn't have been developed if the physics community of the 1930s hadn't existed, had mentored different people, or had adopted better ethical norms. Even if they had done the latter, it is unclear whether that would have prevented the creation of the bomb.
(I found your comments under Ben West's posts insightful; if true, they highlight a divergence between the beliefs of the broader EA community and those of certain influential EAs in DC and AI policy circles.)
Currently, it is just a report, and I hope it stays that way.
And we contributed to this.
What makes you say this? I agree that it is likely that Aschenbrenner's report was influential here, but did we make Aschenbrenner write chapter IIId of Situational Awareness the way he did?
But the background work predates Leopold's involvement.
Is there some background EA/aligned work that argues for an arms race? Because the consensus seems to be against starting a great power war.
Which software/application did you use to create these visualizations?
"but could be significant if the average American were to replace the majority of their meat consumption with soy-based products."
Could you elaborate on how you conclude that the effects of soy isoflavones could be significant if consumption were higher?
I read this summary article from the Linus Pauling Institute a while ago and concluded, "okay, isoflavones don't seem like an issue at all, and in some cases might have health benefits" (and this matches my experience so far).[1] The relevant section from the article:
Male reproductive health
Claims that soy food/isoflavone consumption can have adverse effects on male reproductive function, including feminization, erectile dysfunction, and infertility, are primarily based on animal studies and case reports (181). Exposure to isoflavones (including at levels above typical Asian dietary intakes) has not been shown to affect either the concentrations of estrogen and testosterone, or the quality of sperm and semen (181, 182). Thorough reviews of the literature found no basis for concern but emphasized the need for long-term, large scale comprehensive human studies (181, 183).
Unless there is some new piece of information that moderately or strongly suggests that isoflavones do have feminizing effects, this seems like a non-issue.
[1] A personal anecdote, not that it bears much weight: I have been consuming >15 ounces of tofu and >250 ml of soy milk nearly every day for the last four years, and I have noticed that how "feminine" or "masculine" my body looks depends almost entirely on how much weight I lift in a week and on my nutritional intake, rather than on my soy intake.
A few quick pushbacks/questions:
I don't think the perceived epistemic strength of the animal welfare folks in EA should have any bearing on this debate unless you think that nearly everyone running prominent organizations like the Good Food Institute, Faunalytics, The Humane League, and others is not truth-seeking (i.e., animal welfare organizations are culturally not truth-seeking and consequently have shoddy interventions and goals).
To what extent do you think EA funding should be allocated based on broader social perception? I think we should near-completely discount broader social perceptions in most cases.
The social perception point, which has been brought up by others, is confusing because animal welfare has broad social support. The public is negatively primed towards veganism but overwhelmingly positively primed towards the general idea of not being unkind to (to use a euphemism) farm animals.
"Going all-in on animal welfare at the expense of global development seems bad for the movement." I don't think this is being debated here, though. Could you elaborate on why you think that if an additional $100 million were allocated to Animal Welfare, it would be at the expense of Global Health & Development (GHD)? Isn't $100 million a mere fraction of the yearly GHD budget?
Causing unnecessary suffering is morally bad. Causing intense unnecessary suffering is morally worse.
Non-humans have the capacity to physically and psychologically suffer. The intensity of suffering they can experience is non-negligible, and plausibly, not that far off from that of humans. Non-humans have a dispreference towards being in such states of agony.
Non-human individuals are in constant and often intense states of agony in farmed settings. They also live short lives, sometimes less than 1/10th of their natural lifespan, which leads to a loss of the welfare they would have experienced had they been allowed to live to old age.
The scale of farmed animal suffering is enormous beyond comprehension: counting only land animals, the number is around 100 billion; including crustaceans and fish, it is close to 1,000 billion; and including insects, it is in the several thousands of billions. Nearly all of these animals have lives not worth living.
The total amount spent per unit of suffering experienced is arguably more than a thousand times lower for non-humans than for humans. This seems unreasonable given the vast number of individuals who suffer in farmed settings. Doing a quick and dirty calculation, and only considering OpenPhil funding, we get ~$1 spent per human and ~$0.0003 spent per non-human individual (a rough, illustrative version of this calculation is sketched after this list). Including non-EA funding in this estimate would make the discrepancy even worse.
We are nowhere close to reducing the number of non-humans in farmed settings. Meat consumption is predicted to rise by 50% in the next three decades, which would drastically increase the number of farmed animals living short, agony-filled lives. We also haven't yet had a breakthrough in cultivated meat, and if the Humbird report is to be believed, we should be skeptical of any such breakthroughs in the near future (if anything, we are seeing the first wave of cultivated meat bans, which may delay the transition to animal-free products).
Reducing farm animal suffering via policy, advocacy, and the development of alternative proteins is tractable and solvable (for the last of these, we may need moonshot projects, which may imply raising even more funding).
Therefore, the additional $100 million is better spent on animal welfare than global health.
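For what it's worth, here is a minimal sketch of how a per-individual figure like the ~$1 vs. ~$0.0003 comparison above could be derived. All funding and population inputs below are illustrative assumptions I am plugging in for the sake of the example, not figures from the comment or any particular source:

```python
# Rough back-of-the-envelope sketch of per-individual spending.
# NOTE: every input figure here is an assumption chosen for illustration;
# only the shape of the calculation is the point.

human_health_funding_usd = 300e6    # assumed ~$300M/year towards global health & development
animal_welfare_funding_usd = 30e6   # assumed ~$30M/year towards farmed animal welfare

humans_in_target_population = 300e6  # assumed ~300 million people reached
farmed_animals = 100e9               # assumed ~100 billion farmed land animals

spend_per_human = human_health_funding_usd / humans_in_target_population
spend_per_animal = animal_welfare_funding_usd / farmed_animals

print(f"~${spend_per_human:.2f} spent per human")          # ~$1.00
print(f"~${spend_per_animal:.4f} spent per farmed animal")  # ~$0.0003
```

Under these assumed inputs, the per-individual spending gap comes out at roughly three to four orders of magnitude, which is the kind of discrepancy the point above is gesturing at.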
This was an April Fools' Day post, so it shouldn't be taken that seriously!
I think that a human being in a constant blissful state might endanger someone's existence or make them non-functional
But if pure suffering elimination were the only thing that mattered, no one would be endangered, right? I am guessing there are some other factors you account for when valuing human lives?
which isn't much of an issue for a farm animal.
I suspect we have very different ethical intuitions about the intrinsic value of non-human lives.
But even from an amoral perspective, this would be an issue because if a substantial number of engineered chickens pecked each other to death (which happens even now), it would reduce profitability and uptake of this method.
The second-order considerations are definitely a problem once there is more widespread adoption. If only 0.001% of the population is using genetic enhancement, there is very little in the way of collective action problems.
I partially agree, but even a couple of malevolent actors who enhance themselves considerably could cause large amounts of trouble. See this section of Reducing long-term risks from malevolent actors.
If it is indeed possible to modify animal minds to such an extent that we would be 100% certain that previously displeasing experiences are now blissful, then couldn't we extend this logic and "solve" every single problem? Like, making starvation and poverty and disease and extinction blissful as well?
I feel there are crucial moral and practical (e.g., 2nd order effects) considerations to account for here.
Fascinating: I skimmed his Wikipedia page and this video, and I think he is 100% serious. He even wrote a paper with Sandberg and Roache arguing the same.
I posted this because it is an inside joke at our university group, but I appreciate that some professional philosophers have given it a more serious treatment.
Such rich literature! I think the major flaw in their methodology is the lack of coordinated, incremental scaling (which seems to be the reason the test subject faced quite a bit of trouble). That said, it still reinforces the arguments of the proposal above, so thank you for sharing these!
Tiny humans: the most promising new cause candidate?
I was skeptical, and then I saw the menu.
If Dustin wants to further diversify his investment portfolio, this might be a great choice.
David Nash's Monthly Overload of Effective Altruism seems highly underrated, and you should most probably give it a follow.
I don't think any other newsletter captures and highlights EA's cause-neutral impartial beneficence better than the Monthly Overload of EA. For example, this month's newsletter has updates about Conferences, Virtual Events, Meta-EA, Effective Giving, Global Health and Development, Careers, Animal Welfare, Organization updates, Grants, Biosecurity, Emissions & CO2 Removal, Environment, AI Safety, AI Governance, AI in China, Improving Institutions, Progress, Innovation & Metascience, Longtermism, Forecasting, Miscellaneous causes and links, Stories & EA Around the World, Good News, and more. Compiling all this must be hard work!
Until September 2022, the monthly overloads were also posted on the Forum and received higher engagement than the Substack. I find the posts super informative, so I am giving the newsletter a shout-out and putting it back on everyoneās radar!
What do you think is the reason behind such major growth? What are they doing differently that GWWC or other EA orgs could adopt?
I think it would have been better if you distilled your responses; much of the 80K career sheet is trying to guide you towards next steps and clarify your priorities and preferences, so the initial set of questions may be kind of redundant. The post right now is kind of hard to parse.
If I had to guess, this may be the reason behind the downvotes, although I am unsure.
I see somewhere around 4-6 career directions right now. Since you have a few years of financial runway and since you stated that "Exploration. I don't know what I'm going to do as a career," it might be worth meticulously planning out the next 6-12 months to explore the different options you are considering.
SWE: do you have prior coding experience? If yes, how did you like programming and how good were you at it? If not, have you looked into short programs that will help you learn the basics of programming quickly and also gauge whether you enjoy it and are adept at it?
Being a SWE is more than being a programmer, but programming is a necessary first step.
Safety: Are you interested in technical safety? If yes, do you enjoy programming, math, and research to a considerable degree? Are you also open to policy/governance roles? What about being an operations person at a safety org?
Journalism: Do you have prior experience with, and do you enjoy, research and writing? If not, maybe writing some sample pieces and getting feedback from friends/strangers who will be blunt about the quality and depth of your writing would help.
Landlord/personal trainer/psychology: These might be the easiest for you given your financial situation and because you already have relevant work experience. That said, since effective giving will be your primary pathway to impact in this case:
It would be worth spending lots of time learning about effective giving,
Choosing which causes/interventions you want to donate to, and
Maximizing the amount of money you can donate.
How did I miss this update? Either way, thank you for sharing!
What happened to US Policy Careers?
They had several in-depth, informative articles. Shame if they are off the Forum and there is no way to access them.
I honestly don't know. When I think of an arms race, I typically think of rapid manufacturing and accumulation of "weapons."
Do you think export controls between two countries are a sufficient condition for an arms race?