Robi Rahman
Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
What is maxevas? Couldn’t find anything relevant by googling.
Hope I’m not misreading your comment, but I think you might have voted incorrectly, as if the scale is flipped.
On the current margin, improving our odds of survival seems much more crucial to the long-term value of civilization. My reason for believing this is that there are some dangerous technologies which I expect will be invented soon, and are more likely to lead to extinction in their early years than later on. Therefore, we should currently spend more effort on ensuring survival, because we will have more time to improve the value of the future after that.
(Counterpoint: ASI is the main technology that might lead to extinction, and the period when it’s invented might be equally front-loaded in terms of setting values as it is in terms of extinction risk.)
stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID
I’m out of the loop, who’s this allegedly EA person who works at DOGE?
The idea of haggling doesn’t sit well with me or my idea of what a good society should be like. It feels competitive, uncooperative, and zero-sum, when I want to live in a society where people are honest and cooperative.
Counterpoint: some people are more price-sensitive than typical consumers, and really can’t afford things. If we prohibit or stigmatize haggling, society is leaving value on the table, in terms of sale profits and consumer surplus generated by transactions involving these more financially constrained consumers. (When the seller is a monopolist, they even introduce opportunities like this through the more sinister-sounding practice of price discrimination.)
I think EAs have the mental strength to handle diverse political views well.
No, I think you would expect EAs to have the mental strength to handle diverse political views, but in practice most of them don’t. For example, see this heavily downvoted post about demographic collapse by Malcolm and Simone Collins. Everyone is egregiously misreading it as being racist, or maybe just downvoting it because of vague right-wing associations they have with the authors.
If you don’t aim to persuade anyone else to agree with your moral framework and take action along with you, you’re not doing the most good within your framework.
(Unless your framework says that any good/harm done by anyone other than yourself is morally valueless and therefore you don’t care about SBF, serial killers, the number of people taking the GWWC pledge, etc.)
embrace of the “Meat-Eater Problem” inbuilt into both the EA Community and its core ideas
Embrace of the meat-eater problem is not built into the EA community. I’m guessing a large majority of EAs, especially the less engaged ones who don’t comment on the Forum, would not take the meat-eater problem seriously as a reason we ought to save fewer human lives.
I personally am on the side that thinks that current conclusions are probably overconfident and lacking in some very important considerations.
Can you give specifics? Any crucial considerations that EA is not considering or under-weighting?
I actually found it more persuasive that buying broilers from a reformed scenario seems to get you both a reduction in pain and a more climate-positive outcome
How did you conclude that? How are the broilers reformed to not be painful?
Wow, incredible that this has 0 agree votes and 43 disagree votes. EAs have had our brains thoroughly fried by politics. I was not expecting to agree with this but was pleasantly surprised at some good points.
Now that the election is over, I’d love to see a follow-up post on what will probably happen during the next administration, and what will be good and bad from an EA perspective.
I agree with your numbered points, especially that if your discount rate is very high, then a catastrophe that kills almost everyone is similar in badness to a catastrophe that kills everyone.
But one of the key differences between EA/LT and these fields is that we’re almost the only ones who think future people are (almost) as important as present people, and that the discount rate shouldn’t be very high. Under that assumption, the work done is indeed very different in what it accomplishes.
I don’t know what you mean by fields only looking into regional disasters. How are you distinguishing those investigations from the fields you mention, which the general public has heard of largely because a ton of academic and governmental effort has gone into them?
I’m skeptical that the insurance industry isn’t bothering to protect against asteroids and nuclear winter just because they think the government is already handling those scenarios. For one, any event that kills all humans is uninsurable, so a profit-motivated mitigation plan will be underincentivized and ineffective. Furthermore, I don’t agree that the government has any good plan to deal with x-risks. (Perhaps they have a secret, very effective, classified plan that I’m not aware of, but I doubt it.)
I saw a lot of criticism of the EA approach to x-risks on the grounds that we’re just reinventing the wheel, and that these already exist in government disaster preparedness and the insurance industry. I looked into the fields that we’re supposedly reinventing, and they weren’t the same at all, in that the scale of catastrophes previously investigated was far smaller, only up to regional things like natural disasters. No one in any position of authority had prepared a serious plan for what to do in any situation where human extinction was a possibility, even the ones the general public has heard of (nuclear winter, asteroids, climate change).
I went to a large event, and the organizers counted the number of attendees present and then ordered chicken for everyone’s meal. Unfortunately I didn’t have a chance to request a vegetarian alternative. What’s the most efficient way to offset my portion of the animal welfare harm, and how much will it cost? I’m looking for information such as “XYZ is the current best charity for reducing animal suffering, and saves chickens for $xx each”, but I’m open to donating to something that helps other animals—doesn’t necessarily have to be chickens, if I can offset the harm more effectively or do more good per dollar elsewhere.
Humans are just lexically worth more than animals? You would torture a million puppies for a century to protect me from stubbing my toe?
Reducing animal agriculture for the benefits to humans by reducing habitat destruction is a really roundabout and ineffective way to help humans.
If you want to help humans, you should do whatever most helps humans.
If you want to protect someone from climate change, you should do whatever most effectively mitigates the effects of climate change.
If you want to help animals for the sake of helping animals, you should do that.
But you shouldn’t decide that helping animals is better than helping humans on the grounds that helping animals also indirectly helps humans.
Animal welfare is extremely neglected compared to human philanthropy. (However, effective interventions receive only a small fraction of altruistic funding intended to help humans.)
I’m highly uncertain about counterfactuals and higher-order effects, such as changes in long-term human population and eating patterns due to accelerated global economic development.
Once you account for the fact that many other people are already donating to both charities, risk aversion doesn’t change the optimal choice from donating everything to a single charity to splitting your donation.
Given that both orgs already have many other donors, the best action for you to take is to give all of your donations to just one of the options (unless you are a very large donor).
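The argument above can be illustrated numerically. This is a minimal sketch with made-up numbers (the existing donation totals, cost-effectiveness figures, moral-weight credences, and the square-root utility function are all purely illustrative assumptions, not anything from the original comments): when other donors' giving dwarfs yours, even a risk-averse donor with concave utility over total good done maximizes expected utility with a corner allocation rather than a split.

```python
# Illustrative sketch with made-up numbers: once other donors' giving dwarfs
# yours, even a risk-averse (concave-utility) donor does best with a corner
# allocation, not a split.
import math

OTHERS = 1_000_000.0   # hypothetical existing donations to each charity ($)
BUDGET = 1_000.0       # your donation ($)
HUMAN_GOOD_PER_DOLLAR = 1.0   # hypothetical units of good per $ (human charity)
CHICKENS_PER_DOLLAR = 20.0    # hypothetical chickens helped per $ (animal charity)
# Uncertainty about the moral weight of a chicken relative to a human:
STATES = [(0.01, 0.5), (0.5, 0.5)]  # (moral weight, credence)

def expected_utility(alpha):
    """Expected sqrt-utility of total good, with fraction alpha to animals."""
    eu = 0.0
    for weight, credence in STATES:
        human_good = (OTHERS + (1 - alpha) * BUDGET) * HUMAN_GOOD_PER_DOLLAR
        animal_good = (OTHERS + alpha * BUDGET) * CHICKENS_PER_DOLLAR * weight
        eu += credence * math.sqrt(human_good + animal_good)
    return eu

# For this setup, the corner allocation beats every intermediate split:
splits = [i / 10 for i in range(11)]
best = max(splits, key=expected_utility)
assert best == 1.0  # all to the higher-expected-value charity
```

The intuition: your donation barely moves the overall "portfolio," so the variance-reduction benefit of splitting is negligible, while the expected-value cost of diverting money to the lower-EV charity is not.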
a portfolio approach does more good given uncertainty about the moral weight on animals
No, this is totally wrong. Whatever your distribution of credences of different possible moral weights of animals, either the global health charity or the animal welfare charity will do more good than the other, and splitting your donations will do less good than donating all to the single better charity.
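The linearity point can be made concrete with a toy calculation. All numbers here are illustrative assumptions (the credence distribution over moral weights and the per-dollar cost-effectiveness figures are invented for the example): because expected value is linear in the donation split, any split is a weighted average of the two corner options and can never beat the better corner.

```python
# Illustrative only: credences and cost-effectiveness numbers are made up.
# Expected value is linear in the split, so the optimum is always a corner:
# give everything to whichever charity has higher expected value per dollar.

# Possible moral weights of a chicken relative to a human, with credences.
moral_weights = {0.001: 0.5, 0.1: 0.3, 0.5: 0.2}  # weight -> credence

HUMAN_GOOD_PER_DOLLAR = 1.0        # hypothetical units of good per $ (human charity)
CHICKENS_HELPED_PER_DOLLAR = 20.0  # hypothetical chickens helped per $ (animal charity)

# Expected value per dollar of each option, averaging over the credences.
ev_human = HUMAN_GOOD_PER_DOLLAR
ev_animal = sum(w * cred for w, cred in moral_weights.items()) * CHICKENS_HELPED_PER_DOLLAR

def ev_of_split(alpha):
    """Expected value per dollar when a fraction alpha goes to the animal charity."""
    return alpha * ev_animal + (1 - alpha) * ev_human

# A split's EV is a weighted average of the corners, so it never beats
# the better corner, whatever your credence distribution.
best_corner = max(ev_animal, ev_human)
assert all(ev_of_split(a / 10) <= best_corner + 1e-12 for a in range(11))
```

Swapping in any other credence distribution changes which charity wins, but never makes an interior split optimal, since `ev_of_split` is linear in `alpha`.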
Ah yes I get it now. Thanks!