This comment is extremely good. I wish I could incorporate some of it into my comment since it hits the cognitive dissonance aspect far better than I did. It's near impossible to give significant moral weight to animals and still think it is okay to eat them.
Marcus Abramovitch 🔸
I think a lot of commenters are taking the "maximize" bit too literally. EAs are a bit on the neurotic side and like to take things literally, but colloquially, people understand that maximize doesn't mean maximize at all costs. I agree that maximization is perilous, but in everyday language, which is the language of the everyday people we are trying to appeal to, "maximize" doesn't mean doing so at all costs like maximizing a single function. When my basketball coach would tell me to score as many points as possible, I took it as a given that he didn't think I should hold the referees and the other team at gunpoint until they allowed me to score points easily, or do any number of other ridiculous actions. When a friend tells me to come as early as I can, they don't mean for me to floor the gas pedal from my current location.
A pledge summed up in a single sentence isn't going to have all the caveats and asterisks that EAs like to have when they speak precisely.
Can you maybe expand a bit more on why? I found out about EA when I was 23, and I wish I had found out about it when I was perhaps 16–17, or even earlier. It's obviously hard to know, but I think I would have made better and different choices on career path, study, etc., so it's advantageous to learn about EA earlier in life despite being far from making direct impact.
I also suspect, though correct me if I'm wrong, that behind point 1 is an assumption that EA is bad for people's personal welfare. I don't know if this is true.
I listed them in descending order of importance. I might be confused for one of those "hyper-rationalist" types in many instances. I think rationalists undervalue the cognitive dissonance. In my experience, a lot of rationalists just don't value non-human animals. Even rationalists behave in a much more "vibes"-based way than they'd have you believe. It really is hard to hold in your head both "it's okay to eat animals" and "we can avert tremendous amounts of suffering for hundreds of animals per dollar and have a moral compulsion to do so".
I also wouldn't call what I do virtue signaling. I never tell people outright, and I live in a very conservative part of the world.
My reasons for being vegan have little to do with the direct negative effects of factory farming. They are, in roughly descending order of importance:
1. A constant reminder to myself that non-human animals matter. My current day-to-day activities give me nearly no reason to think about the fact that non-human animals have moral worth. This is my 2–5 times per day reminder of this fact.
2. Reduction of cognitive dissonance. It took about a year of being vegan to begin to appreciate, viscerally, that animals have moral worth. It's hard to quantify this, but it is tough to think that animals have moral worth when you eat them a few times a day. This has flow-through effects on donations, cause prioritization, etc.
3. The effect it has on others. I'm not a pushy vegan at all. I hardly tell people, but every now and then people notice and ask questions about it.
4. Solidarity with non-EAA animal welfare people. For better or worse, outside of EA, this seems to be the price of entry to be seen as taking the issue seriously. I want to be able to convince them to donate to THL over a pet shelter, to SWP over dog rescue charities, and to the EA AWF over Pets for Vets. They are more likely to listen to me when they see me as one of them who just happens to be doing the math.
5. Reducing the daily suffering that I cause. It's still something, even though it pales in comparison to my yearly donations; it is me living in accordance with my values and causing less suffering than I would otherwise.
I basically think so, yes. I think it is mainly caused by, as you put it, "the amount of money from six-figure donations was nonetheless dwarfed by Open Philanthropy", and therefore people have scaled back/stopped since they don't think it's impactful. I basically don't think that's true, especially in this case of animal welfare, but also just in terms of absolute impact, which is what actually matters as opposed to relative impact. FWIW, this is the same (IMO fallacious) argument "normies" have against donating: "my potential donations are so small compared to billionaires/governments/NGOs that I may as well just spend it on myself".
But yes, many of the people I know who would consider themselves effective altruists, even committed effective altruists who earn considerable salaries, donate relatively little, at least compared to what they could be donating.
I'll take a crack at some of these.
On 3, I basically don't think this matters. I hadn't considered it, largely because it seems super irrelevant. It matters far more if any individual people shouldn't be there or some individuals should be there who aren't. AFAICT, without much digging, they all seem to be doing a fine job, and I don't see the need for a male/POC, though feel free to point out a reason. I think nearly nobody feels they have a problem to report and then, upon finding out that they are reporting to a white woman, feels they can no longer do so. I would really hate to see EA become a place where we are constantly fretting and questioning the demographic makeup of small EA organizations to make sure that they have enough of all the traits. It's a giant waste of time, energy, and other resources.
On 4, this is a risk with basically all nonprofit organizations. Do we feel AI safety organizations are exaggerating the problem? How about SWP? Do you think they exaggerate the number of shrimp or how likely they are to be sentient? How about GiveWell? Should we be concerned about their cost-effectiveness analyses? It's always a question to ask, but usually a concern would come with something more concrete or a statistic. For example, the charity Will MacAskill talks about in the UK that helps a certain kind of Englishperson who is statistically ahead (though I can't remember if this is Scots or Irishmen or another group).
On 7, university groups are limited in resources. Very limited. Group organizing is almost always done part-time while managing a full-time courseload and working on their own development, among other things, so they focus on their one comparative advantage, recruitment (since it would be difficult for others to do that), and outsource the training to other places (80k, MATS, etc.).
On 10, good point; I would like to see some movement within EA to increase the intensity.
On 11, another good point. I'd love to read more about this.
On 12, another good point, but this is somewhat how networks work, unfortunately. There are just so many incentives for hubs to emerge and then to have a bunch of gravity. It kind of started in the Bay Area, and then, for individual actors, it nearly always makes sense to be around there, and then there is a feedback loop.
@Greg_Colbourn, while I disagree on Pause AI and the beliefs that lead up to it, I want to commend you for:
1) Taking your beliefs seriously.
2) Actually donating significant amounts. I don't know how this sort of fell off as a thing EAs do.
Unfortunately, a lot of the organizations listed are very cheap. For example, I don't want to be too confident, but I think that Arthropoda is nearly certainly going to have <$200k.
Actually, I'm uncertain if pausing AI is a good idea, and I wish the Pause AI people had a bit more uncertainty (on both their "p(doom)" and on whether pausing AI is a good policy) as well. I look at people who have 90%+ p(doom) as, at the very least, uncalibrated, the same way I look at the people who are dead certain that AI is going to go positively brilliantly and that we should be racing ahead as fast as possible. It's as if both of them aren't doing any/enough reading of history. In the case of my tribe…
I would submit that this kind of protesting, including/especially the example you posted, makes your cause seem dumb/unnuanced/ridiculous to the onlookers who are indifferent/know little.
Last, I was just responding to the prompt "What are some criticisms of PauseAI?". It's not exactly the place for a "fair and balanced view", but also, I think it is far more important to critique your own side than the opposite side, since you speak the same language as your own team, so they will actually listen to you.
Correct, I potentially misremembered. The actual things they definitely say, at least in this video, are "OpenAI sucks! Anthropic sucks! Mistral sucks!" and "Demis Hassabis, reckless! Dario Amodei, reckless!"
I would submit that I am at the very least directionally correct.
I don't think there is a need for me to show the relationship here.
2–3. https://youtu.be/T-2IM9P6tOs?si=uDiJXEqq8UJ63Hy2 is the video that came up as the first search result when I searched "pause ai protest" on YouTube. In it, they chant things like "OpenAI sucks! Anthropic sucks! Mistral sucks!" and "Demis Hassabis, reckless! Dario Amodei, reckless!"
I agree that working on safety is a key moral priority. But working on safety looks a lot more like the things I linked to in #3. That's what doing work looks like.
This seems to be what a typical protest looks like. I've seen videos of others. I consider these to be juvenile and unserious, and unlikely to build the necessary bridges to accomplish outcomes. I'll let others form their own opinions.
1. Pausing AI development is not a good policy to strive for. Nearly all regulations will slow down AI progress; that's what regulation does by default. It makes you slow down by having to do other stuff instead of just going forward. But a pause gets no additional benefit, whereas most other regulation gets additional benefit (like a model registry, a chip registry, mandatory red teaming, dangerous-model capability evals, model weights security standards, etc.). I don't know what the ideal policies are, but it doesn't seem like a "pause" with no other asks is the best one.
2. Pausing AI development for any meaningful amount of time is incredibly unlikely to occur. They will claim they are shifting the Overton window, but frankly, they mainly seem to do a bunch of protesting where they do stuff like call Sam Altman and Dario Amodei evil.
3. Pause AI, the organization, does, frankly, juvenile stunts that make EA/AI safety advocates look less serious. Screaming that people are evil is extremely unnuanced, juvenile, and very unlikely to build the necessary bridges to really accomplish things. It makes us look like idiots. I think EAs too often prefer to do research from their laptops as opposed to getting out into the real world and doing things; but doing things doesn't just mean protesting. It means crafting legislation like SB 1047. It means increasing the supply of mech interp researchers by training them. It means lobbying for safety standards on AI models.
4. Pause AI's premise is very "doomy" and only makes sense if you have extremely high AI extinction probabilities and think the only way to prevent extinction is an indefinite pause to AI progress. Most people (including those inside of EA) have far less confidence in how any particular AI path will play out and are far less confident in what will/won't work and what good policies are. The Pause AI movement is very "soldier" mindset and not "scout" mindset.
I'd like to see far more of EA's budget going towards animal welfare, in particular to the most numerous and neglected beings, invertebrates. This puts Arthropoda above SWP at the top, since the former is more neglected and urgent. After that, the EA AWF has good insight into the EAA movement's needs, and Rethink Priorities functions as a public good for the EA community; I think they have shown themselves to be worthy of having more discretion in their budget given their incredibly impressive track record.
Apart from that, I find the case for FWI, WAI, and LIC to be convincing. I'm a bit unsure of LIC's impact, and thus it ranks behind FWI and WAI.
I find the Gaetz and Hegseth picks to be a bit worrying. I struggle to find a reason that the Gabbard pick is bad at all. In fact, I think she is probably good? She's a former congresswoman, city councillor, Hawaii House rep, member of the National Guard, etc. She seems like a good pick who is concerned about the US tendency to intervene in foreign countries.
Now, to be clear, I find the Gaetz and Hegseth picks to be bad, but I thought Trump would do these types of things, and I think there is a whole universe of things that Trump could have done, and so he did some mildly to moderately bad ones.
So he did some bad things, but they were around expectation and nothing yet in the tails, and thus I shouldn't update in the direction of totalitarianism.
I'm still not finding anything to really be alarmed about other than people I know being alarmed.
This seems to overstate how important the EA Forum is.
My problem with this is that it's not falsifiable.
If I'm willing to bet, I need to take "edge". I am not going to bet at my actual odds since that gives no profit for me.
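To spell out the arithmetic behind "edge" (the numbers below are purely illustrative, not odds anyone here has offered):

```latex
% Expected profit per $1 staked when backing an outcome,
% where p = my true probability and q = the probability implied by the offered odds.
\mathrm{EV} = p\left(\tfrac{1}{q} - 1\right) - (1 - p) = \tfrac{p}{q} - 1
% If q = p, then EV = 0: betting at my own odds gives me no expected profit.
% Illustrative example: p = 0.95, q = 0.90 gives EV = 0.95/0.90 - 1, roughly 5.6 cents per $1 staked.
```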
1–2. I think nearly every president has committed crimes, for example, war crimes. This mainly depends on what he is prosecuted for, as opposed to what was committed.
-
If the Constitution is amended, that seems fine. I'm fine to bet on something like this, though.
-
I'm not sure why that matters. People can elect people you and I disagree with ideologically.
-
I don't think I understand this one. Can you clarify?
I feel like people are converting their dislike of Trump into unwarranted fears. I don't like Trump, but it's not helpful to fearmonger.
-
Sure, we don't have to bet at 50–50 odds. I'm willing to bet at, say, 90–10 odds in your favor that the next election is decided by the Electoral College or popular vote, with a (relatively) free and fair election comparable to 2016, 2020, and 2024.
I agree that Trump is… bad, for lack of a better word, and that he seeks loyalty and such. But US democracy is rather robust, and somehow people took the fact that it held up strongly as evidence that… democracy was more fragile than we thought.
It's not about telling others I'm vegan. It's about telling them that I think non-human animals are worthy of moral consideration. I also tell people that I donate to animal welfare charities, and even which ones.