The guy in the panda hat at EAG
Cornelis Dirk Haupt
Thank you for the perspective!
I certainly agree with your model on behaviour change. Likewise, my approach has over the years simplified from more convoluted ideas to one simple maxim: “Just make sure you feed them. The rest will often take care of itself.”
I’m concerned about animal welfare, human welfare and AI safety—without the urgency of AI dominating entirely.
I think what I highlight is similar to how many professional communities are optimized for matching prospective employers with employees rather than for the happiness and enjoyment of their members. If there are 100 members but employers are only interested in one candidate, you will have 99 less-happy members. But this is not a bad thing, because the goal of the community is to match members with particular employers. It could easily be a mistake to court different employers and run different events just to make it more likely that you'll have more happy members—risks include value drift and undermining your actual goal of maximizing impact. Still, the tradeoff is that 99% of your members are disgruntled.
Professional-adjacent communities—say, “computer tinkerers who just do it for fun”—do not have this problem. If 99% of the community are not happy, then you either change what you are doing to match what the community of tinkerers is interested in, or the community ceases to exist—or at least that is a much more likely outcome.
I’ve been thinking a bunch about a fundamental difference between the EA community and the LessWrong community.
LessWrong is optimized for the enjoyment of its members. At any LessWrong event I go to, in any city, the focus is on “what will we find fun to do?” This is great. Notice that the community isn’t optimized for “making the world more rational.” It is a community that selects for people interested in rationality, and once you get these kinds of people in the same room, the community tries to optimize for FUN for these kinds of people.
EA as a community is NOT optimized for the enjoyment of its members. It is optimized for making the world a better place. This is a feature, not a bug. And surely it should be net positive, since its goal is by definition net positive. When planning an EAG or EA event, you measure it on impact—say, professional connections made and how many new high-quality AI Alignment researchers you might have created on the margin. You don’t measure it on how much people enjoyed themselves (or you do, but for instrumental reasons: to get more people to come so that you can continue to have impact).
As a community organizer in both spaces, I notice that I more easily leave EA events I organized feeling burnt out and unfulfilled compared to similar LW/ACX events. I think the fundamental difference mentioned above explains why.
Dunno if I am pointing at anything that resonates with anyone. I don’t see this discussed much among community organizers. Seems important to highlight.
Basically in LW/ACX spaces—specifically as an organizer—I more easily feel like a fellow traveller up for a good time. In EA spaces—specifically as an organizer—I more easily feel like an unpaid recruiter.
Cornelis Dirk Haupt’s Quick takes
By all means please do
This change is, in part, a response to common feedback that the name Impactful Animal Advocacy is too long (10 syllables!), hard to remember, and difficult to recognize as the acronym IAA.
I just tried and failed to remember the name last night when I was trying to recommend IAA to a friend of mine interested in getting involved in animal welfare. Thankfully I was quickly able to say “Oh, they’re called Hive now, here’s their Slack invite link!” and all was well.
Shrugs, sure, it’s possible. It’s also possible, if we employ counterfactual reasoning, that had the UN not existed, a better institution would have arisen in its place. It is quite possible that post-WW2 dynamics made it inevitable that some coordinating institution would be built out of sheer geopolitical necessity, and that we got one of the worst possible outcomes.
If the US medical system didn’t get created in its current form that doesn’t mean that counterfactually what would have happened otherwise is that the US would just have no medical system whatsoever. Nobody seriously defends the US medical system by saying it is “better than nothing” because a world where something like it doesn’t exist at all is practically impossible—probably much like a world without something resembling the UN. Too many social, economic and political forces demand that both exist in some shape or form.
Of course you could say the exact same thing about Effective Altruism as well. Had EA not been created in its current form something—counterfactually—with a better foundation might have been culturally constructed. I suppose the difference for me is that it is probably orders of magnitude easier for me to picture a better US medical system or better UN that could have been constructed instead than it is for me to picture a better EA. Maybe this is a failure of imagination on my part.
Anyway, this game of “if this-thing-I-like-had-not-existed” is a fool’s errand and strongly susceptible to motivated reasoning. And that is true whether we do or do not employ counterfactual reasoning.
There are few organizations in the Western world that could survive with the allegations of mismanagement, scandal, and corruption that permeate the United Nations. For many delegates, officials, and employees, particularly those from developing nations, the UN is little more than an enormous watering hole.
Concerned about its shabby image, the UN recently developed a multiple-choice “ethics quiz” for its employees. The “correct” answers were obvious to everyone [Is it all right to steal from your employer? (A) Yes, (B) No, (C) Only if you don’t get caught].
The quiz was not designed to determine the ethical sense of UN employees or to weed out the ethically inept but to raise their level of integrity. How taking a transparent test could improve integrity is unclear. There has been no mention of how management and other officials did on the test.
~ Snakes in Suits, a study of psychopaths in the workplace

Are there many EAs that consider the UN a serious institution from a “makes the world a better place” perspective? I thought most of us viewed it the same way we view the US medical system: which is to say, woefully ineffective, credentialist, in some cases net-negative for public health, and ripe for systemic change to make the world better. (It would be interesting to see how many “systemic change” criticisms of EA could apply just as well, if not more, to the UN.)
That said, you do have a point. I still haven’t heard a pro-Israeli argument that properly parses the whole anti-Israel UN position. The most salient answer to me is still “Israel is actually in the wrong for a lot of things.” Otherwise surely the UN would be a tad bit more split on the issue?
I just wouldn’t place quite as much stock in the UN as you do. Same goes for the US medical system. Get multiple opinions. Always. Including from those within the system who argue the entire system has systemic flaws (e.g. vegan doctors who face opposition from practically their entire field). The overall UN position is one signal among many, but it isn’t that strong a signal.
since your ilk would just want to commit a slow genocide while ignoring it.
There are multiple atrocities of similar moral urgency happening in Northern India, Ethiopia, Sudan, Myanmar and elsewhere that are still being ignored. The world has been paying disproportionate attention to the Israel-Palestine conflict compared to these other places. I’ve read of Indian reporters flying to Palestine to cover the war while Indians ask, “why are they leaving when things just as bad are happening at home?” Well, because the world doesn’t care about other parts of the world. It isn’t newsworthy.
Obviously this doesn’t make ignoring Palestine justified. I’m just pointing out that anyone ignoring Palestine might actually be focusing on something more important. There are a million things on fire in the world. We have to triage. Sometimes that looks like people not caring while a genocide is happening—but that does not always mean they don’t care, and it is incredibly uncharitable, rude and presumptuous to say what you did. How you feel about others and who they actually are, are two different things.
This was all extremely clear, as Scott Ritter clearly points out. Also Hamas literally spelled out their plans in documents like Jericho Wall.
It doesn’t matter what Hamas planned. It matters what they did.
If you are Muslim this concept is rooted in the Hadith, where it’s stated that actions are judged by intentions, but the ultimate value lies in the action itself. Any Muslim EA should feel free to tell me I’m wrong. I lived in the Middle East for three years, so I know a thing or two, but not much. But this seems like an obvious moral truth that all religions and secular moral traditions have at their core.
There was friendly fire which caused many civilian deaths, and possibly the majority of them. Please do some basic research.
There is not a single credible source I can find that says this—including sources highly critical of Israel. Even the Palestinian Authority has taken back their claim that friendly fire from Israeli helicopters caused a whole lot of friendly-fire deaths.
Incredible how the Palestinians crimes are so exaggerated, while all of the unending horrors from the Zionist side are either downplayed or ignored.
Exaggerated how exactly? I said Hamas, not Palestine. Those are two different things just like Israel and the Knesset and Zionists are three different things.
“Resistance Raid” is a bizarre framing of deliberately targeting and slaughtering defenceless women and children in their homes with the deliberate goal of mass terror.
Unlike, say, the ANC from my home country of South Africa, which deliberately tried to target only government targets… that is clearly not what Hamas did. They aren’t freedom fighters—maybe some are, but not their organisation as a whole. Any support for the organisation—given what their charter said pre-2017—cannot, under any reasonable lens, be seen as anything other than tantamount to, at the very least, supporting ex-Nazis insofar as explicit genocidal antisemitism is concerned. What reasonable counterargument justifying support for Hamas is there that isn’t “Israel is much worse”?
I do not understand why it is so hard for some people to comprehend that both the IDF and Hamas can be net-negative and evil. You don’t have to support the one you judge as the lesser evil and use euphemisms to describe their actions. You can oppose both and say both are savagely genocidal against the other.
“You claim responding against the emotional propaganda is wrong, but writing even close to the parallel from the Palestinian side would result in a perma-ban.”
I don’t believe this is true given the contentious posts I’ve seen here over the years. I presume you have evidence of someone who is Palestinian and identifies as an EA who was perma-banned for writing from the Palestinian side? (i.e. not a political bot, someone who is actually part of the community) Because I’d be just as interested in reading that as I was in reading this piece. And I wouldn’t pit the two against each other, but would extend empathy to both authors as fellow human beings.
Also during the Oct 7th raid we know Israel killed many of it’s own civilians and it was a highly planned out military operation. If that’s a “terrorist” attack then what israel is doing is even worse than a genocide.
I had to do a double-take and am now only rereading this part after writing my response. You actually believe Israel deliberately perpetrated part of the Oct 7 raid? I’m at a complete loss for words...
Robin Hanson—the guy who came up with the grabby aliens hypothesis, which seems to have solidified itself within the EA-rat zeitgeist—also has some very interesting and fun ideas on what UAPs might be, some of which actually answer some of your questions:
https://www.overcomingbias.com/p/ufos-what-the-hell
https://www.overcomingbias.com/p/ufos-what-the-further-hell
I’m surprised I don’t see his blog cited anywhere by you or mentioned anywhere in the comments.
Given it is the Giving Season, I’d be remiss not to point out that ACE currently has donation matching for their Recommended Charity Fund.
I am personally waiting to hear back from RC Forward on whether Canadian donations can also be made for said donation matching, but for American EAs at least, this seems like a great no-brainer opportunity to dip your feet in effective animal welfare giving.
The forum has a thing where people with more karma have more upvote/downvote power (at least this was a thing last year. I presume it still is).
This means that even though you got −14 in minutes, that might just be 2 people downvoting in total.
Worth keeping in mind.
Someone else feel free to point out I am mistaken if I am indeed mistaken.
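To illustrate the arithmetic, here is a minimal sketch of how karma-weighted voting can turn −14 into just two voters. The tier thresholds and vote strengths below are hypothetical assumptions for illustration only—I don’t know the forum’s actual formula.

```python
# Hypothetical sketch of karma-weighted voting. The tiers and weights
# here are assumptions, NOT the forum's real formula.
def strong_vote_power(karma: int) -> int:
    """Return the (assumed) strong-vote weight for a user's karma."""
    tiers = [(25000, 9), (10000, 8), (1000, 7), (100, 5), (10, 3)]
    for threshold, power in tiers:
        if karma >= threshold:
            return power
    return 2  # assumed baseline strong-vote weight for new accounts

# Two high-karma users strong-downvoting already sums to -14:
score = -sum(strong_vote_power(k) for k in (1200, 1200))
# score == -14
```

The point of the sketch: a quickly plummeting score need not mean many people disagree—under any scheme like this, a handful of high-karma voters can move a comment by double digits.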
If you’re an animal welfare EA I’d highly recommend joining the wholesome refuge that is the newly minted Impactful Animal Advocacy (IAA).
Website and details here. I volunteered for them at the AVA Summit, which I strongly recommend as the premier conference and community-builder for animal welfare-focused EAs. The AVA Summit has some features I have long thought missing from EAGs—namely, people arguing in good faith about deep, deep disagreements (e.g. why don’t we ever see a panel of prominent longtermist and shorttermist EAs arguing for over an hour straight at EAGs?). There was an entire panel addressing quantification bias, which turned into a discussion of how some believe EA has done more harm than good for the animal advocacy movement—but that people are afraid to speak out against EA, given it is a movement that has brought over 100 million dollars into animal advocacy. Personally, I loved there being a space for these kinds of discussions.
Also, one of my favourite things about the IAA community is they don’t ignore AI, they take it seriously and try to think about how to get ahead of AI developments to help animals. It is a community where you’ll bump into people who can talk about x-risk and take it seriously, but for whatever reason are prioritizing animals.
Meta-note as a casual lurker in this thread: This comment being down-voted to oblivion while Jason’s comment is not, is pretty bizarre to me. The only explanation I can think of is that people who have provided criticism think Michael is saying they shouldn’t criticise? It is blatantly obvious to me that this is not what he is saying and is simply agreeing with Jason that specific actionable-criticism is better.
Fun meta-meta note I just realized after writing the above: this does mean I am potentially criticising some critics who are critical of how Michael is criticising their criticism.
Okkkk, that’s enough internet for me. Peace and love, y’all.
I think another useful question to ask could be something like, “what is your fantasy partner/complement organization?”
This part here is where my eyes widened. Adding this as a standard question on EA grant applications is, in hindsight, so obviously a good idea to me that I am kind of in shock we don’t do so already.

Creating a group of EA free agents that can be allocated/rented to EA-aligned non-profits?
Actually, this already exists I believe! I know there is a website called “EA Services” that allows you to sign up to basically be allocated around EA/EA-aligned orgs. Can anyone link the website? I’ve lost the URL.
I’d like to note that it is totally possible for someone to sincerely be talking about “cause-first EA” and simultaneously believe longtermism and AI safety should be the cause EA should prioritize.
As a community organizer I’ve lost track of how many times people I’ve introduced to EA initially get excited, but then disappointed that all we seem to talk about are effective charities and animals instead of… mental health or political action or climate change or world war 3 or <insert favourite cause here>.
And when this happens I try to take a member-first approach and ensure they understand what led to these priorities so that the new member can be armed to either change their own mind or argue with us or apply EA principles in their own work regardless of where it makes sense to do so.
A member-first approach wouldn’t ensure we have diversity of causes. We could in theory have a very members-first movement that only prioritizes AI Alignment. This is totally possible. The difference is that a members-first AI alignment focused movement would focus on ensuring its members properly understand cause agnostic EA principles—something they can derive value from regardless of their ability to contribute to AI Alignment—and based on that understand why AI Alignment just happens to be the thing the community mostly talks about at this point in time.
Our current cause-first approach is less concerned with teaching EA principles that are cause agnostic and more concerned with just getting skilled people of any kind, whether they care about EA principles or not, to work on AI Alignment or other important things. Teaching EA principles being mostly instrumental to said end goal.
I believe this is more the cause of the tension you describe in the “cause-first” model. It has less to do with only one cause being focused on. It has more to do with the fact that humans are tribalistic.
If you’re not going to put effort into making sure someone new is part of the tribe (in this case giving them the cause-agnostic EA principle groundwork they can take home and feel good about) then they’re not going to feel like they’re part of your cause-first movement if they don’t feel like they can contribute to said cause.
I think if we were more members-first we would see far more people who have nothing to offer to AI Safety research still nonetheless feel like “EA is my tribe.” Ergo, less tension.
A “cause first” movement has similar risks in vesting too much authority in a small elite—not unlike a cult that comes together, supports each other, believes in some common goal, and makes major strides towards it, but ultimately burns out, as cults often do, from treating its members too instrumentally as objects for the good of the cause. Fast and furious, without the staying power of a religion.
That said, I’m also partial to the cause-first approach, but man, things we have learnt, like Oli Habryka’s podcast here, made me strongly update towards a member-first mindset, which I think would have more firmly pushed against such revelations as being antithetical to caring for one’s members. Less deference and more thinking for yourself, like Oli did, seems like a better long-term strategy for any community’s flourishing. EA’s recent wins don’t counteract this intuition of mine strongly enough when you think decades or even generations into the future.

That said, if AI timelines really are short, maybe we just need a fast and furious approach for now.
But we have the same uncertainty with retail meat-based cat food, which I’ve highlighted is quite distinct from what cats evolved on.
Actually, I think we don’t have the same uncertainty. Those products have been iterated on for far longer than vegan cat food—including multiple FDA recalls, as you pointed out. Retail meat-based cat food has had much more of a “trial-by-fire” over a longer period of time.

Though in the other comment you pointed out Ami, which, given it has existed for 20 years, I imagine has gone through the same trial-by-fire. A new post that does nothing but focus on the evidence that Ami is fine for your cat would probably convince a ton more people. As I mentioned in my other comment, I’m very confused why Ami wasn’t used in the Domínguez-Oliva et al. study instead.
I don’t understand the obeisance to molecularly-exact meat.
I’m not interested in molecularly-exact meat. I’m interested in what—via strong empirical evidence—we know won’t harm my cat.
Our goals with domestic cats are different than what evolution optimized for.
Couldn’t agree more, which is why, if we get enough empirical evidence that some particular vegan meal will be A-OK for cats, I’m all aboard.
It is worth adding that I do think we have enough empirical evidence to place dogs on a vegan diet without issue. But my read of the study is that we’re not there with cats yet. I really don’t understand why the study authors draw the same conclusion for both cats and dogs. The evidence appears to be clearly, vastly stronger for dogs than it is for cats.
We should not put meat on a pedestal and beeline for that.
We should put empirical evidence on a pedestal and while truth-seeking be neutral about whether that includes or excludes meat.
Based on what? I don’t intuit this at all.
For me: I agreed with you and felt like my mind was being changed to being pro-vegan-cat—until I read Elizabeth’s comment pointing out the issues in the study. So for me it is mostly because you haven’t engaged with that specific comment and explained why the concerns highlighted in her screenshots (from the actual study!) are not something I need to worry about.
Convince Elizabeth and you, by proxy, convince me I’m pretty sure.
The most parsimonious explanation is that the lack of supplements was the problem, not the “vegan”-ness.
Sounds reasonable to me. I didn’t say that a lack of supplementation wouldn’t solve it. I argued that meat would. Arguing for X doesn’t mean I argued for ~Y.
The study came out January of this year. That’s pretty recent.
Does a nutritionally complete vegan cat food exist yet that takes everything learnt from this study, and all the studies it references, into account without need for additional supplementation? If yes, I’d want to see a study where cats are fed it first before I place my own cats exclusively on it. Till then I’d probably be too paranoid to feed them a fully vegan diet.

“Why is that diet representative of, for example, nutritionally complete Ami, which has been around for years? Isn’t it much better to just defer to AAFCO’s and FDA’s standards, which Ami meets?”
I’m confused. By “that diet” you mean to say the diet that was tested in the actual study you use as support for your claims should not be taken as an example of something nutritionally complete?
Ok, after trying to figure out what “Ami” was I see in your post you refer to it as vegan cat food that exists on the market.
Apparently it has also been around for 20 years, after a quick Google search. Now I’m just hyper-confused why Ami wasn’t used in the Domínguez-Oliva et al. study instead.
Peter Turchin. He was the first guest on Julia Galef’s Rationally Speaking podcast, and Scott Alexander wrote an article on his work. But outside of that, I doubt he even knows EA as a movement exists. I would love to see him come to understand AI timelines and see how that influences his thinking and his models—and, vice versa, how respected members of our community update (or don’t) their timelines based on Turchin’s models (and why).