Probably(?) big news on PEPFAR (title: White House agrees to exempt PEPFAR from cuts): https://thehill.com/homenews/senate/5402273-white-house-accepts-pepfar-exemption/. (Credit to Marginal Revolution for bringing this to my attention)
MrBeast just released a video about “saving 1,000 animals”—a set of well-intentioned but inefficient interventions (e.g. shooting vaccines at giraffes from a helicopter, relocating wild rhinos before they fight each other to the death, covering bills for people to adopt rescue dogs from shelters, transporting lions via plane, and more). It’s great to see a creator of his scale engaging with animal welfare, but there’s a massive opportunity here to spotlight interventions that are orders of magnitude more impactful.
Given that he’s been in touch with people from GiveDirectly for past videos, does anyone know if there’s a line of contact to him or his team? A single video/mention highlighting effective animal charities—like those recommended by Animal Charity Evaluators (e.g. The Humane League, Faunalytics, Good Food Institute)—could reach tens of millions and (potentially) meaningfully shift public perception toward impact-focused giving for animals.
If anyone’s connected or has thoughts on how to coordinate outreach, this seems like a high-leverage opportunity. (I really have no idea how this sort of stuff works, but it seemed worth a quick take; feel free to lmk if I’m totally off base here.)
Manifesting
Yooo—nice! Seems good and would cost under ~$100k.
Agreed, Noah. For 15k shrimps helped per $, it would cost $9.60k (= 144*10^6 / (15*10^3)).
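(Spelling that arithmetic out as a minimal sketch: the 144 million and 15k-per-dollar figures come from the comment above, and reading the 144 million as “shrimps to help” is my assumption.)

```python
# Back-of-the-envelope version of the cost figure above.
# Assumption: ~144 million shrimps to help, ~15,000 shrimps helped per dollar.
shrimps_to_help = 144e6
shrimps_helped_per_dollar = 15e3

cost_usd = shrimps_to_help / shrimps_helped_per_dollar
print(f"~${cost_usd:,.0f}")  # -> ~$9,600
```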
Yep—Beast Philanthropy actually did an AMA here in the past! My takeaway was that the video comes first, so your chances of a partnership would greatly increase if you can make it entertaining. This is somewhat in contrast with a lot of EA charities, which are quite boring, but I suspect on the margins you could find something good.
What IMHO worked for GiveDirectly in that video, and for Shrimp Welfare in their public outreach, was the counterintuitiveness of some of these interventions. Wild animals, cultured meat, and shrimp are more likely to fit in this bucket than corporate campaigns for chickens, I reckon.
As Huw says, the video comes first. I think this puts almost anything you’d be excited about off the table. Factory farming is a really aversive topic for people, and people are quite opposed to large-scale wild animal suffering (WAS) interventions. The intervention in the video he did make wasn’t chosen at random. People like charismatic megafauna.
Re the new 2024 Rethink Cause Prio survey, responses to the statement “The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views” [“Defer to experts”]: 3% strongly agree, 18% somewhat agree, 35% somewhat disagree, 15% strongly disagree.
This seems pretty bad to me, especially for a group that frames itself as valuing intellectual humility and recognizing that we (going by the base rate for intellectual movements) are so often wrong.
(Charitable interpretation) It’s also just the case that EAs tend to hold lots of contrarian views because they’re trying to maximize the expected value of information (often justified with something like: “usually contrarians are wrong, but when they are right, they provide more valuable information than the average person who just agrees”).
If this is the case, though, I fear that some of us are confusing the norm of being contrarian for instrumental reasons with being contrarian for “being correct” reasons.
Tho lmk if you disagree.
I think the “most topics” thing is ambiguous. There are some topics on which mainstream experts tend to be correct and some on which they’re wrong, and although expertise is valuable on topics experts think about, they might be wrong on most topics central to EA. [1]
In the real world, assuming we have more than five minutes to think about a question, we shouldn’t “defer” to experts or immediately “embrace contrarian views”, but rather use their expertise and reject it when appropriate. Since this wasn’t an option in the poll, my guess is many respondents just indicated how much they like being contrarian, and since EAs often have to be contrarian on the topics they think about, it came out in favor of contrarianism.
[1] Experts can be wrong because they don’t think in probabilities, they lack imagination, there are obvious political incentives to say one thing over another, and probably for other reasons. Also, lots of the central EA questions don’t have well-developed scientific fields around them, so many of the “experts” aren’t people who have thought about similar questions in a truth-seeking way for many years.
What’s the definition of “truth-seeking”? Not your personal definition, but the pre-existing, canonical definition that’s been written down somewhere and that everyone agrees on.
Not “everyone agrees” what “utilitarianism” means either, and it remains a useful word. In context you can tell I mean someone whose attitude, methods, and incentives allow them to avoid the biases I listed (and others).
If I want to know what “utilitarianism” means, including any disagreements among scholars about the meaning of the term (I have a philosophy degree, I have studied ethics, and I don’t have the impression there are meaningful disagreements among philosophers on the definition of “utilitarianism”), I can find this information in many places, such as:
The Stanford Encyclopedia of Philosophy
Encyclopedia Britannica
Wikipedia
The book Utilitarianism: A Very Short Introduction co-authored by Peter Singer and published by Oxford University Press
A textbook like Normative Ethics or an anthology like Ethical Theory
Philosophy journals
An academic philosophy podcast like Philosophy Bites
Academic lectures on YouTube and Crash Course (a high-quality educational resource)
So, it’s easy for me to find out what “utilitarianism” means. There is no shortage of information about that.
Where do I go to find out what “truth-seeking” means? Even if some people disagree on the definition, can I go somewhere and read about, say, the top 3 most popular definitions of the term and why people prefer one definition over the other?
It seems like an important word. I notice people keep using it. So, what does it mean? Where has it been defined? Is there a source you can cite that attempts to define it?
I have tried to find a definition for “truth-seeking” before, more than once. I’ve asked what the definition is before, more than once. I don’t know if there is a definition. I don’t know if the term means anything definite and specific. I imagine it probably doesn’t have a clear definition or meaning, and that different people who say “truth-seeking” mean different things when they say it — and so people are largely talking past each other when they use this term.
Incidentally, I think what I just said about “truth-seeking” probably also largely applies to “epistemics”. I suspect “epistemics” probably either means epistemic practices or epistemology, but it’s not clear, and there is evidently some confusion on its intended meaning. Looking at the actual use of “epistemics”, I’m not sure different people mean the same thing by it.
Very random but:
If anyone is looking for a name for a nuclear risk reduction / x-risk prevention org, consider (The) Petrov Institute. It’s catchy, symbolic, and sounds prestigious.
Unfortunately it also sounds Russian, which has some serious downsides at the moment....
Perhaps this downside could be partly mitigated by expanding the name to make it sound more global or include something Western, for example: Petrov Center for Global Security or Petrov–Perry Institute (in reference to William J. Perry). (Not saying these are the best names.)
For me at least, that implies an institute founded by or affiliated with somebody named Petrov, not just inspired by them, and it would seem slightly sketchy for it not to be.
Although there are the Alan Turing Institute, the Ada Lovelace Institute, the Leverhulme Centre, the Simon Institute, etc.
Idea for someone with a bit of free time:
While I don’t have the bandwidth for this atm, someone should make a public (or private for, say, policy/reputation reasons) list of people working in one or more of the very neglected cause areas — e.g., digital minds (this is a good start), insect welfare, space governance, AI-enabled coups, and even AI safety (more for the second reason below than the others). Optional but nice to have: notes on what they’re working on, time contributed, background, sub-area, and the rough rate of growth in the field (you probably don’t want to decide career moves purely on current headcounts). And remember: perfection is gonna be the enemy of the good here.
Why this matters
Coordination.
It’s surprisingly hard to know who’s in these niches (independent researchers, part-timers, new entrants, maybe donors). A simple list would make it easier to find collaborators, talk to the right people, and avoid duplicated work.
Neglectedness clarity.
A major reason to work on ultra-neglected causes is… neglectedness. But we often have no real headcount, and that may push people into (or out of) fields they wouldn’t otherwise choose. Even technical AI safety numbers are outdated — the last widely cited 80k estimate (2022) was ~200 people, which is clearly no longer accurate. (To their credit, they emphasized the difficulty and tried to update.)
Even rough FTE (full-time equivalent) estimates plus who’s active in each area would be a huge service for some fields. A hypothetical sketch of what a single entry might record is below.
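(Purely as an illustrative sketch for whoever picks this up: one way an entry in such a list could be structured. Every field name here is my own assumption, not part of the original suggestion.)

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical schema for one entry in the proposed list;
# field names are illustrative, not a settled format.
@dataclass
class NeglectedAreaWorker:
    name: str
    cause_area: str                     # e.g. "insect welfare", "space governance"
    sub_area: Optional[str] = None      # narrower focus within the cause area
    fte: Optional[float] = None         # rough full-time-equivalent contribution
    current_work: Optional[str] = None  # short note on what they're working on
    background: Optional[str] = None    # e.g. prior field or training

# Made-up example row:
example = NeglectedAreaWorker(
    name="Jane Doe",
    cause_area="insect welfare",
    fte=0.5,
    current_work="literature review on farmed insect sentience",
)
```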
Looking back on old 80k podcasts, this is what I see (lol):
They’re both great episodes, though — relistened to #138 last week :)