AppliedDivinityStudies
A while back I looked into using lard and/or bacon in otherwise vegan cooking. The idea is that you could use a fairly small amount of animal product to great gastronomic effect. One way to think about this is to consider whether you would prefer:
A: Rice and lentils with a tablespoon of bacon
B: Rice with 0.25lb ground beef
I did the math on this, and it works out surprisingly poorly for lard. You’re consuming about 1/8th as much mass, which sounds good, except that by some measures, producing pork induces 4x as much suffering per unit of mass as producing beef. So it’s a modest 2x gain, but nothing revolutionary.
On the other hand, the math works out really favorably for butter. Using that same linked analysis, if you can replace 100g beef with lentils fried in 10g butter, you’re inducing ~150x less suffering.
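For concreteness, here is a rough sketch of that arithmetic. The per-gram suffering weights are assumptions: pork is taken as ~4x beef per the analysis linked above, and the butter weight is simply back-solved from the ~150x claim rather than estimated independently.

```python
# Rough arithmetic sketch. Suffering weights are per gram, normalized to beef = 1.
# These are assumptions lifted from the figures above, not independent estimates.
BEEF = 1.0
PORK = 4.0        # ~4x as much suffering per unit mass as beef
BUTTER = 1 / 15   # back-solved so the 100g-beef vs 10g-butter case gives ~150x

beef_dish = 113 * BEEF      # 0.25 lb ≈ 113 g ground beef
bacon_dish = 14 * PORK      # 1 tbsp ≈ 14 g bacon/lard
butter_dish = 10 * BUTTER   # 10 g butter

print(beef_dish / bacon_dish)       # ≈ 2.0 -> the "modest 2x gain" for lard
print((100 * BEEF) / butter_dish)   # ≈ 150 -> the ~150x figure for butter
```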
One upshot of this is that almost all the harm averted by consuming vegan baked goods instead of conventional ones is from avoiding the eggs, rather than the butter. So I would really love to see a “veganish” bakeshop that uses butter but not eggs.
The tension between overconfidence and rigorous thinking is overrated:
Swisher: Do you take criticism to heart correctly?
Elon: Yes.
Swisher: Give me an example of something if you could.
Elon: How do you think rockets get to orbit?
Swisher: That’s a fair point.
Elon: Not easily. Physics is very demanding. If you get it wrong, the rocket will blow up.
Cars are very demanding. If you get it wrong, a car won’t work. Truth in engineering and science is extremely important.
Swisher: Right. And therefore?
Elon: I have a strong interest in the truth.
Okay sorry, maybe I’m having a stroke and don’t understand. The original phrasing and new phrasing look identical to me.
Oh wait, did you already edit the original comment? If not I might have misread it.
I agree that it’s pretty likely octopi are morally relevant, though we should distinguish between “30% likelihood of moral relevance” and “moral weight relative to a human”.
I don’t have anything substantive to add, but this is really really sad to hear. Thanks for sharing.
The wrong tool for many.… Some people accomplish a lot of good by being overconfident.
But Holden, rationalists should win. If you can do good by being overconfident, then Bayesian habits can and should endorse overconfidence.
Since “The Bayesian Mindset” broadly construed is all about calibrating confidence, that might sound like a contradiction, but it shouldn’t. Overconfidence is an attitude, not an epistemic state.
~50% of Open Phil spending is on global health, animal welfare, criminal justice reform, and other “short-termist” and egalitarian causes.
This is their recent writeup on one piece of how they think about disbursing funds now vs. later: https://www.openphilanthropy.org/blog/2021-allocation-givewell-top-charities-why-we-re-giving-more-going-forward
This perspective strikes me as extremely low-agency.
Donors aren’t this wildly unreachable class of people: they read the EA Forum, they have public emails, etc. Anyone, including you, can take one of these ideas, scope it out more rigorously, and write up a report. It’s nobody’s job right now, but it could be yours.
Sure, but outside of Open Phil, GiveWell accounts for the vast majority of EA spending, right?
Not a grant-making organization, but as another example, the Rethink Priorities report on Charter Cities seemed like fairly “traditional EA”-style analysis.
There’s a list of winners here, but I’m not sure how you would judge counterfactual impact. With a lot of these, it’s difficult to demonstrate that the grantee would have been unable to do their work without the grant.
At the very least, I think Alexey was fairly poor when he received the grant and would have had to get a day job otherwise.
I think the framing of good grantmaking as “spotting great opportunities early” is precisely how EA gets beat.
Fast Grants seems to have been hugely impactful for a fairly small amount of money; the trick is that the grantees weren’t even asking, there was no institution to give to, and no cost-effectiveness estimate to run. It’s a somewhat more entrepreneurial approach to grantmaking. It’s not that EA thought it wasn’t very promising; it’s that EA didn’t even see the opportunity.
I think it’s worth noting that a ton of Open Phil’s portfolio would score really poorly along conventional EA metrics. They argue as much in this piece. So of course the community collectively gets credit because Open Phil identifies as EA, but it’s worth noting that their “hits-based giving” approach differs substantially from more conventional EA-style (quantitative QALY/cost-effectiveness) analysis, and asking what that should mean for the movement more generally.
Saying “I’d rather die than live like that” is distinct from “this is worse than non-existence.” Can you clarify?
“Even the implication that moving a NK person to SK is better than saving 10 SK lives is sort of implausible—for both NKs and SKs alike.” I don’t know what they would find implausible. To me it seems plausible.
“I believe NK people would likely disagree with this conclusion, even if they were not being coerced to do so.” I don’t have good intuitions on this; it doesn’t seem absurd to me.
Unrelated to NK, many people suffer immensely from terminal illnesses, but we still deny them the right to assisted suicide. For very good reasons, we have extremely strong biases against actively killing people, even when their lives are clearly net negative.
So yes, I think it’s plausible that many humans living in extreme poverty or under totalitarian regimes are experiencing extremely negative net utility, and under some ethical systems, that implies that it would be a net good to let them die.
That doesn’t mean we should promote policies that kill North Korean people or stop giving humanitarian food and medical aid.
EA has consensus on shockingly few big questions. I would argue that not coming to widespread agreement is the norm for this community.
Think about:
neartermism vs. longtermism
GiveWell-style CEAs vs. Open Phil-style explicitly non-transparent hits-based giving
Total Utilitarianism vs. Suffering-focused Ethics
Priors on the hinge-of-history hypothesis
Moral Realism
These are all incredibly important and central to a lot of EA work, but as far as I’ve seen, there isn’t strong consensus.
I would describe the working solution as some combination of:
Pursuing different avenues in parallel
Having different institutions act in accordance with different worldviews
Focusing on work that’s robust to worldview diversification
Anyway, that’s all to say, you’re right, and this is an important question to make progress on, but it’s not really surprising that there isn’t consensus.
I think I see the confusion.
No, I meant an intervention that could produce 10x ROI on $1M looked better than an intervention that could produce 5x ROI on $1B, and now the opposite is true (or should be).
Uhh, I’m not sure if I’m misunderstanding or you are. My original point in the post was supposed to be that the current scenario is indeed better.
I sort of expect the young college EAs to be more leftist, and expect them to be more prominent in the next few years. Though that could be wrong, maybe college EAs are heavily selected for not being already committed to leftist causes.
I don’t think I’m the best person to ask haha. I basically expect EAs to be mostly Grey Tribe, pretty Democratic-leaning, but with some libertarian influences, and generally just not that interested in politics. There’s probably better data on this somewhere, or at least in the EA-related SlateStarCodex reader survey.
Okay, as I understand the discussion so far:
The RP authors said they were concerned about PR risk from a leftist critique
I wrote this post, explaining how I think those concerns could more productively be addressed
You asked why I’m focusing on Leftist Ethics in particular
I replied that I haven’t seen authors cite concerns about PR risk stemming from other kinds of critique
That’s all my comment was meant to illustrate, I think I pretty much agree with your initial comment.
People like to hear nice things about themselves from prominent people, and Bryan is non-EA enough to make it feel not entirely self-congratulatory.