AppliedDivinityStudies
Do you have a stronger argument for why we should want to future-proof ethics? From the perspective of a conservative Christian born hundreds of years ago, maybe today’s society is very sinful. What would compel them to adopt an attitude such that it isn’t?
Similarly, say in the future we have moral norms that tolerate behavior we currently see as reprehensible. Why would we want to adopt those norms? Should we assume that morality will make monotonic progress, just because we’re repulsed by some past moral norms? That doesn’t seem to follow. In fact, it seems plausible that morality has simply shifted. From the outside view, there’s nothing to differentiate “my morality is better than past morality” from “my morality is different than past morality, but not in any way that makes it obviously superior”.
You can imagine, for example, a future with sexual norms we would today consider reprehensible. Is there any reason I should want to adopt them?
One candidate you don’t mention is:
- Extrapolate from past moral progress to make educated guesses about where moral norms will be in the future.
On a somewhat generous interpretation, this is the strategy social justice advocates have been using. You look historically, see that we were wrong to treat women, minorities, etc. as less worthy of moral consideration, and try to guess which currently subjugated groups will in the future be seen as deserving of equal treatment. This gets you to feeling more concern for trans people, people with different sexual preferences (including ones that are currently still taboo), poor people, disabled people, etc., and eventually maybe animals too.
Another way of phrasing that is:
- Identify which groups will be raised in moral status in the future, and work proactively to raise their status today.
Will MacAskill has an 80k podcast titled “Our descendants will probably see us as moral monsters”. One way to interpret the modern social justice movement is that it advocates for adopting a speculative future ethics, such that we see each other as moral monsters today. This has led to mixed results.
EA Organization Updates: January 2022
If you read the expert comments, very often they complain that the question is poorly phrased. It’s typically about wording like “would greatly increase” where there’s not even an attempt to define “greatly”. So if you want to improve the panel or replicate it, that is my #1 recommendation.
...My #2 recommendation is to create a Metaculus market for every IGM question and see how it compares.
At what level of payoff is that bet worth it? Let's say the bet is a 50/50 triple-or-nothing bet. So either EA ends up with half its money, or ends up with double. I'd guess (based on not much) that right now losing 50% of EA's money is more negative than doubling EA's money is positive.
There is an actual correct answer, at least in the abstract. According to the Kelly criterion, on a 50/50 triple-or-nothing bet, you should put down 25% of your bankroll.
Say EA is now at around a 50/50 Crypto/non-Crypto split: what kind of returns would justify that allocation? At 50/50 odds, there's actually no payout multiple that makes the math work out; the Kelly fraction creeps toward 50% as the payout grows, but never reaches it.
But that’s just for the strict case we’re discussing. See the section on “Investment formula” for what to do about partial losses.
Finally, instead of a 50/50 triple-or-nothing bet, we can model this as a 75/25 double-or-nothing bet (same EV per dollar staked). In that case, a 50/50 allocation is optimal.
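For concreteness, here's a minimal sketch of the Kelly arithmetic behind those numbers. The function is just the standard Kelly formula, and the example bets are the hypothetical ones from this thread, not claims about EA's actual portfolio:

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to stake on a bet won with
    probability p that pays b times the stake in net profit (and loses
    the stake otherwise). This maximizes expected log(wealth)."""
    return (b * p - (1 - p)) / b

# 50/50 triple-or-nothing: a win returns 3x the stake, i.e. net odds b = 2
print(kelly_fraction(0.5, 2))    # 0.25 -> stake 25% of the bankroll

# 75/25 double-or-nothing (same EV per dollar staked): net odds b = 1
print(kelly_fraction(0.75, 1))   # 0.50 -> a 50/50 allocation is optimal

# At 50/50 odds, no finite payout justifies a 50% allocation:
# f* = 0.5 - 0.5/b approaches 0.5 but never reaches it
for b in (2, 10, 100, 1000):
    print(b, round(kelly_fraction(0.5, b), 4))
```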
But note that the Kelly criterion is optimizing for log(wealth)! Log(wealth) approximates utility in individuals, but not in aggregate. Since EA is trying to give all its money away, the marginal returns slope off much more gradually. (See some very rough estimates here.) If you’re just optimizing for wealth, you would be okay with a riskier allocation.
BTW, it’s not just “over-invested in X”; you have to think about the entire portfolio. So given that almost all EA money comes from either Sam or Dustin, you have to consider the correlation between Crypto and FB stock.
I’ll also add that you have to consider all future EA money in determining what % of the bankroll we’re using.
It doesn’t really matter though, since EA doesn’t “own” or “control” Sam’s wealth in any meaningful way.
People like to hear nice things about themselves from prominent people, and Bryan is non-EA enough to make it feel not entirely self-congratulatory.
A while back I looked into using lard and/or bacon in otherwise vegan cooking. The idea being that you could use a fairly small amount of animal product to great gastronomical effect. One way to think about this is to consider whether you would prefer:
A: Rice and lentils with a tablespoon of bacon
B: Rice with 0.25lb ground beef
I did the math on this, and it works out surprisingly poorly for lard. You’re consuming 1/8th as much mass, which sounds good, except that by some measures, producing pork induces roughly 4x as much suffering per unit of mass as producing beef. So it’s a modest 2x gain, but nothing revolutionary.
On the other hand, the math works out really favorably for butter. Using that same linked analysis, if you can replace 100g beef with lentils fried in 10g butter, you’re inducing ~150x less suffering.
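A quick back-of-envelope version of that arithmetic; the serving sizes are rough approximations, and the per-gram suffering ratios are assumptions taken from the linked analysis rather than anything independently sourced:

```python
# Rough check of the comparison above. Serving sizes are approximate
# (1 tbsp bacon ~= 14 g, 0.25 lb beef ~= 113 g); the "pork causes ~4x the
# suffering of beef per unit mass" ratio comes from the linked analysis.

BEEF_G, BACON_G = 113, 14
PORK_VS_BEEF_PER_G = 4.0

beef_dish = BEEF_G * 1.0                    # suffering in "beef-gram" units
bacon_dish = BACON_G * PORK_VS_BEEF_PER_G   # 1/8 the mass, 4x the harm per gram
print(beef_dish / bacon_dish)               # ~2.0 -> the "modest 2x gain"

# The "~150x less suffering" figure for 10 g butter replacing 100 g beef
# implies butter's per-gram harm is roughly 100 / (10 * 150) ~= 1/15 of beef's.
print(100 / (10 * 150))
```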
One upshot of this is that almost all the harm averted by consuming vegan baked goods instead of conventional ones is from avoiding the eggs, rather than the butter. So I would really love to see a “veganish” bakeshop that uses butter but not eggs.
The tension between overconfidence and rigorous thinking is overrated:
Swisher: Do you take criticism to heart correctly?
Elon: Yes.
Swisher: Give me an example of something if you could.
Elon: How do you think rockets get to orbit?
Swisher: That’s a fair point.
Elon: Not easily. Physics is very demanding. If you get it wrong, the rocket will blow up.
Cars are very demanding. If you get it wrong, a car won’t work. Truth in engineering and science is extremely important.
Swisher: Right. And therefore?
Elon: I have a strong interest in the truth.
Okay sorry, maybe I’m having a stroke and don’t understand. The original phrasing and new phrasing look identical to me.
Oh wait, did you already edit the original comment? If not I might have misread it.
EA Organization Updates: December 2021
I agree that it’s pretty likely octopi are morally relevant, though we should distinguish between “30% likelihood of moral relevance” and “moral weight relative to a human”.
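To make that distinction concrete, here's a toy expected-value calculation. The 30% figure is the one quoted above; the conditional weight is purely hypothetical, not a figure from this thread or any source:

```python
# Illustration: "probability of moral relevance" and "moral weight relative
# to a human" are distinct quantities that get multiplied together.
p_morally_relevant = 0.30   # the "30% likelihood of moral relevance" quoted above
weight_if_relevant = 0.05   # moral weight relative to a human, if they do matter (hypothetical)

expected_weight = p_morally_relevant * weight_if_relevant
print(expected_weight)      # 0.015 human-equivalents per octopus, under these toy numbers
```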
I don’t have anything substantive to add, but this is really really sad to hear. Thanks for sharing.
The wrong tool for many. … Some people accomplish a lot of good by being overconfident.
But Holden, rationalists should win. If you can do good by being overconfident, then Bayesian habits can and should endorse overconfidence.
Since “The Bayesian Mindset” broadly construed is all about calibrating confidence, that might sound like a contradiction, but it shouldn’t. Overconfidence is an attitude, not an epistemic state.
~50% of Open Phil spending is on global health, animal welfare, criminal justice reform, and other “short-termist” and egalitarian causes.
This is their recent writeup on one piece of how they think about disbursing funds now vs. later: https://www.openphilanthropy.org/blog/2021-allocation-givewell-top-charities-why-we-re-giving-more-going-forward
This perspective strikes me as extremely low-agency.
Donors aren’t this wildly unreachable class of people, they read EA forum, they have public emails, etc. Anyone, including you, can take one of these ideas, scope it out more rigorously, and write up a report. It’s nobody’s job right now, but it could be yours.
[Linkpost] Alexander Berger On Philanthropic Opportunities And Normal Awesome Altruism
Sure, but outside of Open Phil, GiveWell accounts for the vast majority of EA spending, right?
Not a grant-making organization, but as another example, the Rethink Priorities report on Charter Cities seemed fairly “traditional EA” style analysis.
There’s a list of winners here, but I’m not sure how you would judge counterfactual impact. With a lot of these, it’s difficult to demonstrate that the grantee would have been unable to do their work without the grant.
At the very least, I think Alexey was fairly poor when he received the grant and would have had to get a day job otherwise.
This is a good idea, but I think you might find that there’s surprisingly little EA consensus. What’s the likelihood that this is the most important century? Should we be funding near-term health treatments for the global poor, or does nothing really matter aside from AI Safety? Is the right ethics utilitarian? Person-affecting? Should you even be a moral realist?
As far as I can tell, EAs (meaning both the general population of uni club attendees and EA Forum readers, alongside the “EA elite” who hold positions of influence at top EA orgs) disagree substantially amongst themselves on all of these really fundamental and critical issues.
What EAs really seem to have in common is an interest in doing the most good, thinking seriously and critically about what that entails, and then actually taking those ideas seriously and executing. As Helen once put it, Effective Altruism is a question, not an ideology.
So I think this could be valuable in theory, but I don’t think your off-the-cuff examples do a good job of illustrating the potential here. For pretty much everything you list, I’m pretty confident that many EAs already disagree, and that these are not actually matters of group-think or even local consensus.
Finally, I think there are questions which are tricky to red-team because of how much conversation around them is private, undocumented, or otherwise obscured. So if you were conducting this exercise, I don’t think it would make sense as an entry-level thing; you would have to find people who are already fairly knowledgeable.