What does EA/tech mean? EA-related tech?
The value of cost-effectiveness analysis (David Thorstad)
Thanks! FWIW, I completely agree with your framing. In my head the question was about debate (“did FTX look sketchy enough that we should’ve seen big debates about it on the forum”) and I should’ve made that explicit. Sounds like the majority answer so far is yes, it did look that bad. My impression is also the same as yours that those debates did not happen.
[Question] What should we have thought about FTX’s business practices?
My (possibly wrong) understanding of what Eliezer is saying:
FTX ought to have responded internally to the conflict of interest, but they had no obligation to disclose it externally (to Future Fund staff or wider EA community).
The failure in FTX was that they did not implement the right internal controls—not that the relationship was “hidden from investors and other stakeholders.”
If EA leadership and FTX investors made a mistake, it was failing to ensure that FTX had implemented the right internal controls—not failing to know about the relationship.
Thank you!
Great idea!
Jump on a Zoom Call once a week with a carefully chosen peer for 1:1s and a group of 5-8 like-minded EAs with the same goal
Is this a group program, or one-on-one, or some of each? Is the “carefully chosen peer” matched with you for all 4–8 weeks?
What type or granularity of goal are you referring to?
Oops, thank you! Not sure what I was thinking. Fixed now.
Overall agreed, except that I’m not sure the idea of patient longtermism does anything to defend longtermism against Aron’s criticism? As I read Aron’s post, it assumes that people in the future will have far more wealth to deal with the problems of their time than we have now, which would make investing resources for the future (patient longtermism) less effective than spending them right away.
I think your point is broadly valid, Aron: if we knew that the future would get richer and more altruistically-minded as you describe, then we would want to focus most of our resources on helping people in the present.
But if we’re even a little unsure—say, there’s just a 1% chance that the future is not rich and altruistic—then we might still have very strong reason to put our resources toward making the future better: because the future is (in expectation) so big, if there’s anything at all we can do to influence it, that could be very important.
And to me it seems pretty clear that the chance of a bad future is quite a bit more than 1%, which further strengthens the case.
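To make that expected-value point concrete, here’s a toy calculation (the specific numbers are made up purely for illustration, not taken from Aron’s post or anywhere else): suppose there’s a 99% chance the future is rich and altruistic, so that our marginal longtermist effort adds essentially nothing, and a 1% chance it isn’t, in which case our effort produces value $V_{\text{future}}$. Then

\[
\mathbb{E}[\text{longtermist effort}] \approx 0.99 \cdot 0 + 0.01 \cdot V_{\text{future}} = 0.01\,V_{\text{future}},
\]

which beats spending on the present (value $V_{\text{present}}$) whenever $V_{\text{future}} > 100\,V_{\text{present}}$. Given how large the future could be, a factor of 100 doesn’t seem like a big ask.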
Wow, I’m glad I noticed Vegan Nutrition among the winners. Many thanks to Elizabeth for writing, and I hope it will eventually appear as a post. A few months ago I spent some time looking around the forum for exactly this and gave up—in hindsight, I should’ve been asking why it didn’t exist!
[Question] Which organizations are looking for funding from small donors?
I’m starting to think there’s no possible question for which Will can’t come up with an answer that’s true, useful, and crowd-pleasing. We’re lucky to have him!
You might be interested in these posts by Nate Soares:
They explore how we should act given that some things “cannot be known ahead of time, not even approximated.”
If it does not serve any useful purpose, then why focus on longtermism?
I think you’re right that we can make a good case for increased spending on nuclear safety, pandemic preparedness, and AI safety without appeal to longtermism. But here’s one useful purpose of longtermism: only the longtermist arguments suggest that those causes are overwhelmingly important, and because of those arguments, we have many talented people working zealously to solve those issues, people who would otherwise be working on other things.
Obviously this doesn’t address your concern that longtermism is incorrect; it’s merely a reason why, if longtermism is correct, it’s a useful thing to talk about.
Reasons Not to Trade Money for Time
Agreed. The first big barrier to putting self-modification into practice is “how do you do it”; the second big barrier is “how do you prove to others that you’ve done it.” I’m not sure why the authors don’t discuss these two issues more.
On how to actually self-modify/self-deceive, all they say is that it might involve “leaning into and/or refraining from over-riding common-sense moral intuitions”. But that doesn’t explain how to make the change irrevocably (which is the crucial step).
On how to demonstrate self-modification to others, they mention a “society of peers where one’s internal motivations are somewhat transparent to others.” I agree that our motivations are in general somewhat transparent—but are they transparent in this particular case, the case of differentiating between a deontologist and a consequentialist-leaning-into-common-sense-morality-in-order-to-be-more-trustworthy?
Maybe so. For instance, maybe the deontologist naturally reacts to side-constraint violations with strong emotion, believing that they are intrinsically bad—but the consequentialist naturally reacts with less emotion, believing that the violation is neither good nor bad intrinsically, but instrumentally bad through [long chain of reasoning]. And maybe the emotional response is hard to fake.
So when someone lies to you, if you get angry—rather than exhibiting calculated disapproval—maybe that’s weak evidence that you have an intrinsic aversion to lying.
Thanks for writing! It sounds like part of your pitch is that there are some types of therapy which are much more effective than the types in common use. Scott’s book review of all therapy books makes me pretty pessimistic about that. If you’ve read that post, do you have any thoughts?
Hi Sarah! I broadly agree with the post, but I do think there’s a marginal value argument against becoming a doctor that doesn’t apply to working at EA orgs. Namely:
Suppose I’m roughly as good at being a doctor as the next-doctor-up. My choosing to become a doctor brings about situation A over situation B:
Situation A: I’m a doctor, next-doctor-up goes to their backup plan
Situation B: next-doctor-up is a doctor, I go to my backup plan

Since we’re equally good doctors, the only difference is in whose backup plan is better—so I should prefer situation B, in which I don’t become a doctor, as long as I think my backup plan will do more good than their backup plan. This seems likely to be the case for anyone strongly motivated to do good, including EAs.
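As a toy way of writing that out (the symbols here are just illustrative labels, not from Sarah’s post): let $G_d$ be the good done by whoever ends up in the doctor role, $G_{\text{me}}$ the good done by my backup plan, and $G_{\text{them}}$ the good done by the next-doctor-up’s backup plan. Then

\[
\text{Value}(A) = G_d + G_{\text{them}}, \qquad \text{Value}(B) = G_d + G_{\text{me}},
\]

so the $G_d$ term cancels (we’re equally good doctors) and the choice comes down entirely to whether $G_{\text{me}} > G_{\text{them}}$.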
To make a similar case against working at an EA org, you would have to believe that your backup plan is significantly better than other EAs’ backup plans.
EDIT: I should say I agree it’s possible that friction in applying for EA jobs could outweigh any chance you have of being better than the next candidate. Just saying I think the argument against becoming a doctor is different—and stronger, because there are bigger gains on the table.
I had the opposite takeaway from the podcast. Ajeya and Rob definitely don’t come to a confident conclusion. Near the end of the segment, Ajeya says, referring specifically to the simulation argument but also, I think, to anthropics more generally:
I would definitely be interested in funding people who want to think about this. I think it is really deeply neglected. It might be the most neglected global prioritisation question relative to its importance. There’s at least two people thinking about AI timelines, but zero people [thinking about simulation/anthropics], basically. Except for Paul in his spare time, I guess.
Oops, thank you! I thought I had selected linkpost, but maybe I unselected without noticing. Fixed!