I’m a little hesitant to publish this because I don’t think most people should prioritize frugality.
Can you expand on why you think most people shouldn’t prioritize frugality? Do you mean most of the general population, most EAs, or some other group?
I didn’t see this comment earlier. Having read it, this seems like one of the best ideas here and certainly worth trying. I would also be curious to see if there are strong arguments against this idea.
If done well this could be good, but I worry that a concerted effort will most likely come across as fake or insincere and turn out to be a negative.
I don’t think the two reasons for Ben’s actions you suggested are mutually inconsistent. He may want to emotionally reject EA-style giving arguments, think of arguments that could justify this, and then get frustrated by what he sees as poor arguments for EA or against his arguments. This outcome (frustration with, and worry about, the EA community’s epistemic health) seems likely to me for someone who starts off emotionally wanting to reject certain arguments. He could also have identified genuine flaws in EA that both make him reject EA and make him frustrated by the epistemic health of EA.
Harry Potter and the Methods of Rationality can be good for inspiring an EA-like mood, as well as for introducing ways of thinking that can be helpful for EAs (although some of the ways of thinking it effectively promotes are anti-EA to varying degrees).
Can you expand on this claim? Do you mean that all research has non-zero bias (but some could be very close to 0 bias), that all research has significant bias towards the hypothesis or framework it’s working in, or something else?
What do you think is the point of the book that SSC missed?
Notably, Jessica says in the Less Wrong comments that “GiveWell is a scam (as reasonable priors in this area would suggest), although I don’t want this to be treated as a public accusation or anything; it’s not like they’re more of a scam than most other things in this general area.”
I do not find her evidence very convincing. Some of it relates to private information which she privately messaged to Jeff Kaufman. The first part of this private information, a rumor relating to GiveWell’s treatment of an ex-employee, was disconfirmed by the person in question according to Jeff. The rest of this private information is advice to talk to specific people and links to public blog posts.
The rest of the evidence seems to center around arguments that international charities like AMF create dependency and apathy, sourced from a YouTube philosophy video creator and apparent worker in international development who cites personal anecdotes and Dambisa Moyo’s book Dead Aid. This person alleges that AMF and other organizations have put local bed net makers out of business, and says that he has personally seen many families that only bring out their bed net when the AMF inspector comes around. Jessica emphasizes further that the strongest section of the video is where he says that (quoting Jessica) “the problems caused by aid are extremely bad in some of the countries that are targets of aid (like, they essentially destroy people’s motivation to solve their community’s problems).”
Arguments about dependency, and about building sustainable institutions instead, have been discussed plenty in EA circles over the years, and I won’t rehash them further here. I just want to note that Moyo herself says that her critique should not be applied to private NGOs, and that even aid critics accept that health interventions, like those of most GiveWell top charities, can have positive impact.
I also do not think that, even if the evidence was rock solid, this would mean that GiveWell is a scam; people can be wrong or disagree without it meaning that they’re scamming you or that they’re deluding themselves.
Edit: Cleaned up a couple of sentences
Please do expand this onto a top level post if you are able to!
Another ex-GiveWell employee’s post criticizing GiveWell and the EA community was recently highly upvoted. See also Ben’s old post Effective Altruism is Self-Recommending, which is currently at +30 (a solid amount given that it was posted on the old forum, where karma totals were much lower).
I think the reason this post is at near 0 karma is that it is objectively wrong in multiple ways and of negative value. I would say this is clear if you engage with the comments here, with those on Ben’s blog, and with Jeff Kaufman’s reply.
I actually interpret the voting on this post as too positive. I think this is because EAs tend to be wary of downvoting criticisms that might be good. Ben’s previous reputation for worthwhile criticism seems to be protecting him to a certain extent.
I think people use upvotes both to signal agreement and to highlight thoughtful, effortful, or detailed comments. I think it’s fairly clear that Kbog’s comment was upvoted because people agreed with it, not because people thought it was a particularly insightful comment. That doesn’t preclude people upvoting posts for being high quality.
If your point is more that people don’t generally upvote quality posts that they disagree with, then I would probably agree with that.
Also, I do want to say that I appreciate you trying hard to engage with skeptical people and to independently figure out promising new areas! That’s valuable work for the community, even if this particular intervention doesn’t pan out.
Thanks for the clarification. I also share your model of mental health disorders as being on the far end of a continuous spectrum of unendorsed behavior patterns. The crux for me here is more what the effect of psychedelics is on people not at the far end of the spectrum. I agree that the effect might be positive, and it might even be likely to be positive, but I’m not aware of any compelling empirical evidence or other reason to think that it is strong.
I have essentially a mathematical objection, in that I think the math is unlikely to work out, but I don’t have a problem with the idea in principle (putting aside PR risks).
Thanks for linking your thread with Kit in your other reply. I think my objection is very similar to Kit’s. Consider:
Total benefit = effect from boosting efficacy of current long-termist labor (1) + effect from increasing the amount of long-termist labor (2) + effect from short-termist benefits (3)
I expect (1) to be extremely not worth it given the costs of making any substantial improvement in the availability of psychedelics, and (2) to be speculative and to almost certainly not be worth it. By (3), do you mean the mental health benefits for people in general?
My (small) update is also this, except confined to posts criticizing EA.
Whether you think it’s a rationalization or not, the claim in the OP is misleading at best. It sounds like you’re paraphrasing them as saying that they don’t recommend that Good Ventures fully fund their charities because this is an unfair way to save lives. GiveWell says nothing of the sort in the very link you use to back up your claim. The reason you assign to them instead, that they think this would be unfair, is absurd and isn’t backed up by anything in the OP.
I found this post interesting overall. I have a few thoughts on the argument as a whole, but want to focus on one thing in particular:
[GiveWell] recommended to Good Ventures that it not fully fund GiveWell’s top charities; they were worried that this would be an unfair way to save lives.
I don’t see this as an accurate summary of the reasons GiveWell outlined in the linked blogpost. The stated reason is that in the long-term, fully funding every strong giving opportunity they see would be counterproductive because their behavior might influence other donors’ behavior:
We do not want to be in the habit of – or gain a reputation for – recommending that Good Ventures fill the entire funding gap of every strong giving opportunity we see. In the long run, we feel this would create incentives for other donors to avoid the causes and grants we’re interested in; this, in turn, could lead to a much lower-than-optimal amount of total donor interest in the things we find most promising.
Despite this, that year they recommended that Good Ventures fully fund the highest-value opportunities:
For the highest-value giving opportunities, we want to recommend that Good Ventures funds 100%. It is more important to us to ensure these opportunities are funded than to set incentives appropriately.
The post itself goes into much greater detail about these considerations.
Argument in OP:
Interventions that increase the set of well-intentioned + capable people also seem quite robust to cluelessness, because they allow for more error correction at each timestep on the way to the far future.
The psychedelic experience also seems like a plausible lever on increasing capability (via reducing negative self-talk & other mental blocks) and improving intentions (via ego dissolution changing one’s metaphysical assumptions).
I view this as a weak argument. I think one could make this sort of argument for a large number of interventions: reading great literature, yoga, a huge number of productivity systems, participating in healthy communities, quantified self, volunteering for local charities like working at a soup kitchen, etc. Some of these interventions focus more on the increasing-capability aspect (productivity systems, quantified self) and some focus more on improving intentions (participating in healthy communities, volunteering). Some focus on both to some degree.
The reason it seems like a weak argument to me is that:
(a) the average effects of psychedelics on increasing capability seem unlikely to be strong. They may be large for a small percentage of people, but I’m not aware of any particularly strong reason to think that the average effects are large.
They may be large for people with mental health issues, but then it’s not really an intervention for increasing capability in general, it’s a mental health intervention. These are distinct, and as I said above, psychedelics could plausibly be a top intervention for mental health.
(b) The improving-intentions aspect looks to be on even shakier ground. What is the evidence that taking psychedelics effectively improves intentions in a manner relevant to working on the long term? I’ve never heard of any psychedelic or spiritual community being focused on long-termism in an EA-relevant manner. Some people report ego dissolution, but I’m not even aware of any anecdotal reports that ego dissolution led to non-EAs thinking about and working on long-term problems. It sounds like you know some cases where it may have been helpful, but I’m skeptical that a high-quality study would report something amazing.
I don’t have much to contribute beyond the many things that have already been said, but I suspect my overall opinion may be shared by many others: I think psychedelics could plausibly (but not >50%) be a very effective mental health intervention. One could perhaps call them a promising EA intervention, although the evidence base is quite thin at the moment. However, psychedelics don’t seem likely to be a particularly effective long-termist intervention right now. They perhaps might be once they were legalized and there was more evidence behind them, but that seems quite a long way away. Trying to legalize psychedelics, or to improve research into them, for the sake of long-term impacts seems quite implausible as an effective intervention.
Regarding EA weddings, check out the forum thread Suggestions for EA Weddings Vows? from just a couple of months ago.