You are right that a lot of people believing something doesn’t make it true, but I don’t think that’s what the OP is suggesting. Rather, if a lot of EAs believe enlightenment is possible and reduces suffering, it is strange that they don’t explore it further. I would suggest that your attitude is the reason why. To label it religious, and religion as the antithesis of empirical evidence, is problematic in its own right, but in any case there is plenty of secular interest in this topic, and plenty of empirical research on it. It is also worth considering that the strength of the case for an enlightened future for humanity (once we strip that term of some of the flights of fancy associated with it) is on par with that of humanity’s possible enslavement by AGI. If the latter is worth our time, why isn’t the former?
With regard to the 3rd point above, most of these studies compare meditation, not enlightenment, to other mental health interventions. Their finding that meditation is no better than CBT is not a negative. Since there is no “one size fits all” psychotherapy, having more options should be a net positive for mental health. Also, if meditation practice can lead to something more, even if that thing is not the end of all suffering, and even if it is rare, that increases the value of meditation practice.
I applaud you for writing this post.
There is a huge difference between statement (a): “AI is more dangerous than nuclear war”, and statement (b): “we should, as a last resort, use nuclear weapons to stop AI”. It is irresponsible to downplay the danger and horror of (b) by claiming Yudkowsky is merely displaying intellectual honesty by making explicit what treaty enforcement entails (not least because everyone studying or working on international treaties is already aware of this, and is willing to discuss it openly). Yudkowsky is making a clear and precise declaration of what he is willing to do, if necessary. To see this, one only needs to consider the opposite position, statement (c): “we should not start nuclear war over AI under any circumstance”. Statement (c) can reasonably be included in an international treaty dealing with this problem without that treaty losing all enforceability; there are plenty of other enforcement mechanisms. Finally, the last thing anyone defending Yudkowsky can claim is that there is a low probability we will need to use nuclear weapons. There is a higher probability of AI research continuing than of AI research leading to human annihilation. Yudkowsky is gambling that by threatening the use of force he will prevent a catastrophe, but there is every reason to believe his threats increase the chances of a similarly devastating catastrophe.
[Question] Crowdfunding to Rescue FTX Grants
It seems to me that no amount of argument in support of individual assumptions, or a set of assumptions taken together, can make their repugnant conclusions more correct or palatable. It is as if Frege’s response to Russell’s paradox were to write a book exalting the virtues of set theory. Utility monsters and utility legions show us that there is a problem either with human rationality or with human moral intuitions. If they do not, then the repugnant conclusion does for sure, and it is an outcome of the same assumptions and the same reasoning. Personally, I refuse to bite the bullet here, which is why I am hesitant to call myself a utilitarian. If I had to bet, I would say the problem lies with assumption 2. People cannot be reduced to numbers, either when trying to describe their behavior or when trying to guide it. Appealing to an “ideal” doesn’t help, because the ideal is actually a deformed version. An ideal human might have no knowledge gaps, no bias, no calculation errors, etc., but why would their well-being be reducible to a function?
(note that I do not dispute that from these assumptions Harsanyi’s Aggregation Theorem can be proven)
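For readers who want that parenthetical spelled out, here is a rough paraphrase of the theorem (my own summary, not part of the original exchange): if each person’s well-being is representable by a von Neumann–Morgenstern utility function, the social ranking also satisfies the expected-utility axioms, and society is indifferent whenever every individual is indifferent (Pareto indifference), then social welfare can be written as a weighted sum of individual utilities:

```latex
% Rough paraphrase of Harsanyi's Aggregation Theorem.
% U_i is individual i's von Neumann-Morgenstern utility, the w_i are fixed
% nonnegative weights, and W(x) is the social welfare of prospect x:
W(x) \;=\; \sum_{i=1}^{n} w_i \, U_i(x) + c
```

If assumption 2 is, roughly, that each person’s well-being is representable by such a utility function, this is exactly the sense in which a person is “reducible to a function” that the comment above pushes back on.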
“the quest for an other-centered ethics leads naturally to utilitarian-flavored systems with a number of controversial implications.”
This seems incorrect. Rather, it is your 4 assumptions that “lead naturally” to utilitarianism. It would not be hard for a deontologist to be other-focused simply by emphasizing the a priori normative duties that are directed towards others (I am thinking here of Kant’s matrix of duties: perfect / imperfect & towards self / towards others). The argument can even be made, and often is, that the duties one has towards oneself are meant to allow one to benefit others (e.g. skill development). If by other-focused you mean abstracting from one’s personal preferences, values, culture and so forth, deontology might be the better choice, since its use of a priori reasoning places it behind the veil of ignorance by default.
Only read the TL;DR and the conclusion, but I was wondering why the link between jhana meditation and brain activity matters? Even if we assume materialism, the Path in its various forms (I am intimately familiar with the Buddhist one) always includes other steps, and only taken together do they lead to increased happiness and mental health. My thinking is that we should go in one of two directions: direct manipulation of the brain, or a holistic spiritual approach. This middle way, ironically, seems to leave out the best of both worlds.
I am responding to the newer version of this critique found [here](https://www.radicalphilosophy.com/article/against-effective-altruism).
Someone needs to steel man Crary’s critique for me, because as it stands I find it very weak. The way I understand this article:
- The institutional critique—Basically claims 2 things: a) EAs are searching for their keys only under the lamppost. This is a great warning for anyone doing quantitative research and evaluation. EAs are well aware of it and try to overcome the problem as much as possible; b) EA is addressing symptoms rather than underlying causes, i.e. distributing bed-nets instead of overthrowing corrupt governments. This is fair as far as it goes, but the move to tackling underlying causes does not necessarily require abandoning the quantitative methods EA champions, and it is not at all clear that we shouldn’t attempt to alleviate symptoms as well as causes.
- The philosophical critique—Essentially amounts to arguing that there are people critical of consequentialism and abstract conceptions of reason. More power to them, but that fact in itself does not defeat consequentialism, so insofar as EA relies on consequentialism, it can continue to do so. A deeper dive is required to understand the criticisms in question, but there is little reason for me to assume at this point that they will defeat, or even greatly weaken, consequentialist theories of ethics. Crary actually admits that in academic circles they fail to convince many, but dismisses this because in her opinion it is “a function of ideological factors independent of [the arguments’] philosophical credentials”.
- The composite critique—Adds nothing substantial except to pit EA against woke ideology. I don’t believe these two movements are necessarily at odds, but there is a power struggle going on in academia right now, and it is clear which side Crary is on.
- EA’s moral corruption—EA is corrupt because it supports global capitalism. I am guilty as charged on that count, even as I see capitalism’s many, many flaws and the need to make some drastic changes. Still, just like democracy, it is the least bad option until we come up with something better. Working within this system to improve the lives of others and solve some pressing worldwide problems seems perfectly reasonable to me.
As an aside I will mention that attacking “earning to give” without mentioning the concept of replaceability is attacking nothing at all. When doing good, try to be irreplaceable; when earning money on Wall Street, make sure you are completely replaceable. You might earn a little less, but you will minimize your harm.
Finally, it is telling that Crary does not once deal with longtermist ideas.
What would you say are the biggest benefits of being part of an EA faith group?
From a broad enough perspective no cause area EA deals with is neglected. Poverty? Billions donated annually. AI? Every other startup uses it. So we start narrowing it down: poverty → malaria → bednets.
There is every reason to believe mental health has neglected yet tractable and highly impactful areas, because of the size of the problem as you outline it, and because mental health touches all of us all the time in everything we do (when by health we don’t just mean the absence of disease but the maximization of wellbeing).
I think EA concepts are here to challenge us. Being a clinical psychiatrist is amazing; you can probably help hundreds of people. Could you do more? What’s going on in other parts of the globe? Where is humanity headed in the future? This challenge does not have to be burdensome, it can be inspiring. It should certainly not paralyze you and prevent you from doing any good at all. Think of a mathematician obsessed with proving a theorem, or a physicist relentlessly searching for the theory of everything: they do other work too, but never give up the challenge.
Hey @Dvir, mental health is a (non-professional) passion of mine, so I am grateful for any attention given to it in EA. I wonder if you think a version 2.0 of your pitch can be written which takes into account the 3 criteria below (a toy sketch of how they might combine follows the list). Right now you seem to have nailed down the 1st, but I don’t see the case for 2 & 3:
1. Great in scale (it affects many lives, by a great amount)
2. Highly neglected (few other people are working on addressing the problem)
3. Highly solvable or tractable (additional resources will do a great deal to address it)
(https://80000hours.org/articles/problem-framework/)
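Purely as a toy illustration of how such criteria might be combined (the linked 80,000 Hours article describes its own, more careful scoring; the function and the numbers below are made up for the example):

```python
# Toy sketch of combining the three criteria above (made-up scores, not 80,000 Hours' data).
# Each factor gets a rough 0-10 score; the total treats the scores as log-style units,
# so adding them is like multiplying the underlying quantities.

def cause_score(scale: float, neglectedness: float, tractability: float) -> float:
    """Higher total = more promising cause, all else being equal."""
    return scale + neglectedness + tractability

# Hypothetical illustration: a cause can score high on scale but still lose out
# if the case for neglectedness and tractability has not been made.
print(cause_score(scale=9, neglectedness=4, tractability=5))   # 18
print(cause_score(scale=6, neglectedness=8, tractability=7))   # 21
```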
I think that is what HLI is trying to do:
https://forum.effectivealtruism.org/posts/uzLRw7cjpKnsuM7c3/hli-s-mental-health-programme-evaluation-project-update-on
https://forum.effectivealtruism.org/posts/v5n6eP4ZNr7ZSgEbT/jasper-synowski-and-clare-donaldson-identifying-the-most
I am not sure about the etiquette of follow up questions in AMAs, but I’ll give it a go:
Why does being mainstream matter? If, for example, s-risk is the highest priority cause to work on, and the work of a few mad scientists is what is needed to solve the problem, why worry about the general public’s perception of EA as a movement, or EA ideas? We can look at growing the movement as growing the number of top performers and game-changers, in their respective industries, who share EA values. Let the rest of us enjoy the benefit of their labor.
Well, it wouldn’t work if you said “I want a future with less suffering, so I am going to evaluate my impact based on how many paper clips exist in the world at a given time”. Bostrom selects collaboration, technology and wisdom because he thinks they are the most important indicators of a better future and reduced x-risk. You are welcome to suggest other parameters for the evaluation function of course, but not every parameter works. If you read the analogy to chess in the link I posted, it will become much clearer how Bostrom is thinking about this.
(if anyone reading this comment knows of evolutions in Bostrom’s thought since this lecture I would very much appreciate a reference)
Hi Khorton,
If by “decide” you mean control the outcome in any meaningful way, I agree, we cannot. However, I think it is possible to make a best effort attempt to steer things towards a better future (in small and big ways). Mistakes will be made, progress is never linear and we may even fail altogether, but the attempt is really all we have, and there is reason to believe in a non-trivial probability that our efforts will bear fruit, especially compared to not trying or to aiming towards something else (like maximum power in the hands of a few).
For a great exploration of this topic I refer to this talk by Nick Bostrom: http://www.stafforini.com/blog/bostrom. The tl;dr is that we can come up with evaluation functions for states of the world that, while not yet being our desired outcome, are indications that we are probably moving in the right direction. We can then figure out how we get to the very next state, in the near future. Once there, we will chart a course for the next state, and so on. Bostrom singles out technology, collaboration and wisdom as traits humanity will need a lot of in the better future we are envisioning, so he suggests we can measure them with our evaluation function.
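To make the idea concrete, here is a toy sketch of what such an evaluation function might look like (my own illustration: the three traits follow the talk, but the weights, scales and numbers are invented for the example):

```python
# Toy "state evaluation function" in the spirit of the chess analogy (illustration only).
# Each trait is scored on a rough 0-1 scale for a given state of the world; a higher
# total suggests we are more likely on a trajectory towards the future we want.

def evaluate_state(technology: float, collaboration: float, wisdom: float,
                   weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    w_tech, w_collab, w_wisdom = weights
    return w_tech * technology + w_collab * collaboration + w_wisdom * wisdom

# Compare the current state with a candidate near-future state and steer towards
# whichever scores higher, then repeat the exercise from there.
current = evaluate_state(technology=0.6, collaboration=0.4, wisdom=0.3)
candidate = evaluate_state(technology=0.65, collaboration=0.5, wisdom=0.35)
print(candidate > current)  # True: the candidate state looks like a step in the right direction
```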
I am largely sympathetic to the main thrust of your argument (borrowing from your own title: I am probably a negative utilitarian), but I have 2 disagreements that ultimately lead me to a very different conclusion on longtermism and global priorities:
Why do you assume we cannot affect the future further than 100 years out? There are numerous examples of humans doing just that: in science and technology (inventing the wheel, electricity or gunpowder), government (the US constitution), religion (the Buddhist Pali canon, the Bible, the Quran), philosophy (utilitarianism), and so on. One can even argue that the works of Shakespeare have had an effect on people for hundreds of years.
Though humanity is not inherently awesome, it does not inherently suck either. Humans have the potential to do amazing things, for good or evil. If we can build a world with a lot less war and crime and a lot more collaboration and generosity, isn’t it worth a try? In Parfit’s beautiful words: “Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea … Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists.”
I thought it worth pointing out that I mostly agree with this statement from one of your comments, even though I strongly disagree with your main post. If this was the essence of your message, maybe it requires clarification:
“Politics is the mind killer.” Better to treat it like the weather and focus on the things that actually matter and we have a chance of affecting, and that our movement has a comparative advantage in.
To be clear, I think justice does actually matter, and any movement that would look past it to “more important” considerations scares me a little, but I strongly agree with the “weather” and “comparative advantage” parts of your statement. We should practice patience and humility. By patience I mean not jumping into the hot topic conversation of the day, no matter how heated the debate. Humility means recognizing how much effort we spend learning about animal advocacy, malaria, x-risk factors, etc. That is why we can feel confident to speak and act on them. But this doesn’t automatically transfer to other issues. Merely recognizing how difficult it is to get altruism right, compared to how much ineffective altruism there is, should be a warning signal when we wade out of our domains of expertise.
I think the middle ground here is not to allow people to bully you out of speaking, but to only speak when you have something worth saying that you have considered carefully (preferably with some input from peers). So basically, as others have already mentioned: “what would Peter Singer do?”
I have similar objections to this post as Khorton & cwbakerlee. I think it shows how the limits of human reason make utilitarianism a very dangerous idea (which may nevertheless be correct), but I don’t want to discuss that further here. Rather, let’s assume for the sake of argument that you are factually & morally correct. What can we learn from disasters, and the world’s reaction to them, that we can reproduce without the negative effects of the disaster? I am thinking of anything from faking a disaster (wouldn’t the conspiracy theorists love that) to increasing international cooperation. What are the key characteristics of a pandemic or a war that make the world change for the better? Is the suffering an absolute necessity?
Yes, you are correct and thank you for forcing me to further clarify my position (in what follows I leave out WAW since I know absolutely nothing about it):
- EA Funds, which I will assume is representative of EA priorities, has these funds: a) “Global Health and Development”; b) “Animal Welfare”; c) “Long-Term Future”; d) “EA Meta”. Let’s leave D aside for the purposes of this discussion.
- There is good reason to believe the importance and tractability of specific climate change interventions can equal or even exceed those of A & B. We have not done enough research to determine if this is the case.
- The arguments in favor of C being the only area we should be concerned with, or the area we should be most concerned with, are:
  I) reminiscent of other arguments in the history of thought that compel us (humans) because we do not account for the limits of our own rationality. I could say a lot more about this another time; suffice it to say here that in the end I cautiously accept these arguments and believe x-risk deserves a lot of our attention.
  II) popular within this community for psychological as well as purely rational reasons. There is nothing wrong with that, and it might even be needed to build a dedicated community.
  III) For these reasons I think we are biased towards C, and should employ measures to correct for this bias.
- None of these priorities is neglected by the world, but certain interventions or research opportunities within them are. EA has spent an enormous amount of effort finding opportunities for marginal value add in A, B & C.
- Climate change should be researched just as much as A & B. One way of accounting for the bias I see in C is to divert a certain portion of resources to climate change research despite our strongly held beliefs. I simply cannot accept the conclusion that unless climate change renders our planet uninhabitable before we colonize Mars, we have better things to worry about. That sounds absurd in light of the fact that certain detrimental effects of climate change are already happening, and even the best case future scenarios include a lot of suffering. It might still be right, but its absurdity means we need to give it more attention.
What surprises me the most from the discussion of this post (and I realize its readers are a tiny sample size of the larger community) is that no one has come back with: “we did the research years ago, we could find no marginal value add. Please read this article for all the details”.
The assumption is not that people outside EA cannot do good, it is merely that we should not take it for granted that they are doing good, and doing it effectively, no matter their number. Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have and will continue to go in that direction even without GiveWell. So the claim that climate change work is or is not the most good has no merit without a deeper dive into the field and a search for incredible giving / working opportunities. Any shallow dive into this cause reveals further attention and concern are warranted. I do not know what the results of a deeper dive might show, but am fairly confident we can at least be as effective working on climate change as working on some of the other present day welfare causes.
I do believe that there is a strong bias towards the far future in many EA discussions. I am not unsympathetic to the rationale behind this, but since it seems to override everything else, and present day welfare (as your reply implies) is merely tolerated, I am cautious about it.
Reading the discussions here I cannot shake the intuition that utilitarianism with very big numbers is once again resulting in weird conclusions. AW advocates are basically describing earth as hell with a tiny sanctuary reserved for humans that are better off than average. I need more convincing. While I cannot disagree with the math or the data, I think better theories of animal suffering are needed. At what point is a brain sufficiently developed, for example, to experience suffering in a way that is morally relevant and that we should care about? Are there qualitative differences that override all quantitative ones, and if so, which are those? All the same, I do not completely disagree, because 1) moral circle widening is very important to me; 2) at the end of the day I would not compare causes but specific interventions. There could very well be a highly effective intervention in the animal space that is better than anything GiveWell does, but I am unaware of it.