I read this post and the comments that have followed it with great interest.
I have two major worries, and one minor one, about QRI's research agenda that I hope you can clarify. First, I am not sure exactly which question you are trying to answer. Second, it's not clear to me why you think this project is (especially) important. Third, I can't understand what STV is about because there is so much (undefined) technical jargon.
1. Which question is QRI trying to answer?
You open by saying:
We know suffering when we feel it — but what is it? What would a satisfying answer for this even look like?
This makes me think you want to identify what suffering is, that is, what it consists in. But you then immediately raise Buddhist and Aristotelian theories of what causes suffering—a wholly different issue. FWIW, I don't see anything deeply problematic in identifying what suffering, and related terms, refer to. Valence just refers to how good/bad you feel (the intrinsic pleasurableness/displeasurableness of your experience); happiness is feeling overall good; suffering is feeling overall bad. I don't find anything dissatisfying about these. Valence refers to something subjective; that's a definition in terms of something subjective. What else could one want?
It seems you want to do two things: (1) somehow identify which brain states are associated with valence and (2) represent subjective experiences in terms of something mathematical, i.e. something non-subjective. Neither of these projects is identical to establishing what suffering is or what causes it. Hence, when you say:
QRI thinks not having a good answer to the question of suffering is a core bottleneck
I’m afraid I don’t know which question you have in mind. Could you please specify?
2. Why does that all matter?
It’s unclear to me why you think solving either problem - (1) or (2) - is (especially) valuable. There is some fairly vague stuff about neurotech, but this seems pretty hand-wavey. It’s rather bold for you to claim
there are trillion-dollar bills on the sidewalk, waiting to be picked up if we just actually try
and I think you owe the reader a bit more to bite into, in terms of a theory of change.
You might offer some answer about the importance of being able to measure what impacts well-being here but—and I hope old-time forum hands will forgive me as I mount a familiar hobby-horse—economics and psychology seem to be doing a reasonable job of this simply by surveying people, e.g. asking them how happy they are (0-10). Such work can and does proceed without a theory of exactly what is happening inside the 'black box' of the brain, and it can be used, right now, to help us determine what our priorities are. If I can be permitted to toot my horn from astride the hobby-horse, I should add that this just is what my organisation, the Happier Lives Institute, is working on. If I were to insist on waiting for real-time brain-scanning data to learn whether, say, cash transfers are more cost-effective than psychotherapy at increasing happiness, I would be waiting some time.
3. Too much (undefined) jargon
Here is a list of terms or phrases that seem very important for understanding STV but where I have very little idea exactly what you mean:
Neurophysiological models of suffering try to dig into the computational utility and underlying biology of suffering
resonance as a proxy for characteristic activity
Consonance Dissonance Noise Signature
full neuroimaging stack
precise physical formalism for consciousness
STV gives us a rich set of threads to follow for clear neurofeedback targets, which should allow for much more effective closed-loop systems, and I am personally extraordinarily excited about the creation of technologies that allow people to “update toward wholesome”,
Finally, and perhaps most importantly, I'm really not sure what it could even mean to represent consciousness/valence as a mathematical shape.
If this is the ‘primer’, I am certainly not ready for the advanced course(!).
Thanks for this answer! It was really helpful. I hadn’t spotted that the ‘empty world’ really was empty in the experiment; not sure how I missed that.
Well, all disagreements in philosophy ultimately come down to intuitions, not just those in population ethics! The question I was pressing is what, if anything, the authors think we should infer from data about intuitions. One might think you should update toward people’s intuitions, but that’s not obvious to me, not least when (1) in aggregate, people’s answers are inconsistent and (2) this isn’t something they’ve thought about.
I found this paper really interesting—so, thanks!
Two questions and a comment
First question: in broad terms, what do you think moral philosophers should infer from psychological studies of this type in general, and from this one in particular? One perspective would be for moral philosophers to update their views towards that of the population—the “500 million Elvis fans can’t be wrong” approach.
This is tempting, except that the views of the average person appear inconsistent (i.e. they weigh suffering more but also think creating neutral lives is good) and implausible by the lights of views amongst philosophers (e.g. those surveyed believe adding unhappy lives can be good where it increases average happiness). Even if the views were coherent and plausible (e.g. those surveyed converged on a single, consistent view), it would still seem open to philosophers to discount the views of non-experts who hadn't really familiarised themselves enough with the literature and so did not constitute epistemic peers.
Second question: for the adding-people experiment, how confident should we be that those surveyed were thinking solely about the value of adding the new person, as it relates to that person themselves, and not instead thinking about the effects adding a life has on other people? In skimming the paper, I couldn't see anything about how you had tested that the participants were answering the right question.
I ask because, when I speak to people about the value of adding new lives, it is incredibly hard to get them to think just about the value related to the created individuals, and not that person's parents, society, etc. Yet, to find out their views on population ethics, people need to realise they are to think only about the effects regarding the created individual themselves. Of course, I might say that adding a happy life is very good, but just because I am thinking it is good for the parents, etc.; conversely, I could answer that adding unhappy lives is bad because they are a drain on others. If I do this, I wouldn't have answered the question you want me to. As such, it's not clear to me your experiment has really tested what you said it has.
A comment: in the experiment about adding lives, you describe the populations as 'empty' and 'full'. This is confusing, as in the paper 'empty' actually means 1 million people, not an actually empty world, and 'full' means 10 billion (which is questionably 'full', too). I think you should flag this more clearly and/or use different terms; 'small' and 'large' might be better. I can imagine people having different intuitions if there are genuinely no people existing at the time, and also if the world seems more genuinely full, e.g. if it had 100 billion people.
When people say “EAs should do X”, it’s usually wise to reflect on whether that is really the case—are there skills or mindsets that members of the EA community are bringing to X?
The case I would like to see made here is why EA orgs would benefit from getting mental health services from some EA provider rather than the existing ones available. Could you elaborate on why you think this is the case? I'm not sure why you think current mental health services, e.g. regular therapists, are unapproachable, or how having an 'EA' service would get around this. I don't buy the access point, at least not for EA orgs: access is a question of funding, and that's something EA orgs plausibly have. Demand for a service leads to more of it being supplied (of course, there are elasticities). If I buy more groceries, it's not like someone else goes hungry; it's more like more groceries get produced.
No, this isn’t what I’m thinking about. I don’t understand what you’re saying here.
I assume you didn't mean it this way, but I found the tone of this comment rather brusque and dismissive. Please be mindful of that in discussions, particularly those on the EA Forum.
I’m not sure how else to explain my point. One approach to MH is to talk to each individual about what they can do. Another approach, the organisational psychology one, is to think about how to change office culture and working practices. Sort of bottom-up vs top-down.
Given my original comment, I think it’s appropriate to give a broad view of the potential forms the intervention can take and what can be achieved by a strong founding team. These services can take forms that don’t currently exist. I think it’s very feasible to find multiple useful programs or approaches that could be implemented.
I’d be interested to hear you expand on what you mean here!
While I am sympathetic to the idea of doing lots of well-being stuff, it’s not obvious why this needs a new EA-specified org.
To restate, I take it the thought is that improving the mental health of EAs could be a plausible priority because of the productivity gains from those people, which allow them to do more good—saliently, the main benefit isn't supposed to come from the welfare gains to the treated people themselves.
Seeing as people can buy mental health treatments for themselves, and orgs can pay for it for their staff, I suppose the intervention you have in mind is to improve the mental health of the organisation as a whole—that is, change the system, rather than keep the system fixed but help the people in it. This is a classic organisational psychology piece, and I'm sure there are consultants EA orgs could hire to help them with this. Despite being a huge happiness nerd, I'm actually not familiar with the world of happiness/mental-health organisational consultancies. One I do know of is Friday Pulse, but I'm sure they aren't the only people who try to do this sort of thing.
Given such things exist, it’s not obvious why self-described effective altruists should prioritise setting up more things of this type.
Regarding the well-being section, you say:
The differences between these theories are of primarily theoretical interest; they overlap sufficiently in practice that the practical implications of utilitarianism are unlikely to depend upon which of these turns out to be the correct view.
But you don’t substantiate or explain this. As a helpful suggestion, you could add a line later on pointing out that, if the different theories will agree, in practice, on which things make life go well vs badly, they are likely to agree about what sort of practical actions are good vs bad. However, different theories of well-being may well disagree on what the priorities are amongst actions, and one would need to get further into the details to investigate this.
To bang a drum: while I appreciate the effort to communicate utilitarianism to a wider world, the bit on population ethics seemed, for my tastes, too much of an opinionated ‘Trojan horse’ to lead the reader to the author’s (or authors’) practical priorities. As I’ve moaned elsewhere on this Forum, I like Introductions to be introductions, not plugs.
Yeah, I’d like this. For stuff that doesn’t get comments, it would be really interesting to know whether people read it or not.
I’m not sure if you’re disagreeing with my toy examples, or elaborating on the details—I think the latter.
Right. You’d have a fuzzy line to represent the confidence interval of ex post value, but you would still have a precise line that represented the expected value.
Thanks for this! Some minor points.
I'm puzzled by what's going on in the category "Other near-term work (near-term climate change, mental health)". The two causes in parentheses are quite different, and I have no idea what other topics fall into this. Also, this category has 12% of the people but <1% of the money: how did that happen? What are those 12% of people doing?
Also, shouldn't "global health" really be "global health and development"? If it's just "global health", that leaves out the economic stuff, e.g. GiveDirectly. Further, global health should probably either include mental health or be specified as "global physical health".
I was thinking about this recently too, and vaguely remember it being discussed somewhere and would appreciate a link myself.
To answer the question, here's a rationale for diversification, illustrated in the picture below, which I just whipped up.
Imagine you have two causes where you believe their cost-effectiveness trajectories cross at some point. Cause A does more good per unit resources than cause B at the start but hits diminishing marginal returns faster than B. Suppose you have enough resources to get to the crossover point. What do you do? Well, you fund A up to that point, then switch to B. Hey presto, you’re doing the most good by diversifying.
This scenario seems somewhat plausible in reality. Notice it’s a justification for diversification that doesn’t rely on appeals to uncertainty, either epistemic or moral. Adding empirical uncertainty doesn’t change the picture: empirical uncertainty basically means you should draw fuzzy lines instead of precise ones, and it’ll be less clear when you hit the crossover.
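For the curious, here's a minimal sketch of the crossover logic in code (Python, with entirely made-up marginal cost-effectiveness curves; the numbers are for illustration only):

```python
# Two causes with diminishing marginal returns: A starts better but fades faster.
# Marginal cost-effectiveness = extra good done per additional $1m.

def marginal_a(spent):
    return max(10 - 2 * spent, 0)

def marginal_b(spent):
    return max(6 - 0.5 * spent, 0)

budget, step = 6.0, 0.1  # $6m total, allocated in $0.1m increments
spent_a = spent_b = 0.0

for _ in range(int(budget / step)):
    # Greedy rule: each increment goes to whichever cause currently
    # does more good at the margin.
    if marginal_a(spent_a) >= marginal_b(spent_b):
        spent_a += step
    else:
        spent_b += step

print(f"A: ${spent_a:.1f}m, B: ${spent_b:.1f}m")
# A is funded alone until its marginal returns drop to B's starting level
# (after ~$2m here); past that point the budget gets split across both,
# ending at roughly $2.8m for A and $3.2m for B.
```

The point is just that a single comparable scale plus a big enough budget gets you diversification for free.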
What’s confusing for me about the worldview diversification post is that it seems to run together two justifications for, in practice, diversifying (i.e. supporting more than one thing) that are very different in nature.
One justification for diversification is based on this view about ‘crossovers’ illustrated above: basically, Open Phil has so much money, they can fund stuff in one area to the point of crossover, then start funding something else. Here, you diversify because you can compare different causes in common units and you so happen to hit crossovers. Call this “single worldview diversification” (SWD).
The other seems to rely on the idea that there are different "worldviews" (some combination of beliefs about morality and the facts) which are, in some important way, incommensurable: you can't stick things into the same units. You might think Utilitarianism and Kantianism are incommensurable in this way: they just don't talk in the same ethical terms. Apples 'n' oranges. In the EA case, one might think the "worldviews" needed to, e.g., compare the near term to the long term are, in some relevant sense, incommensurable—I won't try to explain that here, but may have a stab at it in another post. Here, you might think you can't (sensibly) compare different causes in common units. What should you do? Well, maybe you give each of them some of your total resources, rather than giving it all to one. How much do you give each? This is a bit fishy, but one might do it on the basis of how likely you think each cause is really the best (leaving aside the awkward fact that you've already said you don't think you can compare them). So if you're totally unsure, each gets 50%. Call this "multiple worldview diversification" (MWD).*
Spot the difference: the first justification for diversification comes because you can compare causes, the second because you can’t. I’m not sure if anyone has pointed this out before.
*I think MWD is best understood as an approach dealing with moral and/or empirical uncertainty. Depending on the type of uncertainty at hand, there are extant responses about how to deal with the problem that I won’t go into here. One quick example: for moral uncertainty, you might opt for ‘my favourite theory’ and give everything to the theory in which you have most credence; see Bykvist (2017) for a good summary article on moral uncertainty.
You might think it's reasonable to discount based on psychological similarity: something is less valuable to your later self the less like you that person is. Cf. the Time-Relative Interest Account of the badness of death (e.g. Holtug 2011). This wouldn't justify a pure time preference, but it would justify a contingent time preference: in reality, you value stuff less the further in the future it happens, but not because of time per se; rather, because of reduced psychological connectedness, which so happens to occur over time.
I point this out to show that someone could accept your reductio but get much the same practical result by other means.
Of course, someone who took this view would agree that some harm of size S that befalls you just before you enter the cryo chamber would be just as bad as one that befalls you as soon as you get out.
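To put the contrast in symbols (my notation, not anything from the post): letting $u_t$ be your well-being at future time $t$, pure time preference values the future as $V = \sum_t \delta^t u_t$ for some discount factor $\delta < 1$, whereas connectedness-based discounting values it as $V = \sum_t c(t)\,u_t$, where $c(t)$ is your degree of psychological connectedness to your self at $t$. On the second view, a stint in the cryo chamber leaves $c$ untouched (your psychology doesn't change while frozen), so harms just before and just after come out equally bad, even though ordinary future harms still get discounted.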
I’m really pleased to see this: I have been wondering how one would do an EA-minded evaluation of the cost-effectiveness of a start-up that runs it head to head with things like AMF. I’m particularly pleased to see an analysis of a mental health product.*
I only have one comment. You say:
The promise of mobileHealth (mHealth) is that at scale apps often have ‘zero marginal cost’ per user (much less than $12.50) and so plausibly are very cost-effective
It doesn't seem quite right that tech products have zero marginal cost. Shouldn't one include the cost of acquiring (and supporting?) a user, e.g. through advertising? This has a cost, and this cost would need to be lower than $12.50 per user, given your other assumptions. I have no idea what user acquisition costs are and whether $12.50 is high or low.
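To illustrate with made-up numbers: if acquiring a user through advertising costs, say, $5 and supporting them costs $1, the marginal cost is $6 per user. That still beats the $12.50 benchmark, but only by a factor of two, not by the near-infinite margin 'zero marginal cost' suggests; and if acquisition costs were $15 per user, the conclusion would flip.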
*(Semi-obligatory disclaimer: Peter Brietbart, MindEase’s CEO, is the chair of the board of trustees for HLI, the organisation I run)
Uhh… that shouldn’t happen from just re-plotting the same data. In fact, how is it that in the original graph, there is an increase from $400,000 to $620,000, but in the new linear axis graph, there is a decrease?
So, there was a discrepancy between the data provided for the paper and the graph in the paper itself. The graph plotted above used the data provided. I’m not sure what else to say without contacting the journal itself.
this seems to imply that rich people shouldn’t get more money because it barely makes a difference, but this also applies to poor people as well, casting doubt on whether we should bother giving money away.
I don't follow this. The claim is that money makes less of a difference than one might expect, not that it makes no difference. Obviously, there are reasons for and against working at, say, Goldman Sachs besides the salary. It does follow that, if your receiving money makes less of a difference than you would expect, then your giving it to other people, and their receiving it, will also make a smaller-than-anticipated difference. But, of course, you could do something else with your money that could be more effective than giving it away as cash—bednets, deworming, therapy, etc.
I also know almost nothing about US tax law. Call me a cynic but it seems plausible that lots (nearly all?) of the people putting their money into foundations and not spending it are doing so for tax reasons, rather than because they have a sincere concern for the longterm future.
As a communications point, this does make me wonder whether longtermist philanthropists who hypothetically campaigned for such a 'loophole' to remain open would, by extension, be seen as unscrupulous tax dodgers.
So, if you look at OECD (2013, Annex A), there are a few example questions about subjective well-being. The eudaimonic questions are sort of in your area (see p. 251), e.g. "I lead a purposeful and meaningful life" and "I am confident and capable in the activities that are important to me".
You might also be interested in Kahneman's(?) distinctions between decision, remembered, and experienced utility. It sounds like your question taps into "how will I, on reflection, feel about this decision?" and you're sampling your intuitions about how you judge life.
He may well have been asked this before, but I'd want to know what, if anything, he thinks would be lost by replacing the SDGs—at least insofar as they apply to current humans—with a measure of happiness.
Also, if/how he thinks about intergenerational trade-offs.