Ah, I see that now. Thanks.
FWIW, I was specifically looking for a disclaimer and it didn’t quickly come to my attention. It looks like a few other people in these subthreads may have also missed the disclaimer.
Yeah, I hadn’t realized it was more or less deprecated. (The page itself doesn’t seem to give any indication of that. Edit: Ah, it does. I missed the second paragraph of the sidenote when I quickly scanned for some disclaimer.)
Also, unfortunately, it’s apparently the first sublink under the 80,000 Hours site on Google if you search for 80,000 Hours.
It seems quite possible to me to have a “parameterized list”. That is, recommendations can take the shape “If X is true of you, Y and Z are good options.” And in fact 80,000 Hours does do this to some degree (via, for example, their career quiz). While this isn’t entirely personalized (it’s based only on certain attributes that 80,000 Hours highlights), it’s also far from a single, definitive list. So it doesn’t seem that there’s any insoluble tension between taking account of individual differences and communicating the same message to a broad audience—you just have to rely on the audience to do some interpreting.
I don’t particularly want to try to resolve the disagreement here, but I’d think value per dollar is pretty different for dollars at EA institutions and for dollars with (many) EA-aligned people. It seems like the whole filtering/selection process of granting is predicated on this assumption. Maybe you believe that people at CEA are the type of people that would make very good use of money regardless of their institutional affiliation?
 I’d expect it to vary from person to person depending on their alignment, commitment, competence, etc.
I am not OP but as someone who also has (minor) concerns under this heading:
Some people judge HPMoR to be of little artistic merit/low aesthetic quality
Some people find the subcultural affiliations of HPMoR off-putting (fanfiction in general, copious references to other arguably low-status fandoms)
If the recipients have negative impressions of HPMoR for reasons like the above, that could result in (unnecessarily) negative impressions of rationality/EA.
Clearly, there are also many people who like HPMoR and don’t have the above concerns. The key question is probably what fraction of recipients will have positive, neutral, and negative reactions.
It’s not at all clear to me why the whole $150k of a counterfactual salary would be counted as a cost. The most reasonable (simple) model I can think of is something like: ($150k * .1 + $60k) * 1.5 = $112.5k where the $150k*.1 term is the amount of salary they might be expected to donate from some counterfactual role. This then gives you the total “EA dollars” that the positions cost whereas your model seems to combine “EA dollars” (CEA costs) and “personal dollars” (their total salary).
I think you have some math errors:
$150k * 1.5 + $60k = $285k rather than $295k
Presumably, this should be ($150k + $60k) * 1.5 = $315k ?
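For concreteness, here’s the arithmetic from this thread as a quick sanity check (all figures in $k; the 10% donation rate is the assumption from my model above, not a figure from the original post):

```python
# Sanity-check of the cost arithmetic discussed above (all figures in $k).
counterfactual_salary = 150   # counterfactual salary
cea_cost = 60                 # direct cost of the role at CEA
overhead = 1.5                # overhead multiplier
donation_rate = 0.1           # assumed share of counterfactual salary donated

# "EA dollars" model: forgone donations plus CEA costs, times overhead.
print((counterfactual_salary * donation_rate + cea_cost) * overhead)  # 112.5

# The two corrected versions of the original calculation:
print(counterfactual_salary * overhead + cea_cost)    # 285.0 (not 295)
print((counterfactual_salary + cea_cost) * overhead)  # 315.0
```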
I have a pretty averse reaction to all the people you named, expect I would feel similarly about someone in that mold in EA, and expect many other people in EA would feel similarly. I don’t think charismatic leadership fits all that well with the other elements of EA in ways both important and incidental.
I don’t know how promising others think this is, but I quite liked Concepts for Decision Making under Severe Uncertainty with Partial Ordinal and Partial Cardinal Preferences. It tries to outline possible decision procedures once you relax some of the subjective expected utility theory assumptions you object to. For example, it talks about the possibility of having a credal set of beliefs (if one objects to the idea of assigning a single probability) and then doing maximin on this, i.e., selecting the option that has the best expected utility according to its least favorable credences.
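As a toy sketch of what that procedure looks like (my own illustration, not from the paper; the action names, utilities, and credal set are made up):

```python
# Maximin over a credal set: each action has a utility per state of the world,
# and the credal set is a list of admissible probability distributions over states.

def expected_utility(probs, utilities):
    return sum(p * u for p, u in zip(probs, utilities))

def maximin_choice(actions, credal_set):
    # For each action, find its worst-case expected utility across the credal
    # set, then pick the action whose worst case is best.
    def worst_case(utilities):
        return min(expected_utility(p, utilities) for p in credal_set)
    return max(actions, key=lambda name: worst_case(actions[name]))

# Hypothetical example: two states of the world, two admissible distributions.
credal_set = [(0.5, 0.5), (0.2, 0.8)]
actions = {
    "safe":  (1.0, 1.0),   # utility in state 1, state 2
    "risky": (3.0, -1.0),  # great in state 1, bad in state 2
}
print(maximin_choice(actions, credal_set))  # "safe"
```

Under the second distribution, “risky” has expected utility 0.2·3 + 0.8·(−1) = −0.2, so its worst case loses to “safe”, whose expected utility is 1.0 under every admissible distribution.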
There’s actually a thing called the Satisficer’s Curse (pdf) which is even more general:
The Satisficer’s Curse is a systematic overvaluation that occurs when any uncertain prospect is chosen because its estimate exceeds a positive threshold. It is the most general version of the three curses, all of which can be seen as statistical artefacts.
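A minimal Monte Carlo sketch of the effect (my own illustration, not from the paper): if you act on any noisy estimate that clears a threshold, the estimates that trigger action are systematically biased upward relative to the true values.

```python
import random

# Monte Carlo illustration of the Satisficer's Curse: condition on a noisy but
# unbiased estimate exceeding a positive threshold, and the selected estimates
# overshoot the corresponding true values on average.
random.seed(0)
threshold = 1.0
selected_errors = []
for _ in range(100_000):
    true_value = random.gauss(0, 1)              # true value of a prospect
    estimate = true_value + random.gauss(0, 1)   # unbiased but noisy estimate
    if estimate > threshold:                     # satisficing rule: act if it clears the bar
        selected_errors.append(estimate - true_value)

# Among selected prospects, the average error is positive: overvaluation.
print(sum(selected_errors) / len(selected_errors) > 0)  # True
```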
IIRC, the mechanism has problems with collusion/dissembling. For example, one backer with $46 and 4 backers with $1 each will get significantly better results by splitting their money into 5 contributions of $10 each. This seems like a problem that’s actually moderately likely to arise in practice.
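To put numbers on this: assuming the mechanism uses the quadratic funding rule, where the matched total is the square of the sum of the square roots of the contributions (the comment above doesn’t name the exact mechanism, so take this as illustrative):

```python
import math

# Quadratic-funding-style match: total funding is (sum of sqrt(contribution))^2.
# Assumed rule for illustration; the actual mechanism may differ in details.
def qf_total(contributions):
    return sum(math.sqrt(c) for c in contributions) ** 2

honest = [46, 1, 1, 1, 1]          # one $46 backer plus four $1 backers
colluding = [10, 10, 10, 10, 10]   # the same $50 split into five equal parts

print(round(qf_total(honest), 1))     # 116.3
print(round(qf_total(colluding), 1))  # 250.0
```

Splitting the same $50 into five equal contributions more than doubles the matched total, which is why the incentive to misreport seems practically worrying.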
It looks like the case you’re making in the “a prize” section is that prizes are more open to “outsiders” than grants which seems generally plausible to me. On the other hand, grants can actually fund the research itself while contestants for a prize need some source of funding. If it’s capital-intensive to mount a serious attempt at the prize, this creates a funding and vetting problem again (contestants will need money to bankroll their attempt).
I’m not convinced that a prize is particularly helpful in this case. I think of prizes as useful for inducing investment in things like public goods where private returns are limited. That doesn’t seem to be the case here; successfully creating “radically better energy generation” seems like it would be wildly remunerative. The promise of vast wealth seems like it ought to be sufficient incentive regardless of a prize.
OTOH, that’s all very first-principles and the history of innovation prizes doesn’t seem to really pay much attention to this line of criticism. Maybe prizes make particular problems more salient, etc.
This is interesting! I think it would also be useful to talk about the standard terminology in the field. Some of those terms are:
Aleatoric and epistemic uncertainty
Decisions under risk vs decisions under ignorance
Reasons I think it’s useful to talk about standard terminology:
Allows you to converse with others and understand their work more easily
Allows readers to follow up and connect with a larger body of work
Communicates to experts that you’ve seriously engaged with the field and understand it
In this particular case, I’d be interested in hearing how your categories map to the standard ones. Or, if you think they don’t, it would be interesting to hear why that is. What are the inadequacies of the standard terms and categories?
This seems very related to social impact bonds: “Social Impact Bonds are a type of bond, but not the most common type. While they operate over a fixed period of time, they do not offer a fixed rate of return. Repayment to investors is contingent upon specified social outcomes being achieved.”
Yup. It’s in Chapter 23, The Nature and Significance of Happiness.
I found a passage from the book that’s much more on the nose:
But here we will focus on a deeper threat to the importance of LS, one that stems from the very nature and point of LS attitudes. How satisfied you are with your life does not simply depend on how well you see your life going relative to your priorities. It also depends centrally on how high you set the bar for a “satisfactory” life: how good is “good enough?” Rosa might be satisfied with her life only when getting almost everything she wants, while Juliet is satisfied even when getting very little of what she wants—indeed, even when most of her goals are being frustrated. It can seem odd to think that satisfied Juliet, for whom every day is a new kick in the teeth, is better off than dissatisfied Rosa, who nonetheless succeeds in almost all the things she cares about but is more demanding. More to the point, it is not clear why LS should be so important insofar as it is a matter of how high or low individuals set the bar. Suppose Rosa has a lengthy, and not inconsequential, “life list,” and will not be satisfied until she has checked off every item on the list. It is not implausible that we should care about how well Rosa achieves her priorities—e.g., whether her goals are mostly met or roundly frustrated. But should anyone regard it as a weighty matter whether she actually gets every last thing on her list, and thus is satisfied with her life? It is doubtful, indeed, that Rosa should put much stock in it. The point here is not simply that LS can reflect unreasonable demands, but that it depends on people’s standards for a good enough life, and these bear a problematic relationship to people’s well-being, depending on various factors that have no obvious relationship to how well people’s lives are going for them. It may happen that Rosa comes to see her standards as unreasonably high and revises them downwards—not because her priorities change, but because she now finds it unseemly to be so needy. 
In this case, what drives her LS is, in part, the norms she takes to apply to her attitudes—how it is fitting to respond to her life. Such norms likely influence most people’s attitudes toward their lives—a wish to exhibit virtues like fortitude, toughness, strength, or exactingness, non-complacency, and so forth. How satisfied we are with our lives partly depends, in short, on the norms we accept regarding how it is appropriate to respond to our lives. Note that most of us accept a variety of such norms, pulling in different directions, and it can be somewhat arbitrary which norms we emphasize in thinking about our lives. You may value both fortitude and not being complacent, and it may not be obvious which to give more weight in assessing your life. You may, at different times, vary between them. Similarly, LS depends on the perspective one adopts: relative to what are you more or less satisfied? Looking at Tiny Tim, you may naturally take up a perspective on your life that makes your good fortune more salient, and so you reasonably find yourself pretty satisfied with things. Then you think about George Clooney, and your life doesn’t look so good by comparison: your satisfaction drops. Worse, it is doubtful that any perspective is uniquely the right one to take: again, it is somewhat arbitrary. Unless you are like Rosa and have bizarrely—not to say childishly—determinate criteria for how good your life has to be to qualify as a satisfactory one, it will be open to you to assess your life from any of a number of vantage points, each quite reasonable and each yielding a different verdict. Indeed, the very idea of subjecting one’s life to an all-in assessment of satisfactoriness is a bit odd. When you order a steak prepared medium and it turns up rare, its deficiencies are immediately apparent and your dissatisfaction can be given plain meaning: you send it back. Or, you don’t return to that establishment. 
But when your life has annoying features, what would it mean to deem it unsatisfactory? You can’t very well send it back. (Well . . .) Nor can you resolve to choose a different one next time around. It just isn’t clear what’s at stake in judging one’s life satisfactory or otherwise; lives are vastly harder to judge than steaks; and anyway, what counts as a reasonable expectation for a life is less than obvious since the price of admission is free—you’re just born, and there you are. So it is hard to know where to set the bar, and unsurprising that people can be so easily gotten by trivial influences to move it (Schwarz & Strack, 1999). You might be satisfied with your life simply because it beats being dead. The ideal of life satisfaction arguably imports a consumer’s concept, one most at home in retail environments, into an existential setting where metrics of customer satisfaction may be less than fitting. (It is an interesting question how far people spoke of life satisfaction before the postwar era got us in the habit of calling ourselves “consumers.”) In short, LS depends heavily on where you set the bar for a “good enough” life, and this in turn depends on factors like perspectives and norms that are substantially arbitrary and have little bearing on your well-being. The worry is not that LS fails to track some objective standard of well-being, but that we should expect that it will fail to track any sane metric of well-being, including the individual’s own. To take one example: Studies suggest that dialysis patients report normal levels of LS, which might lead us to think they don’t really mind it very much. Yet when asked to state a preference, patients said they would be willing to give up half their remaining life-years to regain normal kidney function (Riis et al., 2005; Torrance, 1976; Ubel & Loewenstein, 2008). This is about as strong as a preference gets. 
A plausible supposition is that people don’t adjust their priorities when they get kidney disease so much as they adjust their standards for what they’ll consider a satisfactory life. LS thus obscures precisely the sort of information one might expect it to provide—not because of errors or noise, but because it is not the sort of thing that is supposed in any straightforward way to yield that information. LS is not that sort of beast. The claim is not that LS measures never provide useful information about well-being. In fact they frequently do, because the perceived welfare information is in there somewhere, and differences in norms and perspectives may often cancel out over large populations. They may not cancel out, however, where norms and perspectives systematically differ, and this is a serious problem in many contexts, especially cross-cultural comparisons using LS (Haybron, 2007, 2008). But what the points raised in this section chiefly indicate about LS measures is that we cannot support conclusions about absolute levels of well-being with facts about LS. That people are satisfied with their lives does not so much as hint that their lives are going well relative to their priorities. If we wish reliably to assess how people see their lives going for them, we need a better yardstick than life satisfaction.
Ah, yeah. I didn’t mean to suggest that the philosophers have it all worked out. What I meant is that I think the philosophers seem to share your goals. In other words, I (as a non-professional in either psychology or philosophy) think if someone came up to a psychologist and said, “I’ve come up with these edge cases for ‘life satisfaction’”, they’d more or less reply, “That’s regrettable. Moving on...”. On the other hand, if someone came up to a philosopher and said, “I’ve come up with edge cases for ‘eudaimonia’”, they might reply, “Yes, edge cases like these are among my central concerns. Here’s the existing work on the matter and here are my current attempts at a resolution.”
Subsidizing a prediction market seems like one of the more promising approaches to me. There’s a write-up of what that would look like more concretely at: Subsidizing prediction markets. Unfortunately, a quick search also turns up a theoretical limitation of this approach: Subsidized Prediction Markets for Risk Averse Traders.
My impression is that the term “life satisfaction” sees the heaviest use in psychology, where a full philosophical analysis of the necessary and sufficient properties of “life satisfaction” isn’t especially desired or useful. As long as the term denotes a concept with some internal consistency and we all use the term in roughly compatible ways, we can usefully use it in measurements.
If you’re looking for a concept that’s a load-bearing part of your ethics, primarily psychological constructs like “life satisfaction” aren’t a great fit. I think the discussions you’d want to look at for these more philosophical purposes are discussions around eudaimonia, hedonia, etc.