Introduction
In my earlier post Estimating value from pairwise comparisons I wrote about a reasonable statistical model for the pairwise comparison experiments that Nuño Sempere at QURI has been doing (see also his sequence on estimating value). While writing that post I started thinking about fields where utility extraction is important and decided to take a look at health economics and environmental economics. This post is a write-up of my attempt at a light survey of the literature on this topic, with particular attention paid to pairwise experiments.
What do I mean by pairwise comparisons? Suppose I ask you “Do you prefer to lose your arm or your leg?” That’s a binary pairwise comparison between two outcomes, A and B, where A is losing your arm and B is losing your leg. Such comparison studies are truly widespread, going back at least to McFadden (1973), which has tens of thousands of Google Scholar citations! Models such as these are called discrete choice models, and I will also refer to them as dichotomous (binary) comparisons, which is terminology I’ve seen in the economics literature. These models cannot properly measure the scale of the preferences, though. There are many reasons why we care about the scale of preferences/utilities. For instance, we need scaling to compare preferences between different studies, and we need scales when we face uncertainty, as part of expected utility theory.
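As a rough illustration (mine, not from the post; the outcomes and response tallies are invented), a discrete choice model of this kind can be sketched in a few lines. This is a Bradley-Terry-style model: each judgment “outcome i is worse than outcome j” has probability sigmoid(u_i - u_j), and the latent scores u are fit by gradient ascent on the log-likelihood.

```python
import math

# Invented example data: each pair (i, j) records that outcome i was
# judged worse than outcome j by some respondent.
items = ["lose arm", "lose leg", "lose finger"]
comparisons = [(1, 0), (1, 0), (0, 1), (1, 2), (0, 2), (2, 0), (1, 2)]

# Bradley-Terry-style model: P(i judged worse than j) = sigmoid(u_i - u_j),
# where u is a latent "badness" score. Fit by gradient ascent on the
# log-likelihood.
u = [0.0] * len(items)
lr = 0.1
for _ in range(2000):
    grad = [0.0] * len(items)
    for i, j in comparisons:
        p = 1.0 / (1.0 + math.exp(-(u[i] - u[j])))  # P(i judged worse)
        grad[i] += 1.0 - p
        grad[j] -= 1.0 - p
    u = [ui + lr * g for ui, g in zip(u, grad)]
    # Only differences u_i - u_j are identified, so pin down the mean.
    mean = sum(u) / len(u)
    u = [ui - mean for ui in u]

for name, score in sorted(zip(items, u), key=lambda t: -t[1]):
    print(f"{name}: {score:+.2f}")
```

Note the centering step: nothing in binary data pins down the origin of u, and multiplying all scores by a constant while rescaling the assumed noise fits equally well. That is exactly the identification problem graded comparisons are meant to address.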
To take scale into account we can ask questions such as “How many times worse would it be to lose an arm than to lose a leg?”. Then you might answer, say, 1/10, meaning you think losing a leg is ten times worse than losing an arm. Or 10, meaning you think losing an arm is ten times worse than losing a leg. These questions are harder than the corresponding binary questions, though, and I can imagine respondents being flabbergasted by them. Questions of this kind are called graded (or ratio) comparisons in the literature. The idea is old: it goes way back to Thurstone (1927)!
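To see how ratio answers pin down scale, here is a small sketch (again mine, with invented items and responses): treat each answer as R ≈ v_i / v_j, take logs, and solve a least-squares problem for the log-values, which are then identified up to a single multiplicative constant.

```python
import math

items = ["lose arm", "lose leg", "lose finger"]
# Invented responses: (i, j, R) means "outcome i is R times worse than j".
responses = [(1, 0, 8.0), (1, 0, 12.0), (0, 2, 20.0), (1, 2, 200.0)]

# Model log R ~ log v_i - log v_j and minimize the squared errors in log
# space by gradient descent.
logv = [0.0] * len(items)
lr = 0.05
for _ in range(5000):
    grad = [0.0] * len(items)
    for i, j, r in responses:
        err = (logv[i] - logv[j]) - math.log(r)
        grad[i] += 2 * err
        grad[j] -= 2 * err
    logv = [x - lr * g for x, g in zip(logv, grad)]

# Values are identified up to one multiplicative constant; fix it by
# giving the last item a value of 1.
shift = logv[-1]
v = [math.exp(x - shift) for x in logv]
for name, value in zip(items, v):
    print(f"{name}: {value:.1f}")
```

Unlike the binary model, the ratio data determine how many times worse each outcome is than the reference outcome, which is exactly the scale information needed for expected-utility calculations.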
I’m excited about the prospect of using pairwise comparisons on a large scale. Here are some applications:
Estimate the value of research, both in academia and in effective altruism. This post presents a small-scale experiment in the EA context. It would be interesting to do a similar, though probably more rigorous and lengthy, experiment inside academia. In my experience many academics do not feel that their own or other people’s work is important; they research whatever is publishable since it’s their job. Attempting to quantify researchers’ understanding of the value of their own and other people’s research could at least potentially push some researchers in a more effective direction.
Estimating the value of EA projects. This should be pretty obvious. One of the strengths of the pairwise value estimation method is crowd-sourcing: since it’s so easy to say “I prefer A to B”, or perhaps “A is k times better than B”, the bar for participation is likely to be lower than, say, participating in Metaculus, which is a real hassle. A possible application would be crowd-sourced valuation of small projects, e.g. something like Relative Impact of the First 10 EA Forum Prize Winners.
Descriptive ethics. You could estimate moral weights for various species. You could get an understanding of how people vary in their moral valuations. You could run experiments akin to the experiments underlying moral foundations theory, but with a much more quantitative flavor. I haven’t thought deeply about it, but I imagine studies of this sort would be important in the context of moral uncertainty.
Summary of thoughts from the short literature review
See the linked post for more information.
- I had a peek at value estimation in economics and marketing. There is a sizable literature here, and more work is needed to figure out exactly what is relevant for effective altruists. Discrete choice models are applied a lot in economics, but these models are not able to estimate the scaling of the values. Marketing researchers prefer graded pairwise comparisons, which are equivalent to the pairwise method used here, but with limits on how much you can prefer one choice to another.
- I’m enthusiastic about the prospects of doing larger-scale paired comparison studies on EA topics. The first step would be to finish the statistical framework I started here, then do a small-scale study suitable for a methodological journal in, e.g., psychology or economics. Then we could run a larger-scale study.
- Most examples I’ve seen in health economics, environmental economics, and marketing are only tangentially related to effective altruism. (I don’t claim relevant examples don’t exist; there are probably many studies in health economics relevant to EA.) But the topics of cognitive burden and experimental design are relevant for anyone involved with value estimation. It would be good to have at least a medium-effort report on these topics; I would certainly appreciate it! The literature probably contains a good deal of valuable insights for those sufficiently able and motivated to trudge through it.
- There is a reasonable number of statistical papers on graded comparisons, but most of them are decades old. They will be very difficult to read unless you’re at the level of a capable master’s student in statistics. But summarizing and extending their research could potentially make an effective thesis!
I found this really useful, kudos for writing it.
I don’t feel like I understand the intense academic focus on choice modeling and similar over asking people directly for the relative values, particularly with something like probability distributions.
I get that the latter requires more sophisticated users, but it also provides far more precision. For a lot of decision-making, you really want a simple real-valued utility function.
In EA I think many of our main decision makers are sophisticated enough for the latter methods. Also, for those who aren’t yet, I’d be curious to try methods like giving some representative samples the necessary training/education, then doing elicitation on these enlightened groups.
If there really isn’t much literature or tooling on numeric/distribution/utility-function elicitation, it seems like it should be really low hanging fruit.
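For what it’s worth, here is a minimal sketch of what the distribution elicitation described above could look like (my invention; the intervals are made up): ask each respondent for a 90% interval on the relative-value ratio, fit a log-normal to each answer, and pool the answers as a mixture.

```python
import math
import random
import statistics

random.seed(0)

# Invented data: each respondent's (low, high) 90% interval for the
# ratio "how many times better is project A than project B".
intervals = [(2.0, 20.0), (5.0, 50.0), (1.5, 30.0)]

samples = []
for low, high in intervals:
    # Fit a log-normal whose 5th/95th percentiles match the interval
    # (1.645 is the standard normal 95th percentile).
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    samples += [random.lognormvariate(mu, sigma) for _ in range(10_000)]

pooled = statistics.median(samples)
print(f"pooled median ratio: {pooled:.1f}")
```

The pooled distribution keeps each respondent’s uncertainty instead of collapsing it to a point estimate, which is what makes this richer than a single graded comparison.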
Really glad to see this, thanks Jonas
Hi Jonas,
This is a fascinating and important topic, but I fear that many EA Forum readers might not be familiar with some of the more technical economic and behavioral sciences terms that you’re using here.
I’d kindly suggest revising or reposting something where terms like ‘value estimation’, ‘discrete choice models’, ‘graded pairwise comparisons’, etc are explained just a bit more, and where the overall significance of value estimation for EA is unpacked a little more? Just a friendly suggestion!
I think it’s important to build more connections between EA approaches to value (e.g. in AI alignment) and existing behavioral sciences methods for studying values.
Thanks for your suggestions! I’ve been a big fan of yours for many years, by the way; Mating Intelligence is the article collection that made me want to become an evolutionary psychologist (I ended up a statistician though, mostly due to the much safer career path).
I notice now that I didn’t say in the post that these four points are just a summary; the meat of the post is in the link. I think I have explained these terms in the linked post, at least graded pairwise comparisons and discrete choice models. But yeah, I will modify the summary to use less technical jargon and provide an introduction.
Yes, and also to academia in general. I honestly didn’t think about AI alignment when writing this post, but that could be one of the applications.
Jonas—Hi! Glad that Mating Intelligence was inspiring! I agree that statistics is generally a much safer career path than ev psych.
Will have a proper look at your linked essay...