Senior Behavioural Scientist at Rethink Priorities
Jamie E
Several thousand per month (not double figure thousands)
Great ideas.
For options in 3, there is some overlap with what some other large and established surveys cover—you may for example be interested to look up the General Social Survey (GSS), which has been going for decades and covers a range of social issues like trust in the medical establishment, trust in ‘science’, general social trust, alongside all sorts of other things—you can try searching for variables here: https://gssdataexplorer.norc.org/variables/vfilter
Thanks, we’re excited about it!
Hi Craig—the current restriction to the US population certainly doesn’t represent that we’re not ultimately interested in expanding further. The current funding covers the US, and we started here for several reasons:
- Logistical reasons to do with access to samples, which is much less feasible in many other countries
- Lots of up-to-date, readily available information on US population demographics, which makes the analytic choices involved in getting to a representative sample more feasible
- Many existing organisations are particularly interested in US public opinion and how it might inform policy
But other countries are of interest too for sure!
We feel the same!
At a philosophical level, I don’t find it very convincing that even a perfect recovery/replica would be righting any wrongs experienced by the subject in the past, though I can’t definitively explain why: only that I don’t think replicas are ‘the same lives’ as the original, or meaningfully connected to them in any moral way. For example, suppose I cloned you absolutely perfectly now and then said: I’m going to torture you for the rest of your life, but don’t worry, your clone will be experiencing equal and opposite pleasures. Would you think this is good (or evens out) for you as the single subject being tortured, and would it correct for the injustice being done to you as the subject experiencing the torture? All that is being done is making a new person and giving them a different experience from the other one.
I’m also very sympathetic to a preference utilitarian perspective, much more so than just suffering vs. happiness. But to me the preference satisfaction comes from the realised state of the world actually being as desired, and not from specifically experiencing that satisfaction. For example, people will willingly die in the name of furthering a cause they want to see realised, knowing full well they will not experience it. One would consider it something of a compensation for their sacrifice if their goals are realised after, or especially because of, their death.
Similarly, I think it would help to right past wrongs if, in the future, the past person’s desired state of the world comes to pass. But I still don’t see how it is any better for that person, or somehow corrected further, if some replica of their self experiences it.
One might imagine that the overall state of the world is more positive because there is this replica that is really ecstatic about their preferences being realised and being able to experience it, but specifically in terms of righting the wrong I don’t think it has added anything. They are not the same subject as the one who experienced the wrong—so it does not correct for their specific experience—and the payout is in any case in the realised state of the world and not in that past subject having to experience it.
Ah I see, yes that seems to make a meaningful difference regarding the need to have the self experience it then. Although I would still question if having the replica achieves this. If we go to the clone example, if I clone you now with all your thoughts and desires and you remain unsatisfied, but I tell you that your clone is—contemporaneous with your continued existence—living a life in which all your desires are satisfied, would you find that satisfying? For me at least that would not be satisfying or reassuring at all. I don’t see a principled way in which stretching the replication process over time so that you no longer exist when the copy is created suddenly changes this. The preference would seem to be that the person’s subjective experience is different in the ways that they hope for, but all that is being done is creating an additional and alternative subjective experience that is like theirs, which experiences the good things instead.
Ha well, I think you might find a fair few people share your intuition, especially in some strands of EA that intersect with transhumanism.
I don’t personally share the intuition, but I think if I did then it would also make sense to me that I would expect the replica’s satisfaction would be correspondingly reduced to the extent they know some other self that they are identified with is or was not satisfied. But I appreciate at this point we’re just getting to conflicting intuitions!
For some further information on Qvist’s background, you could also check out his google scholar page: https://scholar.google.com/citations?hl=en&user=JFopkowAAAAJ&view_op=list_works&sortby=pubdate
Two 2022 papers have ‘repowering coal’ in the title, so they presumably give some further background on the strategy or basis of these ideas, though I have not read them myself:
Repowering a Coal Power Unit with Small Modular Reactors and Thermal Energy Storage
Repowering Coal Power in China by Nuclear Energy—Implementation Strategy and Potential
“Unless critics seriously want billionaires to deliberately try to do less good rather than more, it’s hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.”
I don’t think the only alternative to wanting billionaires to actively try to do good is the obviously foolish position that they should try to do less good. There might be many reasons not to promote the idea of billionaires ‘doing more good’. For example, you might believe they have an inordinate amount of power, and that in actively trying to do good they will ultimately do harm, whether through misalignment or through mistakes in EA’s ideas of what would do good, even if the person remains aligned. This is a particular problem at certain magnitudes of money and influence, and much less of an issue for people with less power, where the damage would be smaller. You may also simply not want to draw such powerful people’s attention to the orders of magnitude more influence they could have.
I think your statement is arguing that the possible effect on billionaires is not an argument against EA principles per se, and on that I’d agree. But in my view that reasonable side of the argument loses force when paired with what seems like a silly claim: that critics would be arguing something no one would argue.
Sure, I don’t think what you’re saying is technically incorrect. It’s just that, rhetorically, I would read you as less sincere and therefore less convincing in engaging with critics if there seems to be an implication along the lines of ‘unless people believe something stupid, their critiques don’t make sense’. But this may also be a reaction to seeing only the excerpted quote and not the whole text.
US public opinion of AI policy and risk
Thanks for collating all these—very helpful!
Pretty funny passage in the opinion:
“under this Court’s dormant Commerce Clause decisions, no State may use its laws to discriminate purposefully against out-of-state economic interests. But the pork producers do not suggest that California’s law offends this principle. Instead, they invite us to fashion two new and more aggressive constitutional restrictions on the ability of States to regulate goods sold within their borders. We decline that invitation. While the Constitution addresses many weighty issues, the type of pork chops California merchants may sell is not on that list.”
Definitely quite a difference (just to check: are the Metaculus numbers the likelihood of each risk being picked as the most likely one, rather than their likelihood ratings?).
I was struck, though not surprised, by the very strong political differences for the risks. It suggests to me that some people might be engaging in a kind of signalling about ‘what we should be most worried about right now’, or perhaps even picking what a ‘good person’ on their side is supposed to pick, as opposed to carefully sitting and thinking about the most likely thing to cause extinction. That is roughly the opposite of how I imagine a forecaster would approach such a question.
Given the differences in the questions, it doesn’t seem correct to compare the raw probabilities across these; our question was also specifically about extinction rather than just a catastrophe. That being said, there may be some truth to the idea that this implies a difference between the population estimates and the Metaculus estimates if we rank them: AI risk comes out top in the Metaculus ratings and bottom with the public, and climate change also shows a sizable rank difference.
One wrinkle in taking the rankings like this is that people were only allowed to pick one item in our questions, so the rankings could differ if people had instead rated each risk and we had then ranked those ratings. This would be the case if, e.g., every other risk is more likely than AI to be someone’s single top pick, but many people have AI as their second risk, which would imply a very high ordinal ranking that we can’t see from the distribution of top picks.
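As a toy illustration of this wrinkle (the respondents, risks, and numbers below are invented, not data from our survey or from Metaculus), a risk can receive zero top picks while still having the highest average rating:

```python
# Hypothetical ratings (higher = more likely) from five invented respondents.
ratings = {
    "nuclear":  [9, 2, 2, 8, 2],
    "climate":  [2, 9, 2, 2, 8],
    "pandemic": [2, 2, 9, 2, 2],
    "AI":       [8, 8, 8, 7, 7],
}
risks = list(ratings)
n_respondents = len(next(iter(ratings.values())))

# Ranking 1: count how often each risk is a respondent's single top pick
# (analogous to a forced-choice question format).
top_picks = {r: 0 for r in risks}
for i in range(n_respondents):
    top = max(risks, key=lambda r: ratings[r][i])
    top_picks[top] += 1

# Ranking 2: mean rating per risk (what we would see if respondents
# rated every risk and we ranked those ratings).
mean_rating = {r: sum(v) / len(v) for r, v in ratings.items()}

# Here AI is nobody's top pick, yet it has the highest mean rating:
# its consistent second-place showing is invisible in the top-pick counts.
```

The point is only that the two ranking methods can disagree, not that they do in our data.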
How bad is authoritarianism anyways? China and Taiwan’s life satisfaction isn’t that different.
I’m not sure why this is deeply confusing. I don’t think we should be assessing whether authoritarian regimes are bad based on measures of life satisfaction, and if that is what one wants to do, then certainly not via a 1v1 comparison of just two countries.
Is the claim that they are not that different on this metric even true: where is the source, and how many alternative sources or similar metrics are there? If true, are all the things that feed into people’s responses to a life-satisfaction survey the same in these different places (how confident are respondents that they can give their true opinions, and how low have their aspirations or capacity to contemplate a flourishing life become)? And are the measures representative of the actual population experience within those countries (what about the satisfaction of people in encampments in China that help sustain the regime and quash dissent)?
Even granting that the ratings reflect the same processes in each country and are representative, Taiwan lives under threat of occupation and invasion, and there are many other differences between the two countries. The case is then just a confounded comparison of one country vs. one other, which is not an especially good way to assess whether the single variable chosen to define those countries makes a difference.
Thanks for the response and the links to these graphs. This is just a quick look and so could be wrong, but looking into some files from the World Values Survey, I find information which, if correct, would make me give this survey less than 1% weight in considering whether we should be concerned about a country being annexed. The population of China is ~1.4 billion; the population of Taiwan is ~24 million. The sample size for the Chinese data appears to be about 2,300 people, and for Taiwan about 1,200. I tried to upload a screenshot, which I can’t work out how to do, but the numbers are in the doc “WV6 Results By Country v20180912” on this page: https://www.worldvaluessurvey.org/WVSDocumentationWV6.jsp
I do not think we can have any faith at all that a sample of 2,300 people comes close to representing all the variation in factors relevant to happiness or satisfaction across the population of China. The ratio of population to respondents is over 600,000 to 1: each respondent stands in for more people than some estimates of the entire population of Oslo, Glasgow, or Rotterdam (https://worldpopulationreview.com/continents/europe/cities).
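To make the arithmetic behind those ratios explicit (using the approximate population and sample figures quoted above):

```python
# Approximate figures quoted above (WVS wave 6 sample sizes,
# rounded national populations).
china_pop, china_n = 1_400_000_000, 2_300
taiwan_pop, taiwan_n = 24_000_000, 1_200

china_ratio = china_pop / china_n     # roughly 609,000 people per respondent
taiwan_ratio = taiwan_pop / taiwan_n  # roughly 20,000 people per respondent
```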
I may be missing something or making some basic error there, but if it is roughly correct, then I would indeed call it silly to factor this survey result into deciding how we should respond to the annexation of Taiwan. I do not think such a question is in principle about life satisfaction/happiness, but even if it were, I would not use this information.
Thank you Chris! One would think that years of disappointing cake and pizza apportionment would have taught people that we don’t read things very precisely when they’re circles, but the pie chart remains...