Not on voting directly but relatedly: when we asked a nationally representative sample about explicit attitudes toward present and future people, we did not find evidence to support the claim that younger people consider future people as equally deserving of help, though we did find that older people prioritise present people more than younger people do.
Also, see Larks’ quick literature review on psychology research, which suggested “that older people discount the future less than younger people, which might suggest giving their votes more weight.”
I believe this point on social theory was discussed during Mahendra Prasad's talk at 2019's EA Global London (I didn't attend it but Mahendra sent me the slides). The hypothesis that deliberation could shift individual preferences toward single-peakedness appears to be supported by deliberative polling experiments (e.g. Farrar et al. 2010). I did not see a neat way to explain this point in the essay, but have included a small mention of it instead. Thanks for offering this useful summary!
Thanks for the useful comments and the recommendation of the Jason Brennan book.
I’ve looked through the evidence on deliberation he cites as being more damning than people realize. He relies mostly on a review from 2002 (Mendelberg). There have been a number of reviews of evidence since then which this essay also draws upon.
Brennan’s main objections also relate not to the main decision-making improvements of concern here (e.g. opinion change and knowledge gain) but mostly to who is included and whether deliberation fosters an engaged civil society. He is also warmer towards Deliberative Polling specifically and acknowledges that further research could win him over (page 66 of his book). Arguably we now have a lot more of such evidence.
With further regard to a fairer hearing/study selection, I have made a direct reply to Matt_Lerner’s comment and updated the text to include more counterarguments.
Thanks for that Dal Bó et al. paper recommendation. It doesn’t seem to me to follow that politicians being “smarter and better leaders” would also make them better deliberators. Even if politicians are likely to be “better” than citizens in this regard, I think the reasons mentioned to be sceptical of proposals for deliberative reforms of existing bodies make this seem somewhat less attractive than citizen bodies that might still be good deliberators.
Thanks for your thoughts!
Cross-country comparisons & reverse causation:
I agree that the cross-country comparisons do not offer much causal inference and, as noted in the piece, this would be an interesting area for further research.
The file-drawer effect:
With regard to your and other commenters’ points on study selection and giving the other side a fair hearing, I have made some updates throughout the text, but especially in the “Impact of deliberation” preamble, to emphasise some counterpoints or mixed findings in the literature, in addition to the already existing section on “Reasons to doubt deliberative mini publics”. Even after clarifying some uncertainty around the effects of deliberation and underscoring the need for more high-quality research in this area, we think the existing evidence base for deliberative reforms compares favourably with other interventions of this sort. For me, a greater concern is how to institutionalise deliberation so that its effects have direct impact.
Enacting deliberative democracy on a large scale:
I am very hesitant to make a proposal for “enacting deliberative democracy on a large scale somewhere besides China” because, as noted in the piece, deliberative democracy is a much larger concept of a macro-political system consisting of various sites of deliberation, compared to more limited democratic (or undemocratic) acts of deliberation. To me, the former is not obviously the best or most tractable proposal since it could involve changing the entire political system and ecosystem of institutions. There are possibilities for small-scale deliberation to inform the larger system (without having to have mass deliberation): small-scale deliberative councils or polling across a polity can offer advice, as in the case of AmericaSpeaks, or inform the wider electorate, as in the case of Citizens’ Initiative Reviews; and of course one could consider variants on Rupert Read’s Guardians of the Future scheme, where a small number deliberate and then either advise or have an array of different powers to influence the legislature (or to influence the wider public through their oversight). These do not seem to be any harder to implement than other proposals such as approval voting or age-weighted voting, and have the advantage of already having been adopted in a number of polities.
In addition to the analyses SoGive conducted that Sanjay has listed, Rethink Priorities conducted some tests which provide further evidence for the claims made, and speak to the question Will MacAskill raised “Do younger people actually have more future-oriented views?” The samples do appear to be more presentdayist than longtermist, especially so for older respondents.
In both samples we find evidence in support of the hypothesis that there is a preference for prioritising helping people now rather than considering people in the future as of equal priority. 47-61% of respondents placed greater importance on prioritising present people than on treating future people as equal, while 18-34% preferred treating future people as equal more than they preferred prioritising present people. The rest placed the same importance on both.
A Wilcoxon matched-pairs signed-ranks test and a sign test of matched pairs both find data, in both samples, that would be surprising given a null of no difference in the distributions or between the medians of Prioritise Present and Future Equal. Therefore, it seems safe to reject the null at a 5% long-run error rate, and the tests were powered to detect a small effect size 80% of the time if one exists in reality. One-sided tests suggest we can reject the null hypotheses that there is no difference and that Future Equal has a larger proportion than Prioritise Present, but we cannot reject that Prioritise Present has a larger proportion than Future Equal.
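For readers unfamiliar with these matched-pairs tests, here is a minimal sketch in Python using invented Likert-style data; the variable names and response scales are assumptions for illustration, not the actual survey data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired ratings per respondent (Likert-style, illustrative only)
rng = np.random.default_rng(0)
prioritise_present = rng.integers(3, 8, size=200)  # agreement with "Prioritise Present"
future_equal = rng.integers(1, 6, size=200)        # agreement with "Future Equal"

# Wilcoxon matched-pairs signed-ranks test (two-sided) on the paired ratings
w_stat, w_p = stats.wilcoxon(prioritise_present, future_equal)

# Sign test: count pairs where Prioritise Present > Future Equal (ties dropped),
# then compare against a 50/50 binomial null
diffs = prioritise_present - future_equal
n_pos = int((diffs > 0).sum())
n_nonzero = int((diffs != 0).sum())
sign_p = stats.binomtest(n_pos, n_nonzero, p=0.5).pvalue

print(w_p, sign_p)
```

Both tests respect the paired structure of the data and make no normality assumption, which suits ordinal survey responses.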
We do not find evidence to support the claim that younger people consider future people as equally deserving of help, though we do find that older respondents prioritise present people more than younger respondents do. Older people are more likely to always prioritise present people than choose any number of future people to help, though younger people only seem willing to choose future people when there are more people in the future than the present to be helped.
We did not find any significant correlation between age and the explicit question of treating future people as equal. We found a small positive correlation (Spearman, Šidák-adjusted, r=0.17, p<0.01) between age and the explicit prioritise present question, and a regression including Future Equal, Left-Right, Age, and Mean Charity Likeliness suggests older respondents were more likely than younger respondents to Agree/Strongly Agree to prioritising helping people now, though as likely as younger respondents to Slightly Disagree/Disagree/Strongly Disagree.
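A rough sketch of the correlation-plus-Šidák-adjustment step described above, using simulated data; the sample size, number of items in the test family, and variable names are all assumptions for illustration.

```python
import numpy as np
from scipy import stats

# Simulated data: age weakly predicts a 1-7 "prioritise present" rating
rng = np.random.default_rng(1)
age = rng.integers(18, 80, size=300)
prioritise_present = np.clip((age / 15 + rng.normal(0, 2, size=300)).round(), 1, 7)

# Spearman rank correlation between age and the ordinal rating
r, p = stats.spearmanr(age, prioritise_present)

# Sidak correction for m tests in the same family (m = 4 is an assumption)
m = 4
p_sidak = 1 - (1 - p) ** m

print(round(r, 2), p_sidak)
```

The Šidák adjustment controls the family-wise error rate across the several attitude items tested against age, and is slightly less conservative than Bonferroni.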
Of those who gave an integer in response to the tradeoff question, which asked respondents to offer a number of future lives to improve instead of 1000 present lives, very few (~7%) respondents under the age of 35 gave an answer less than 1001, while ~26% of those 35 and over did. None of the under-35s gave a response lower than one hundred, while 44% of the 35-and-overs did, with 14% giving a value of 1. It is hard to know whether these respondents really preferred improving the life of 1 person 500 years from now rather than 1000 people now, or whether they answered incorrectly.
Finally, a multinomial logistic regression (with Always Present as the base category, plus Always Future, 1001 or more, and Less than 1001) suggests that increases in age are negatively associated (-0.03, p<0.0001) with choosing an integer rather than Always Present; that is, older respondents are more likely to choose always improving the lives of present people, no matter the number of future lives improved, than to offer any number of future people it would be better to help.
Thank you Gregory for the very constructive criticism, it strikes me as one of the most useful types of comments a post can receive, and is good for me personally as a researcher.
“Misuse of Chi^2”
That is a very fair critique of the use of Chi^2 here. I have replaced the Chi^2 tests with K-W tests where appropriate and made a comment in the “updates and corrections” section noting this. Replacing the Chi^2 tests with K-W tests did not change any of our results in any of the sections except politics (which became non-significant). Looking into the change in the politics finding would require more work at this stage to drill down into more detail, and the regression results presented later suggest doing this might not be of much added value.
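As a minimal sketch of the substitution (with invented data; the group labels and 1-5 rating scale are assumptions, not the survey's coding):

```python
import numpy as np
from scipy import stats

# Hypothetical welcomeness ratings (1-5) for three illustrative groups
rng = np.random.default_rng(3)
group_a = rng.integers(1, 6, size=80)
group_b = rng.integers(1, 6, size=80)
group_c = rng.integers(2, 6, size=80)

# Kruskal-Wallis compares the rank distributions of the ordinal outcome
# across groups, respecting the ordering that a Chi^2 test of independence ignores
h_stat, p = stats.kruskal(group_a, group_b, group_c)
print(h_stat, p)
```

The practical difference is that K-W treats the outcome as ordinal rather than as unordered categories, which is the appropriate assumption for a rating scale.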
My intention in each sub-section was to report whether there was any significant relationship (using the inappropriate Chi^2 test) or to use inferential-style language in the cases where I used t-tests. In cases where I had not found a relationship (e.g., First Heard of EA), I used language to that effect: “These differences are neither significant nor very substantial”. In the specific case of age that you mention, I mistakenly diverged from this intended style by not using either a reference to a significance test or language to that effect. I have added the K-W test to this section. Certainly, more can be done to ensure the style is consistent and does not mislead the reader.
“The ordered regression”
You’re right that the discussion of the regression was insufficient. I wanted to include the regression in the post because, as you mentioned, regression analysis can do a lot to clarify these relationships. But I decided to keep the discussion short because the regression seemed to offer very limited practical significance (as you pointed out). Had I decided to give it more weight in my analysis then it certainly would be appropriate to offer a fuller explanation. Nonetheless, I should have been clearer about the limited usefulness of the regression, and noted it as the reason for the short discussion.
Regardless, here’s a more detailed explanation:
Variables in the model (and piece in general) were chosen based on cleavages in EA we have found in previous posts, to explore how they might differ in terms of welcomeness. “Top Priority” was a separate model because so many respondents either did not give a top priority or gave many and thus were excluded. It was disappointing that the factors in the survey data explained so little of the variation. Nevertheless, I thought it would still be of interest to see that the major themes we have been discussing in the survey series so far don’t seem to be very important on this measure.
The line regarding political spectrum does indeed appear to be a mistake so I have removed it and stated something to this effect in the “updates and corrections” section.
For simplicity, Country and Top Priority Cause were each presented as a variable where the most popular response was compared to all the others combined. These were the USA and Global Poverty, though the table and discussion should have been more explicit about this, and have been updated accordingly. Country was categorised into the top countries by number of responses: USA, UK, Germany, Australia, Canada, Switzerland, Sweden, Netherlands, and “other”. The initial significance we noted in both of these categories was in comparison only to the most popular response; those prioritising AI Risk and Meta Charities appeared significantly more likely to view EA as more welcoming compared to Global Poverty, and those EAs from Australia and Canada appeared significantly more likely to view EA as more welcoming compared to American EAs. However, it would have been more appropriate to model each country as a dummy variable also, which has been done in the regression table linked to here. Because our previous phrasing of this result could be misinterpreted, we have decided to de-emphasise this conclusion.
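The dummy-coding step can be sketched as follows; this uses a toy frame with the country categories named above, with USA dropped so it serves as the reference level (the column names are illustrative, not the survey's actual variable names).

```python
import pandas as pd

# Toy data using the country categories named in the text
df = pd.DataFrame({
    "country": ["USA", "UK", "Germany", "Australia", "Canada",
                "Switzerland", "Sweden", "Netherlands", "Other", "USA"]
})

# One indicator column per country; drop USA so it is the baseline,
# as in a regression comparing each country to American respondents
dummies = pd.get_dummies(df["country"], prefix="country")
dummies = dummies.drop(columns="country_USA")
print(sorted(dummies.columns))
```

Each remaining coefficient in a regression on these columns is then interpretable as the difference from the USA baseline, rather than from "everyone else combined".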
As you point out, measuring the EA-related sentiment among potential EAs and/or people who left EA was unfortunately impossible with the main survey and would require actively reaching out to these highly dispersed groups. There was no intention in this post to argue how good the movement is at welcoming people into EA overall, although some may attempt to do so based on the results presented here, and so it is wise to add caveats about the limits to doing so. I think your suggestion of focusing more on population sizes relative to a baseline (where possible to establish) is a great idea as a first step in moving in that direction. If this were the aim of the post then certainly the results presented here do little to accomplish that goal. Instead, we could only look at how welcoming people already in EA think it is, the results of which I don’t think are “all but uninterpretable”. There do seem to be meaningful differences in welcomeness perceptions within our sample that still seem worth talking about, even if we can’t see the differences outside our sample. If we think the differences in perceived welcomeness are predictors of dropping out of EA, then these findings might hint at factors that influence retention. Again, our data do not allow us to make these inferences about retention, but they could be useful signposts for further analyses to explore how community perceptions of welcomeness may affect EA retention.
In fact, we debated internally whether to publish this piece at all due to concerns about selection bias, and we were unsure what conclusions we could actually draw. We ultimately went ahead with publishing it, though with the decision not to make any specific recommendations. Even so, I can see how we ended up overstating what can be concluded from this data. I certainly share your concern that any “policy” devised simply by looking at the results presented here would almost certainly miss the mark. It was not the intention here to make policy suggestions on how to make EA more welcoming (though there is a sentence in the Local Groups section that does slide in that direction), as clearly a lot more information is needed from former or potential-but-non-EAs.
Once again, many thanks for your thoughtful comments and suggestions.
Thanks! Glad you enjoyed the post.
That data certainly does exist as we discuss in a previous post:
The low N in many of the categories of those characteristics makes any inference difficult, though there are no apparent differences in welcomeness along lines of race, education, or religion.
There was no specific question in this year’s survey following up on reasons for welcomeness ratings.
Glad you enjoyed the post; we have a few more supplementary posts coming out.

In the North America-only map there is a large circle over a wide “Bay Area” (approx. Redding at the top and Santa Maria at the bottom of the circle), with a small point on San Francisco itself near the centre. In the World Map there is a small point on San Francisco itself, and medium circles over LA/Southern California, which are all encompassed within a larger “West Coast” circle.
I was wondering how people calculated “expected fun-hours”. I was considering an alternative approach, such as asking how many hours of sleep I would trade for one more hour of this activity (of course there may be some point at which hours of sleep given away impede your ability to do the activity!), or perhaps how many more hours of your least favourite activity you would do to gain one extra hour of this “fun” activity.
And as another example, similar to sharing Steam or Netflix accounts, I think sharing or borrowing physical items from friends/colleagues is also useful. I’ve borrowed a friend’s mountain bike and surfboard and have lent him my road bike. This may mean that one values the activity without the friend more than some other activity with the friend at a time when they are free.