Maybe I didn’t understand it properly, but I guess there’s something wrong when the total welfare score of chimps is 47 while, for humans in low middle-income countries, it’s 32. Depending on your population ethics, one may think “we should improve the prospects in poor countries,” but others could say “we should have more chimps.” Or else this scale has serious problems for comparisons between different species.
Hey Ramiro and Thomas,
Thanks for your engagement with this system. I think in general our system has lots of room for improvement—we are in fact working on refining it right now. However, I am pretty strongly in favor of having evaluation systems even if the numbers are not based on all the data we would like them to be or even if they come to surprising results.
Cross species comparison is of course very complex when it comes to welfare. Some factors are fairly easy to measure across species (such as death rates) while others are much more difficult (diseases rates are a good example of where it’s hard to find good data for wild animals). I can imagine researchers coming to different conclusions given the same initial data.
It’s worth underlining that our system does not aim to evaluate the moral weight of a given species, but merely to assess a plausible state of welfare. (Thomas: this would be one caveat to add when sharing.) In regards to moral weight (e.g. what moral weight we accord a honey bee relative to a chicken, etc.) – that is not really covered by our system. We included the estimates of probability of consciousness per Open Phil’s and Rethink Priorities’ reports on the subject, but the moral weight of conscious human and non-human animals is a heavily debated topic that the system does not go into. Generally I recommend Rethink Priorities’ work on the subject. In regards to welfare, I think it’s conceptually possible that e.g. a well-treated pet dog in a happy family may be happier, and their life more positive, than a prisoner in a North Korean concentration camp. This may seem unintuitive, but I also find the inverse conclusion unintuitive. As mentioned above, that doesn’t mean that we should be prioritizing our efforts on improving the welfare of pet dogs vs. humans in North Korea. Prioritizing between different species is a complex issue, of which welfare comparisons like this index may form one facet without being the only tool we use.
To cover some of the specific claims:
- Generally, I think there is some confusion here between the species having control vs the individual. For example, North Korea as a country has a very high level of control over their environment, and can shape it dramatically more than a tribe of chimps can. However, each individual in North Korea has extremely limited personal control over their life – often having less free time and less scope for action than a wild chimp would practically (due to the constraints of the political regime) if not theoretically (given humanity’s capabilities as a species).
- We are not evaluating hunter gatherers, but people in an average low-income country. Life satisfaction measures show that in some countries, self-evaluated levels of subjective well-being are low. (Some academics even think that this subjective well-being could be lower than those of hunter gatherer societies.)
- Humanity has indeed spent a great deal more on diagnosing humans than chimps. However, there is some data on health that is comparable, particularly when it comes to issues that are clearer to observe such as physical disability.
- There is in fact some research on hunger and malnutrition in wild chimps, so this was not based on intuitions but on best estimates of primatologists. Malnourishment in chimps can be measured in some similar ways to human malnourishment, e.g. stunting of growth. I do think you’re right that concerns with unsafe drinking water could be factored into the disease category instead of the thirst one.
I would be keen for more research to be done on this topic but I would expect it to take a few hours of research into chimp welfare and a decent amount of research into human welfare to get a stronger sense than our reports currently offer. I think these sorts of issues are worth thinking about and we would like to see more research being done using such a system that aims to evaluate and compare the welfare of different species. Thank you again for engaging with the system—we’ll bear your comments in mind as we work on improvements.
Thanks for this clarifying comment. I see your point—and I am particularly in agreement with the need for evaluation systems for cross-species comparison. I just wonder if a scale designed for cross-species comparison might not be very well-suited for interpersonal comparisons, and vice-versa—at least not at the same time. Really, I’m more puzzled than anything else—and also surprised that I haven’t seen more people puzzled about it. If we are actually using this scale to compare societies, I wonder if we shouldn’t change the way welfare economists assess things like quality of life. In the original post, the countries compared were Canada (Pop: 36 mi, HDI: .922, IHDI: .841) and India (Pop: 1.3 bi, HDI: .647, IHDI: .538).
Finally, really, please, don’t take this as a criticism (I’m a major fan of CE), but:
We are not evaluating hunter gatherers, but people in an average low-income country. Life satisfaction measures show that in some countries, self-evaluated levels of subjective well-being are low. (Some academics even think that this subjective well-being could be lower than those of hunter gatherer societies.)
First, I am not sure how people from developing countries (particularly India) would rate the welfare of current humans vis-à-vis chimps, but I wonder if it’d be majorly different from your overall result. Second, I am not sure about the relevance of mentioning hunter-gatherers; I wouldn’t know how to compare the hypothetical welfare of the world’s super-predator before civilization with that of current chimps or current people. Even if I did, I would take life expectancy as an important factor (a general proxy for how much someone is affected by health issues).
Someone I know also noticed this a couple of months ago, so I looked into the methodology and found some possible issues. I emailed Joey Savoie, one of the authors of the report; he hasn’t responded yet. Here’s the email I sent him:
Someone posted an article you co-authored in 2018 in the Stanford Arete Fellowship mentors group, and the conclusion that wild chimps had a higher welfare score than humans in India seemed off to me. My intuition was that chimps can control their environment less well than human hunter-gatherers, have a less egalitarian social structure, and lack the huge amount of infrastructure humans have built around food. This seemed like it could reveal either a surprising truth, or a methodological flaw or bias in the evaluators; I read through the full report and have some thoughts which I hope are constructive.
- The way humans are compared to non-humans seems too superficial. I think giving 6 points to humans in India vs. 9 points to wild chimpanzees based on the high level of diagnosed disability among people in India is misleading, because we’ve spent billions more on diagnosing human diseases than chimp diseases.
- Giving 0 points to humans in India for thirst/hunger/malnutrition, while chimps get 11, seems absurd for similar reasons. If we put as much effort into the diet of chimps as into the diets of wealthy humans to get a true reference point for health, I wouldn’t be surprised if more than 15% of chimps were considered malnourished. Also, the untreated drinking water consumed in India is used to support this rating; but though untreated water causes harm through disease, it shouldn’t be in the “thirst/hunger/malnutrition” category. [name of mentor] from the chat sums this up as there not being a ‘wealthy industrialized chimps’ group to contrast with.
I’m wondering if you see these as important criticisms. Do you still endorse the overall results of the report enough that you think we should share it with mentees, and if so, should we add caveats?
Thanks. I’m glad to see I wasn’t profoundly misunderstanding it. Now, I think this is a very important issue: either there’s something really wrong with Charity Entrepreneurship’s assessment of welfare in different species, or I will really have to rethink my priorities ;)
When you post a chart like this, I recommend linking to the source. Thomas linked to a blog post below, but this was also posted on the Forum. The initial comment touches on your concern, but I don’t think explains CE’s beliefs fully.
True, thanks. I inserted a link to CE’s webpage on the Weighted Factor Model.
Shouldn’t we have more EA editors in Philpapers categories?
Philpapers is this huge index/community of academic philosophers and texts. It’s a good place to start researching a topic. Part of the work is done by voluntary editors and assistants, who assume the responsibility of categorizing and including relevant bibliography; in exchange, they are constantly in touch with the corresponding subject. Some EAs are responsible for their corresponding fields; however, I noticed that some relevant EA-related categories currently have no editor (e.g.: Impact of Artificial Intelligence). I wonder: wouldn’t it be useful if EAs assumed these positions?
I’m not familiar with academic philosophy/how Philpapers is typically used. Can you say more about what you’d expect the positive outcome(s) to be if EAs volunteer to help out? I can imagine that this might improve the quality of papers on EA-adjacent topics, but your mention of volunteers always being up-to-date on the literature makes me wonder if you’re also thinking of beneficial learning for the volunteers themselves.
I’m thinking of both: adequately categorizing papers may have an indirect impact on how other scholars select their bibliographical references; and the volunteer editors themselves may acquire knowledge of their corresponding domains (or anticipate acquiring it—I suppose that, if a paper is really good, you’ll likely end up finding it anyway).
Of course, perhaps the answer is “it’s already hard enough to catch up with the posts on such-and-such subjects in the EA and rationalist community, and read the standard literature, and do original work, etc. - and you still want me to work as a quasi-librarian for free?”
This suggestion is worth posting in other places. You could consider emailing places like Forethought or FHI that have a lot of philosophers, or posting in FB groups like “EA Fundamental Research” or “EA Volunteering”.
Too bad I don’t have a Facebook account anymore… I’d appreciate it if someone else (who found it useful, of course) could raise this subject in those groups.
(man, do I miss the memes!)
Or I could just post it as a Question in this forum, to get more visibility.
Why don’t we have more advice / mentions about donating through a last will—like Effective Legacy? Is it too obvious? Or absurd?
All other cases I’ve seen of someone discussing charity & wills were about the dilemma “give now vs. (invest and) give post mortem.” But we can expect that even GWWC pledgers save something for retirement or emergencies; so why not bequeath a part of it to the most effective charities, too? Besides, this may attract non-pledgers equally: even if you’re not willing to sacrifice a portion of your consumption for the sake of the greater good, why not pledge those retirement savings, in case you die before spending them all?
Of course, I’m not saying this would be super-effective; but it might be a low-hanging fruit. Has anyone explored this “path”?
I agree with you that this is an important area. I wrote a whole essay on the technical aspects of planned giving. https://medium.com/@aaronhamlin/planned-giving-for-everyone-15b9baf88632
I have some more related essays here: https://www.aaronhamlin.com/articles/#philanthropy
Thanks. Your post strengthened my conviction that EAs should think about the subject—of course, the optimal strategy may vary a lot according to one’s age, wealth, country, personal plans, etc.
But I still wonder: a) would similar arguments convince non-EA people? b) why don’t EAs (even pledgers) do something like that (i.e., take their deaths into account)? Or if they do it “discreetly,” why don’t they talk about it? (I know most people don’t think too much about what is gonna happen if they die, but EAs are kinda different.)
(I greatly admire your work, btw)
I’m aware of many people in EA who have done some amount of legacy planning. Ideally, the number would be “100%”, but this sort of thing does take time which might not be worthwhile for many people in the community given their levels of health and wealth.
I used this Charity Science page to put together a will, which I’ve left in the care of my spouse (though my parents are also signatories).
Why don’t we have an “Effective App”?
See, e.g., Ribon—an app that gives you points (“ribons”) for reading positive news (e.g. “handicapped walks again thanks to exoskeleton”) sponsored by corporations; then you choose one of the TLYCS charities, and your points are converted into a donation.
Ribon is a Brazilian for-profit; they claim to donate 70% of what they receive from sponsors, but I haven’t found precise stats. It has skyrocketed this year: from their reported impact, I estimate they have donated about US$ 33k to TLYCS—which is a lot by Brazilian standards. They intend to expand (they gathered more than R$ 1 mi—roughly US$ 250k—from investors this year) and will soon launch an ICO. Perhaps an EA non-profit could do even more good?
I’d never heard of this app before—thanks for bringing it to my attention!
The most prominent “EA donation” app I’m aware of is Momentum, which has multiple full-time employees and seems to be pushing hard to get American users. I don’t know what their user acquisition numbers are like thus far.
I love Momentum—to me, it’s like a kind of cosmic Pigouvian tax (“someone has to pay when Trump tweets, and this time it’s gonna be me”); it still demands some kind of commitment, though. Ribon is completely different: it’s not an app that only altruistic people use. Actually, that’s why I didn’t really like it at first, because it didn’t ask people to give anything or to be effective… but then, perhaps that’s why it scales well—particularly in societies without an altruistic culture. It’s a low-hanging fruit: we already see lots of ads on the internet, for free, and usually read no more than the headlines of news like “Shelly-Ann breaks a new record”… so why not game it all a little bit (you have points, can gain “badges,” compete with your friends...) and make companies pay for your attention (ads) in donations?
The Life You Can Save is working with an app-development company called Meepo (which is doing pro bono work) to build a non-profit donation app, which is currently in beta. You can learn more about this project, and how to download the beta version, here.
Should donations be counter-cyclical? At least as a “matter of when”? (I remember a previous similar conversation on Reddit, but it was mainly about deciding where to donate.) I don’t think patient philanthropists should “give now instead of later” just because of that (we’ll probably have worse crises), but it seems like frequent donors (like GWWC pledgers) should consider bringing their donations forward (particularly if their personal spending has decreased) - and also take into account expectations about future exchange rates. Does it make any sense?
One challenge will be that any attempt to time donations based on economic conditions risks becoming a backdoor attempt to time the market, which is notoriously hard.
I don’t think this is a big concern. When people say “timing the market” they mean acting before the market does. But donating countercyclically means acting after the market does, which is obviously much easier :)
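To make the distinction concrete, here is a minimal sketch (the 10% drawdown threshold, the 1.5x boost, and all numbers are illustrative assumptions, not a recommendation). The point is that a countercyclical rule only reacts to drawdowns that have already happened, so no forecast is needed:

```python
def donation_this_quarter(base_amount, index_peak, index_now,
                          boost=1.5, threshold=0.10):
    """Give `boost` times the usual amount after an observed drawdown
    of at least `threshold`; otherwise give the usual amount."""
    drawdown = 1 - index_now / index_peak
    return base_amount * boost if drawdown >= threshold else base_amount

# The market already fell 20% from its peak, so the rule boosts the gift.
print(donation_this_quarter(1000, index_peak=100, index_now=80))  # 1500.0
# No meaningful drawdown yet, so donate as usual.
print(donation_this_quarter(1000, index_peak=100, index_now=95))  # 1000
```

Nothing in the rule requires predicting where the index goes next, which is what distinguishes it from market timing.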
Can Longtermists “profit” from short-term bias?
We often think of human short-term bias (and the associated hyperbolic discounting) and the uncertainty of the future as (among) long-termism’s main drawbacks; i.e., people won’t think about policies concerning the future because they can’t appreciate or compute their value. However, those features may actually provide some advantages, too – by evoking something analogous to the effect of the veil of ignorance:
1. They allow long-termism to provide some sort of focal point where people with different allegiances may converge; i.e., being left- or right-wing inclined (probably) does not affect the importance someone assigns to existential risk – though it may influence the trade-off with other values (think about how risk mitigation may impact liberty and equality).
2. And (maybe there’s a correlation with the previous point) it may allow for disinterested reasoning – i.e., if someone is hyperbolically less self-interested in what will happen in 50 or 100 years, then they would not strongly oppose policies to be implemented in 50 or 100 years – as long as they don’t bear significant costs today.
I think (1) is quite likely acknowledged among EA thinkers, though I don’t recall it being explicitly stated; some may even reply “isn’t it obvious?”, but I don’t believe outsiders would immediately recognize it.
On the other hand, I’m confident (2) is either completely wrong or not recognized by most people. If it’s true, we could use it to extract from people, in the present, conditional commitments to be enforced in the (relatively) long-term future; e.g., if present investors discount future returns hyperbolically, they wouldn’t oppose something like a Windfall Clause. Maybe Roy’s nuke insurance could benefit from this bias, too.
I wonder if this could be used for institutional design; for instance, creating or reforming organizations is often burdensome, because different interest groups compete to keep or expand their present influence and privileges – e.g., legislators will favor electoral reforms allowing them to be re-elected. Thus, if we could design arrangements to be enforced decades (how long?) after their adoption, without interfering with the current status quo, we would eliminate a good deal of the opposition; the problem then reduces to deciding what kind of arrangements would be useful to design this way, taking into account uncertainty, cluelessness, value shift…
Are there any examples of existing or proposed institutions that try to profit from this short-term vs. long-term bias in a similar way? Is there any research in this line I’m failing to follow? Is it worth a longer post?
(One possibility is that we can’t really do that—this bias is something to be fought, not something we can collectively profit from; so, assuming the hinge of history hypothesis is false, the best we can do is to “transfer resources” from the present to the future, as sovereign funds and patient philanthropy advocates already do)
Philosophers and economists seem to disagree about the marginalist/arbitrage argument that a social discount rate should equal (or at least be majorly influenced by) the marginal social opportunity cost of capital. I wonder if there’s any discussion of this topic in the context of negative interest rates. For example, would defenders of that argument accept that, as those opportunity costs decline, so should the SDR?
Yes, governments lower the SDR as the interest rate changes. See for example the US Council of Economic Advisers’s recommendation on this three years ago: https://obamawhitehouse.archives.gov/sites/default/files/page/files/201701_cea_discounting_issue_brief.pdf
While the “risk-free” interest rate is roughly zero these days, the interest rate to use when discounting payoffs from a public project is the rate of return on investments whose risk profile is similar to that of the public project in question. This is still positive for basically any normal public project.
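As a rough illustration of how much the choice of discount rate matters (the payoff, horizon, and rates below are assumed for the example, not taken from any source):

```python
def present_value(payoff, rate, years):
    """Discount a single future payoff at a constant annual rate."""
    return payoff / (1 + rate) ** years

# A payoff of 1,000,000 arriving in 30 years, discounted at a
# near-zero "risk-free" rate versus a 5% rate matching the
# project's risk profile.
print(present_value(1_000_000, 0.00, 30))  # 1000000.0
print(present_value(1_000_000, 0.05, 30))  # roughly 231,377
```

A four-fold difference in present value over 30 years, which is why the risk-profile question dominates the near-zero-rates question for long-horizon projects.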
Assessing the impact of Brazilian donors and EA community
We’re thinking about testing whether our actions promoting EA this year (translations, meetings, networking...) have led to an observable increase in donations from Brazil—particularly outside the group of more “engaged” members. Even if we haven’t observed an increase in high-quality engagement (such as GWWC pledges), we do see an increase in some “cheaper signals”, such as the number of Facebook group members and the amount of donations to AMF (which, curiously, are concentrated in basically two metropolitan areas—Sao Paulo and Porto Alegre; I know there are some EAs living in Minas and in the North, but currently I’m not aware of any donation coming from Rio and Brasilia, despite them being high-income metropolitan areas). We’d like to test whether that’s a coincidence.
I would appreciate any suggestions/help on that. I think it would demand more than EA survey data. First, we thought about requesting from EA charities data on the amount of donations:
1.1 from Brazil between Oct 23rd, 2018 and Oct 23rd, 2019 (controlling for month), compared with the amount of donations from the previous year;
1.2 from similar countries (I’m not sure which countries we should pick: Argentina, Chile, Mexico, S. Africa, Portugal?...China?), in the same periods – to check if any of them presented a similar increase/decrease.
Second, I wonder if we could get in touch with at least some identified donors and ask them how they came to the decision of donating. Possibly, tracking people using the names they provided to those websites might be considered too invasive, but I wonder if the organization itself could send an e-mail inviting them to get in touch with us.
I think that Point 1 will be difficult to test in this way. What you want to do sounds a bit like a regression discontinuity analysis, but (as I understand it) there isn’t really a sharp time point for when you started promoting EA more; the translations/meetings etc. increased steadily since Oct 2018, right? I think this will make it harder to see the effect during the first year that you are scaling up outreach (particularly if compared by month, as there is probably seasonal variation in both donation and outreach).

Brazil has also had a fairly distinct set of newsworthy events (i.e. election and major political change, arrest of two former presidents during ongoing corruption scandals, Amazon fires, etc.) over the same time period you increased outreach. If these events influence donation behaviour, then comparisons to other countries might not be particularly relevant (and it further complicates your monthly comparison).

I think a better way to try and observe a quantitative effect would be to compare the total donations for three years: pre-Oct 2018, Oct 2018–Oct 2019, post-Oct 2019 (provided you keep your level of outreach similar for the next year, and are patient). Aggregating by year will remove the seasonal effect of donations and some of the effect of current events, and if this shows an increase for 2019–2020, then you could (cautiously) look at comparing the monthly donation behaviour (three years of data will be better to compensate for monthly variation).
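The aggregation step itself is simple: bucket each month into an “outreach year” running October to September. A minimal sketch, assuming you have monthly totals keyed by (year, month) (the function name and data here are hypothetical):

```python
from collections import defaultdict

def outreach_year_totals(monthly_donations):
    """Sum (year, month) -> amount data into years running Oct-Sep,
    smoothing out seasonal variation in giving."""
    totals = defaultdict(float)
    for (year, month), amount in monthly_donations.items():
        # October opens a new outreach year; Jan-Sep belong to the
        # year that started the previous October.
        start = year if month >= 10 else year - 1
        totals[f"Oct {start} - Sep {start + 1}"] += amount
    return dict(totals)

# Hypothetical data: donations received in Nov 2018, Feb 2019, Oct 2019.
data = {(2018, 11): 100.0, (2019, 2): 50.0, (2019, 10): 200.0}
print(outreach_year_totals(data))
# {'Oct 2018 - Sep 2019': 150.0, 'Oct 2019 - Sep 2020': 200.0}
```

With three such yearly totals (pre-2018, 2018–19, 2019–20), the before/after comparison becomes straightforward, though still only suggestive given the confounders above.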
At this point, I think tracking your impact more subjectively by using questionnaires and interviews would produce more useful information. Not sure if charities would link their donors to you (maybe getting the contact of Brazilians who report donating in the EA survey would be more likely), but you could also try adding an annual questionnaire link to your newsletter/Facebook/site like 80,000 Hours does. I’d specifically try to ask people who made their first donations, or who increased their donations, this year what motivated them to do so.
I just responded to UNESCO’s Public Online Consultation on the draft of a Recommendation on AI Ethics—it was longer and more complex than I expected.
I’d really love to know what other EAs think of it. I’m very unsure about how useful it is going to be, particularly since the US left the organization in 2018. But it’s the first Recommendation of a UN agency on this, the text addresses many interesting points (despite greatly emphasizing short-term issues, it does address “long-term catastrophic harms”), I haven’t seen many discussions of it (except for the Montreal AI Ethics Institute’s), and the deadline is July 31.
Is ‘donations as gifts’ neglected?
I enjoy sending ‘donations as gifts’ - i.e., donating to GD, GW or AMF in honor of someone else (e.g., as a birthday gift). It doesn’t actually affect my overall budget for donations; but this way, I try to subtly nudge this person to consider doing the same with their friends, or maybe even becoming a regular donor.
I wonder if other EAs do that. Perhaps it seems very obvious (for some cultures where donations are common), but I haven’t seen any remark or analysis about it (well, maybe I’m just wasting my time: only one friend of mine stated he enjoyed his gift, but I don’t think he has ever done it himself), and many organizations don’t provide an accessible tool to do this.
P.S.: BTW, my birthday is on May 14th, so if anyone wants to send me one of these “gifts”, I’d rather have you donating to GCRI.
I don’t know what you mean by ‘neglected’. I know a lot of people who say they want this and a similar number who are deeply offended by the concept. (Personally, I’m against the idea of giving charitable donations to my favourite charity as a gift, although I’d consider a donation to the recipient’s favourite charity.)
Thanks. Maybe it’s just my blind spot. I couldn’t find anyone discussing this for more than 5 min, except for this one. I googled it and found some blogs that are not about what I have in mind.
I agree that donating to my favourite charity instead of my friend’s favourite one would be impolite, at least; however, I was thinking about friends who are not EAs, or who don’t usually donate at all. It might be a better gift than a card or a lame souvenir, and perhaps interest this friend in EA charities (I try to think about which charity would interest this person most). Is there any reason against it?
If your friend doesn’t donate normally, then probably their preferred person to spend money on is themself. It still seems rude to me to say you’re giving them a gift, which should be something they want, and instead give them something they don’t want.
For example, my mother likes flowers. I normally get her flowers for mother’s day. If I switch to giving her a donation to AMF instead of buying her flowers, she will be counterfactually worse off—she is no longer getting the flowers she enjoys. I don’t think that kind of experience would make her more likely to start donating, either.
Did UNESCO draft recommendation on AI principles involve anyone concerned with AI safety? The draft hasn’t been leaked yet, and I didn’t see anything in EA community—maybe my bubble is too small.
Does anyone have any idea / info on what proportion of the infected cases are getting Covid19 inside hospitals?
(Epistemic status: low confidence, but I didn’t find any research on this, so the hypothesis deserves a bit more attention.)
1. Nosocomial infections are serious business. Hospitals are basically big buildings full of dying people and the stressed personnel who go from one bed to another trying to avoid it. Throw a deadly and very contagious virus into the mix, and it becomes a slaughterhouse.
2. Previous coronaviruses spread rapidly in hospitals and other care units. That made Southeast Asia kinda prepared for possibly similar epidemics (maybe I’m wrong, but in the news their medical staff is always in hazmat suits, unlike most health workers in the West). Maybe this is a neglected point in Southeast Asia’s successful approach?
3. I know hospitals have serious protocols to avoid it… but it takes only a few careless cleaning staff, or a patient’s relatives going to the cafeteria, or a badly designed airflow, to ruin everything. Just one hospital chain in Brazil accounts for most of the deaths in Sao Paulo, and 40% of the national total.
Did anyone see the spread of Covid through nursing homes coming beforehand? It seems quite obvious in hindsight—yet I didn’t even mention it above. Some countries report almost half of their deaths coming from those environments.
(Would it have made any difference? I mean, would people have emphasized patient safety, etc.? I find it implausible, but has anyone tested whether this isn’t just a statistical effect, due to the concentration of old people with chronic diseases?)
So, I saw Vox’s article on how air filters create huge educational gains; I’m particularly surprised that indoor air quality (actually, indoor environmental conditions in general) is kinda neglected everywhere (except, maybe, in dangerous jobs). But then I saw this (convincing) critique of the underlying paper.
It seems to me that this is a suitable case for a blind RCT: you could install fake air filters in order to control for placebo effects, etc. But then I googled a little bit… and I haven’t found significant studies using blind RCTs in social sciences and similar cases. I wonder why; at least for these cases, it doesn’t seem more unethical or harder to do it than in medical trials.
‘Good’ news: as expected, as real interest rates fall, so does the SDR, increasing the social cost of carbon. (Not news, OK, but monetary policy-makers explicitly acknowledging it seems good.) Bad news: of course, it still seems to be higher than a normative SDR based on time-neutrality.
Legal personality & AI systems
From the first draft of the UNESCO Recommendation on AI Ethics:
Policy Action 11: Ensuring Responsibility, Accountability and Privacy 94. Member States should review and adapt, as appropriate, regulatory and legal frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their lifecycle. Governments should introduce liability frameworks or clarify the interpretation of existing frameworks to make it possible to attribute accountability for the decisions and behaviour of AI systems. When developing regulatory frameworks governments should, in particular, take into account that responsibility and accountability must always lie with a natural or legal person; responsibility should not be delegated to an AI system, nor should a legal personality be given to an AI system.
I see the point of the last sentence is to prevent individuals and companies from escaping liability for AI failures. However, the last bit also seems to prevent us from creating some sort of “AI DAO”—i.e., from creating a legal entity totally implemented by an autonomous system. This doesn’t seem reasonable; after all, what is a company if not some sort of artificial agent?
Why didn’t we have more previous alarm concerning the spread of Covid through care and nursing homes? Would it have made any difference?
Does anyone know of, or have a serious opinion/analysis on, the European campaign to tax meat? I read some news in Le Monde, but nothing of EA-level seriousness. I mean, it seems a pretty good idea, but I saw no data on possible impact, probability of adoption, possible ways to contribute, or even possible side-effects.
(Not the best comparison, but worth noting: in Brazil a surge in meat prices caused an inflation peak in December and corroded the government’s support—yeah, people can tolerate politicians meddling with criminals and fascism, as long as they can have barbecue.)
I was reading this recommended book and wondering how much of the recent change in our world is due to demographic transitions—i.e., boomers. We know the shape of the population pyramid affects unemployment rates, wealth concentration (moreover, think about how income predicts life expectancy, at least in very unequal countries—so one can expect a higher proportion of wealthier individuals in old age), and maybe even rising health costs and voting patterns—e.g., I just confirmed that, in Brazil, opinions about the government among young and old people are symmetrically opposite.
Idk what to infer from here. It seems to me there’s an elephant in the room: I read a lot about economics, philosophy and politics, and I’ve seen almost no mention of it except in discussions of one of those topics alone—never something concerning all of them. But I do think this should interest EAs, because much of our economic and political theory fails to account for an aging population—something quite remarkable in human history. So, I’d appreciate any recommendation for something that takes demography seriously (besides Peter Turchin, whom I already follow).