Maybe I didn’t understand it properly, but I guess there’s something wrong when the total welfare score of chimps is 47 and, for humans in lower-middle-income countries, it’s 32. Depending on your population ethics, one may think “we should improve the prospects in poor countries,” but others could say “we should have more chimps.” Or this scale has serious problems for comparisons between different species.
Thanks for your engagement with this system. I think in general our system has lots of room for improvement—we are in fact working on refining it right now. However, I am pretty strongly in favor of having evaluation systems even if the numbers are not based on all the data we would like, or even if they come to surprising results.
Cross-species comparison is of course very complex when it comes to welfare. Some factors are fairly easy to measure across species (such as death rates) while others are much more difficult (disease rates are a good example of where it’s hard to find good data for wild animals). I can imagine researchers coming to different conclusions given the same initial data.
It’s worth underlining that our system does not aim to evaluate the moral weight of a given species, but merely to assess a plausible state of welfare. (Thomas: this would be one caveat to add when sharing.) In regards to moral weight (e.g. what moral weight do we accord a honey bee relative to a chicken etc.) – that is not really covered by our system. We included the estimates of probability of consciousness per Open Phil’s and Rethink Priorities’ reports on the subject, but the moral weight of conscious human and non-human animals is a heavily debated topic that the system does not go into. Generally I recommend Rethink Priorities’ work on the subject.
In regards to welfare, I think it’s conceptually possible that e.g. a well treated pet dog in a happy family may be happier and their life more positive than a prisoner in a North Korean concentration camp. This may seem unintuitive, but I also find the inverse conclusion unintuitive. As mentioned above, that doesn’t mean that we should be prioritizing our efforts on improving the welfare of pet dogs vs. humans in North Korea. Prioritizing between different species is a complex issue, of which welfare comparisons like this index may form one facet without being the only tool we use.
To cover some of the specific claims:
- Generally, I think there is some confusion here between the species having control vs the individual. For example, North Korea as a country has a very high level of control over their environment, and can shape it dramatically more than a tribe of chimps can. However, each individual in North Korea has extremely limited personal control over their life – often having less free time and less scope for action than a wild chimp would practically (due to the constraints of the political regime) if not theoretically (given humanity’s capabilities as a species).
- We are not evaluating hunter gatherers, but people in an average low-income country. Life satisfaction measures show that in some countries, self-evaluated levels of subjective well-being are low. (Some academics even think that this subjective well-being could be lower than those of hunter gatherer societies.)
- Humanity has indeed spent a great deal more on diagnosing humans than chimps. However, there is some data on health that is comparable, particularly when it comes to issues that are clearer to observe such as physical disability.
- There is in fact some research on hunger and malnutrition in wild chimps, so this was not based on intuitions but on best estimates of primatologists. Malnourishment in chimps can be measured in some similar ways to human malnourishment, e.g. stunting of growth. I do think you’re right that concerns with unsafe drinking water could be factored into the disease category instead of the thirst one.
I would be keen for more research to be done on this topic but I would expect it to take a few hours of research into chimp welfare and a decent amount of research into human welfare to get a stronger sense than our reports currently offer. I think these sorts of issues are worth thinking about and we would like to see more research being done using such a system that aims to evaluate and compare the welfare of different species. Thank you again for engaging with the system—we’ll bear your comments in mind as we work on improvements.
Thanks for this clarifying comment. I see your point—and I am particularly in agreement with the need for evaluation systems for cross-species comparison. I just wonder if a scale designed for cross-species comparison might not be very well suited for interpersonal comparisons, and vice versa—at least not at the same time. Really, I’m more puzzled than anything else—and also surprised that I haven’t seen more people puzzled about it. If we are actually using this scale to compare societies, I wonder if we shouldn’t change the way welfare economists assess things like quality of life. In the original post, the countries compared were Canada (Pop: 36 million, HDI: .922, IHDI: .841) and India (Pop: 1.3 billion, HDI: .647, IHDI: .538).
Finally, really, please, don’t take this as a criticism (I’m a major fan of CE), but:
We are not evaluating hunter gatherers, but people in an average low-income country. Life satisfaction measures show that in some countries, self-evaluated levels of subjective well-being are low. (Some academics even think that this subjective well-being could be lower than those of hunter gatherer societies.)
First, I am not sure how people from developing countries (particularly India) would rate the welfare of current humans vis-à-vis chimps, but I wonder if it’d be majorly different from your overall result. Second, I am not sure about the relevance of mentioning hunter-gatherers; I wouldn’t know how to compare the hypothetical welfare of the world’s super-predator before civilization with that of current chimps and current people. Even if I knew, I would take life expectancy as an important factor (a general proxy for how someone is affected by health issues).
Someone I know also noticed this a couple of months ago, so I looked into the methodology and found some possible issues. I emailed Joey Savoie, one of the authors of the report; he hasn’t responded yet. Here’s the email I sent him:
Someone posted an article you co-authored in 2018 in the Stanford Arete Fellowship mentors group, and the conclusion that wild chimps had a higher welfare score than humans in India seemed off to me. I had the intuition that chimps can control their environment less well than human hunter-gatherers, have a less egalitarian social structure, and lack the huge amount of infrastructure humans have built around food. This seemed like it could reveal either a surprising truth, or a methodological flaw or biases in the evaluators; I read through the full report and have some thoughts which I hope are constructive.
- The way humans are compared to non-humans seems too superficial. I think 6 points to humans in India vs. 9 points to wild chimpanzees based on the high level of diagnosed disability among people in India is misleading, because we’ve spent billions more on diagnosing human diseases than chimp diseases.
- Giving 0 points to humans in India for thirst/hunger/malnutrition, while chimps get 11, seems absurd for similar reasons. If we put as much effort into the diet of chimps as into the diets of wealthy humans to get a true reference point for health, I wouldn’t be surprised if more than 15% of chimps were considered malnourished. Also, the untreated drinking water consumed in India is used to support this rating, but though untreated water causes harm through disease, it shouldn’t be in the “thirst/hunger/malnutrition” category. [name of mentor] from the chat sums this up as there not being a ‘wealthy industrialized chimps’ group to contrast with.
I’m wondering if you see these as important criticisms. Do you still endorse the overall results of the report enough that you think we should share it with mentees, and if so, should we add caveats?
Thanks. I’m glad to see I wasn’t profoundly misunderstanding it. Now, I think this is a very important issue: either there’s something really wrong with Charity Entrepreneurship’s assessment of welfare in different species, or I will really have to rethink my priorities ;)
When you post a chart like this, I recommend linking to the source. Thomas linked to a blog post below, but this was also posted on the Forum. The initial comment touches on your concern, but I don’t think explains CE’s beliefs fully.
Just sharing some concerns about live exports (yeah, the transportation of live animals by ship)
I wonder if we could do more about live exports. I would like to know if it’s worse than some other practices in factory farming that we often highlight (like caging hens), but it seems more likely to attract support from meat-eaters who consider it cruel and unnecessary. I know the subject has been mentioned en passant in some Forum posts and it’s a subject that may figure in European reforms…
I’m particularly concerned with Brazil, since it’s such a large exporter—but the same applies to Australia, too. At least two organizations (Fórum Nacional de Proteção e Defesa Animal—FNDPA, and Mercy for Animals) working with legal measures to ban the practice in Brazil have received support from EA—btw, one can sign a petition on MfA’s website. But my (perfunctory) knowledge of Brazilian politics and law makes me skeptical that this could work without external pressure.
The Effective Thesis Exceptional Research Award (that’s what the website calls it), or High-Potential Award (that’s how it shows up on Google), or maybe just Award (what apparently everyone calls it) is open to submissions until Sep 2022. (I’m pretty sure there’s a top-level post coming, but I thought it’d be cool to mention it in shortform right away. Feels like a scoop.)
This award has been established to encourage and recognize promising research by students that has the potential to significantly improve the world. [...]
Submissions can consist of theses, dissertations, or capstone papers at the undergraduate or graduate level. Other substantive work forming part of a graduation semester may also be considered. To be eligible, submissions must have been produced in the academic year 2021–2022 and relate to one or multiple research directions prioritised by Effective Thesis. See the list of research directions below or see here for more information.
Shouldn’t we have more EA editors in PhilPapers categories?
PhilPapers is this huge index/community of academic philosophers and texts. It’s a good place to start researching a topic. Part of the work is done by volunteer editors and assistants, who assume the responsibility of categorizing and including relevant bibliography; in exchange, they are constantly in touch with the corresponding subject. Some EAs are responsible for their corresponding fields; however, I noticed that some relevant EA-related categories currently have no editor (e.g., Impact of Artificial Intelligence). I wonder: wouldn’t it be useful if EAs assumed these positions?
I’m not familiar with academic philosophy/how Philpapers is typically used. Can you say more about what you’d expect the positive outcome(s) to be if EAs volunteer to help out? I can imagine that this might improve the quality of papers on EA-adjacent topics, but your mention of volunteers always being up-to-date on the literature makes me wonder if you’re also thinking of beneficial learning for the volunteers themselves.
I’m thinking of both: adequately categorizing papers may have an indirect impact on how other scholars select their bibliographical references; and the volunteer editors themselves may acquire knowledge of their corresponding domains—or anticipate acquiring it (I suppose that, if a paper is really good, you’ll likely end up finding it anyway).
Of course, perhaps the answer is “it’s already hard enough to catch up with the posts on such-and-such subjects in the EA and rationalist community, and read the standard literature, and do original work, etc. - and you still want me to work as a quasi-librarian for free?”
This suggestion is worth posting in other places. You could consider emailing places like Forethought or FHI that have a lot of philosophers, or posting in FB groups like “EA Fundamental Research” or “EA Volunteering”.
Too bad I don’t have a Facebook account anymore… I’d appreciate it if someone else (who found it useful, of course) could raise this subject in those groups.
(man, do I miss the memes!)
Or I could just post it as a Question in this forum, to get more visibility.
Why don’t we have more advice / mentions about donating through a last will—like Effective Legacy? Is it too obvious? Or absurd?
All other cases of someone discussing charity & wills were about the dilemma “give now vs. (invest and) give post mortem.” But we can expect that even GWWC pledgers save something for retirement or emergencies; so why not leave a part of it to the most effective charities, too? Besides, this may attract non-pledgers equally: even if you’re not willing to sacrifice a portion of your consumption for the sake of the greater good, why not give away those retirement savings, in case you die before spending it all?
Of course, I’m not saying this would be super-effective; but it might be a low-hanging fruit. Has anyone explored this “path”?
Thanks. Your post strengthened my conviction that EAs should think about the subject—of course, the optimal strategy may vary a lot according to one’s age, wealth, country, personal plans, etc.
But I still wonder: a) would similar arguments convince non-EA people? b) why don’t EAs (even pledgers) do something like that (i.e., take their deaths into account)? Or if they do it “discreetly,” why don’t they talk about it? (I know most people don’t think too much about what is gonna happen if they die, but EAs are kinda different)
I’m aware of many people in EA who have done some amount of legacy planning. Ideally, the number would be “100%”, but this sort of thing does take time which might not be worthwhile for many people in the community given their levels of health and wealth.
I used this Charity Science page to put together a will, which I’ve left in the care of my spouse (though my parents are also signatories).
See, e.g., Ribon—an app that gives you points (“ribons”) for reading positive news (e.g. “handicapped walks again thanks to exoskeleton”) sponsored by corporations; then you choose one of the TLYCS charities, and your points are converted into a donation.
Ribon is a Brazilian for-profit; they claim to donate 70% of what they receive from sponsors, but I haven’t found precise stats. It has skyrocketed this year: from their reported impact, I estimate they have donated about US$33k to TLYCS – which is a lot by Brazilian standards. They intend to expand (they gathered more than R$1 million – roughly US$250k – from investors this year) and will soon launch an ICO. Perhaps an EA non-profit could do even more good?
I’d never heard of this app before—thanks for bringing it to my attention!
The most prominent “EA donation” app I’m aware of is Momentum, which has multiple full-time employees and seems to be pushing hard to get American users. I don’t know what their user acquisition numbers are like thus far.
I love Momentum—to me, it’s like a kind of cosmic Pigouvian tax (“someone has to pay when Trump tweets, and this time it’s gonna be me”); it still demands some kind of commitment, though. Ribon is completely different: it’s not an app that only altruistic people use; actually, that’s why I didn’t really like it at first, because it didn’t ask people to give anything or to be effective… but then, perhaps that’s why it scales well—particularly in societies without an altruistic culture. It’s a low-hanging fruit: we already see lots of ads on the internet, for free, and usually (most of us) don’t read more than the headlines of news like “Shelly-Ann breaks a new record”… so why not game it all a little bit (you have points, can gain “badges”, compete with your friends...) and make companies pay for your attention (ads) in donations?
The Life You Can Save is working with an app-development company called Meepo (which is doing pro bono work) to build a non-profit donation app, which is currently in beta. You can learn more about this project, and how to download the beta version, here.
(d) some EAs working in consulting firms (EACN) - which, among other things, aim to nudge corporations and co-workers into more effective behavior. But I didn’t find any org providing consulting services to non-EA charities aiming to make them more effective. Would it be low-impact? Or is it a low-hanging fruit?
One might think that this is basically the same job GW already does… Well, yeah, I suppose you would actually use a similar approach to evaluate impact, but it’s very different to provide a charity with recommendations that aim to help it achieve its own goals. This would be framed as assistance, not as some sort of examination; while GW’s stakeholders are donors, this “consulting charity” would work for the charities themselves. Besides, in order to prevent conflicts of interest, corporations often use different firms to provide auditing (which would be akin to charity evaluation—i.e., a service that ultimately is concerned with investors) and consulting services (which are provided to the corporation and its managers). This could be particularly useful for charities in regions that lack an (effective) charity culture.
Update: an example of this idea is the Philanthropy Advisory Fellowship sponsored by EA Harvard—which has, e.g., made recommendations to Arymax Foundation on the best cause areas to invest in Brazil. But I believe an “EA Consulting” org would provide other services, and not only to funders.
I mean, it’s pretty relevant for peace (I guess most wars result from conflicts between factions or succession crises) and for a well-functioning government. People talk about the dangers of polarization, about why nations fail, or authoritarianism, or IIDM… It’s not neglected per se (it’s been the focus of some classical works in political phil & sci), but I’m not sure all the low-hanging fruit has been picked; plus, thinking about interventions as increasing / decreasing political stability might help in assessing other areas (like IIDM).
I was thinking about Urukagina, the first monarch ever mentioned for his benevolence instead of military prowess. Are there any common traits among them? Should we write something like that Forum post on dark-trait rulers—but with opposite sign?
I googled a bit about benevolent kings (I thought it’d provide more insight than looking at 20th-century biographies), but, except maybe for enlightened despots, most of the guys (like Suleiman the Magnificent) in these lists are conquerors who just weren’t brutal and were kind law-givers to their people—which you could also say about Napoleon. I was thinking more about guys like Ashoka and Marcus Aurelius, who seem to have despised the hunger for conquest in other people and were actually willing to improve human welfare for moral reasons.
An objection to the non-identity problem: shouldn’t disregarding the welfare of non-existent people preclude most interventions on child mortality and education?
One objection against favoring the long-term future is that we don’t have duties towards people who still don’t exist. However, I believe that, when someone presents a claim like that, probably what they want to state is that we should discount future benefits (for some reason), or that we don’t have a duty towards people who will only exist in the far future. But it turns out that such a claim apparently proves too much; it proves that, for instance, we have no obligation to invest in reducing the mortality of infants less than one year old over the next two years.
The most effective interventions in saving lives often do so by saving young children. Now, imagine you deploy an intervention similar to those of Against Malaria Foundation—i.e., distributing bednets to reduce contagion. At the beginning, you spend months studying, then preparing, then you go to the field and distribute bednets, and then one or two years later you evaluate how many malaria cases were prevented in comparison to a baseline. It turns out that most cases of averted deaths (and disabilities and years of life gained) correspond to kids who had not yet been conceived when you started studying.
Similarly, if someone starts advocating an effective basic education reform today, they will only succeed in enacting it in some years—thus we can expect that most of the positive effects will happen many years later.
(Actually, for anyone born in the last few years, we can expect that most of their positive impact will affect people who are not born yet. If there’s any value in positively influencing these children, most of it will happen to people who are not yet born.)
This means that, at the beginning of this project, most of the impact corresponded to people who didn’t exist yet—so you were under no moral obligation to help them.
It’s also a significant problem for near-term animal welfare work: since the lifespan of broiler chickens is so short, almost any possible current action will only benefit future chickens.
Should donations be counter-cyclical? At least as a “matter of when” (I remember a previous similar conversation on Reddit, but it was mainly about deciding where to donate to). I don’t think patient philanthropists should “give now instead of later” just because of that (we’ll probably have worse crises), but it seems like frequent donors (like GWWC pledgers) should consider anticipating their donations (particularly if their personal spending has decreased) - and also take into account expectations about future exchange rates. Does it make any sense?
One challenge will be that any attempt to time donations based on economic conditions risks becoming a backdoor attempt to time the market, which is notoriously hard.
I don’t think this is a big concern. When people say “timing the market” they mean acting before the market does. But donating countercyclically means acting after the market does, which is obviously much easier :)
We often think about human short-term bias (and the associated hyperbolic discount) and the uncertainty of the future as (some of) long-termism’s main drawbacks; i.e., people won’t think about policies concerning the future because they can’t appreciate or compute their value. However, those features may actually provide some advantages, too – by evoking something analogous to the effect of the veil of ignorance:
1. They allow long-termism to provide some sort of focal point where people with different allegiances may converge; i.e., being left- or right-wing inclined (probably) does not affect the importance someone assigns to existential risk – though it may influence the trade-off with other values (think about how risk mitigation may impact liberty and equality).
2. And (maybe there’s a correlation with the previous point) it may allow for disinterested reasoning – i.e., if someone is hyperbolically less self-interested in what will happen in 50 or 100 years, then they would not strongly oppose policies to be implemented in 50 or 100 years – as long as they don’t bear significant costs today.
I think (1) is quite likely acknowledged among EA thinkers, though I don’t recall it being explicitly stated; some may even reply “isn’t it obvious?”, but I don’t believe outsiders would immediately recognize it.
On the other hand, I’m confident (2) is either completely wrong, or not recognized by most people. If it’s true, we could use it to extract from people, in the present, conditional commitments to be enforced in the (relatively) long-term future; e.g., if present investors discount future returns hyperbolically, they wouldn’t oppose something like a Windfall Clause. Maybe Roy’s nuke insurance could benefit from this bias, too.
I wonder if this could be used for institutional design; for instance, creating or reforming organizations is often burdensome, because different interest groups compete to keep or expand their present influence and privileges – e.g., legislators will favor electoral reforms allowing them to be re-elected. Thus, if we could design arrangements to be enforced decades (how long?) after their adoption, without interfering with the current status quo, we would eliminate a good deal of their opposition; the problem then reduces to deciding what kind of arrangements would be useful to design this way, taking into account uncertainty, cluelessness, value shift…
Are there any examples of existing or proposed institutions that try to profit from this short-term vs. long-term bias in a similar way? Is there any research in this line I’m failing to follow? Is it worth a longer post?
(One possibility is that we can’t really do that—this bias is something to be fought, not something we can collectively profit from; so, assuming the hinge of history hypothesis is false, the best we can do is to “transfer resources” from the present to the future, as sovereign funds and patient philanthropy advocates already do)
Philosophers and economists seem to disagree about the marginalist/arbitrage argument that a social discount rate should equal (or at least be majorly influenced by) the marginal social opportunity cost of capital. I wonder if there’s any discussion of this topic in the context of negative interest rates. For example, would defenders of that argument accept that, as those opportunity costs decline, so should the SDR?
While the “risk-free” interest rate is roughly zero these days, the interest rate to use when discounting payoffs from a public project is the rate of return on investments whose risk profile is similar to that of the public project in question. This is still positive for basically any normal public project.
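To see why the choice of rate matters so much in this debate, here is a minimal sketch (with hypothetical numbers) of standard exponential discounting: as the assumed opportunity cost of capital falls toward zero, the present value of a distant payoff rises toward its face value.

```python
def present_value(payoff: float, years: int, rate: float) -> float:
    """Discount a payoff received `years` from now at a constant annual rate."""
    return payoff / (1 + rate) ** years

# Hypothetical numbers: a payoff of 100 arriving in 50 years
pv_at_5pct = present_value(100, 50, 0.05)  # heavily discounted (~8.7)
pv_at_0pct = present_value(100, 50, 0.0)   # a zero rate leaves it undiscounted (100.0)
```

Nothing here settles the philosophers-vs-economists disagreement, of course; it just makes concrete how sensitive project valuations are to whichever rate one defends.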
Assessing the impact of Brazilian donors and the EA community
We’re thinking about testing if our actions for promoting EA this year (translations, meetings, networking...) have led to an observable increase in donations from Brazil—particularly outside the group of more “engaged” members. Even if we haven’t observed an increase in high-quality engagement (such as GWWC pledges), we do see an increase in some “cheaper signals”, such as the number of Facebook group members and the amount of donations to AMF (which, curiously, are concentrated in basically two metropolitan areas—Sao Paulo and Porto Alegre; I know there are some EAs living in Minas and in the North, but currently I’m not aware of any donation coming from Rio or Brasilia, despite them being high-income metropolitan areas). We’d like to test if that’s a coincidence.
I would appreciate any suggestion/help on that. I think it would demand more than EA survey data. First, we thought about requesting data from EA charities about the amount of donations:
1.1 from Brazil between Oct 23rd, 2018 and Oct 23rd, 2019 (controlling for month), compared with the amount of donations from the previous year;
1.2 from similar countries (I’m not sure which countries we should pick: Argentina, Chile, Mexico, S. Africa, Portugal?...China?), in the same periods – to check if any of them presented a similar increase/decrease.
Second, I wonder if we could get in touch with at least some identified donors and ask them how they came to the decision of donating. Possibly, tracking people using the names they provided to those websites might be considered too invasive, but I wonder if the organization itself could send an e-mail inviting them to get in touch with us.
I think that Point 1 will be difficult to test in this way. What you want to do sounds a bit like a regression discontinuity analysis, but (as I understand it) there isn’t really a sharp time point for when you started promoting EA more; the translations/meetings etc. increased steadily since Oct 2018, right? I think this will make it harder to see the effect during the first year that you are scaling up outreach (particularly if compared by month, as there is probably seasonal variation in both donation and outreach).

Brazil has also had a fairly distinct set of newsworthy events (i.e. election and major political change, arrest of two former presidents during ongoing corruption scandals, Amazon fires, etc.) over the same time period you increased outreach. If these events influence donation behaviour, then comparisons to other countries might not be particularly relevant (and it further complicates your monthly comparison).

I think a better way to try and observe a quantitative effect would be to compare the total donations for three years: pre-Oct 2018, Oct 2018-Oct 2019, post-Oct 2019 (provided you keep your level of outreach similar for the next year, and are patient). Aggregating by year will remove the seasonal effect of donations and some of the effect of current events, and if this shows an increase for 2019-2020, then you could (cautiously) look at comparing the monthly donation behaviour (three years of data will be better to compensate for monthly variation).
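The yearly aggregation idea above can be sketched in a few lines of Python (all figures are made up for illustration): bucket each month into the Oct–Sep “outreach year” it falls in and compare totals across years, which washes out the seasonal pattern in monthly donations.

```python
from collections import defaultdict

# Hypothetical monthly donation totals, keyed by (year, month) -- all figures invented
monthly = {
    (2017, 11): 1200, (2018, 3): 900,  (2018, 7): 1500,
    (2018, 11): 2100, (2019, 3): 1800, (2019, 7): 2600,
}

def outreach_year(year: int, month: int) -> str:
    """Bucket a calendar month into the Oct-Sep 'outreach year' containing it."""
    start = year if month >= 10 else year - 1
    return f"Oct {start} - Sep {start + 1}"

totals = defaultdict(int)
for (y, m), amount in monthly.items():
    totals[outreach_year(y, m)] += amount
# `totals` now compares whole outreach years rather than noisy individual months
```

With three or more such yearly totals, a before/after comparison around Oct 2018 becomes meaningful, whereas month-by-month comparisons would be dominated by seasonality.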
At this point, I think tracking your impact more subjectively by using questionnaires and interviews would produce more useful information. Not sure if charities would link their donors to you (maybe getting the contact of Brazilians who report donating in the EA survey would be more likely), but you could also try adding an annual questionnaire link to your newsletter/facebook/site like 80,000 Hours does. I’d specifically try to ask people who made their first donations, or who increased their donations, this year what motivated them to do so.
Idea for free (feel free to use, abuse, steal): a tool to automate donations + birthday messages. Imagine a tool that captures your contacts and their corresponding birthdays from Facebook; then, you make (or schedule) one (or more) donations to a number of charities, and the tool will customize birthday messages with a card mentioning that you donated $ in their honor and send it on their corresponding birthdays.
For instance: imagine you use this tool today; it’ll then map all the birthdays of your acquaintances for the next year. Then you’ll select donating, e.g., $1000 to AMF, and 20 friends or relatives you like; the tool will write 20 draft messages (you can select different templates the tool suggests… there’s probably someone already doing this with ChatGPT), one for each of them, including a card certifying that you donated $50 to AMF in honor of their birthday, and send the message on the corresponding date (the tool could let you revise it one day before). There should be some options to customize messages and charities (I think it might be important that you choose a charity that the other person would identify with a little bit—maybe Every.org would be more interested in this than GWWC). So you’ll save a lot of time writing nice birthday messages for those you like. And, if you only select effective charities, you could deduct that amount from your pledge.
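As a toy sketch of the tool’s core loop (all names, dates, and amounts are placeholders; real Facebook, payment, or ChatGPT integration is out of scope):

```python
from datetime import date

def draft_birthday_messages(contacts, total_donation, charity="AMF"):
    """Split a donation evenly among contacts and draft one message per birthday.

    `contacts` is a list of {"name": str, "birthday": date} dicts;
    `charity` is just a label here -- no real donation API is called.
    """
    share = total_donation / len(contacts)
    return [
        {
            "send_on": c["birthday"],  # a scheduler would deliver it on this date
            "text": (f"Happy birthday, {c['name']}! In honor of your birthday, "
                     f"I donated ${share:.2f} to {charity}."),
        }
        for c in contacts
    ]

# Placeholder contact list
friends = [
    {"name": "Ana", "birthday": date(2024, 3, 14)},
    {"name": "Bruno", "birthday": date(2024, 7, 2)},
]
drafts = draft_birthday_messages(friends, 100)
```

A real version would add the template picker and the day-before review step described above; this just shows how little logic the scheduling core actually needs.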
I was recently reading about the International Panel for Social Progress: https://www.ipsp.org/
I had never heard of it before. Which surprised me, since it’s kind of like the IPCC, but for social progress. I got the impression that it somehow failed—in reaching significant consensus, in influencing policy… but why?
I was reading about Meghan Sullivan’s “principle of non-arbitrariness,” and it reminded me of Parfit’s argument against subjectivist reasoning in On What Matters… but why are philosophers (well, and people in general) against arbitrariness? I mean, I do agree it’s a tempting intuition, but I’ve never seen (a) a formal enunciation of what counts as arbitrary (is “arbitrary” arbitrary?), or (b) an a priori argument against it. Of course, if someone’s preference ordering varies totally randomly, we can’t represent them with a utility function, and perhaps we could accuse them of being inconsistent. But that’s not what philosophers’ examples usually chastise: if one has a predictable preference for eating shrimp only on Fridays, or disregards pain only on Thursdays, there’s no instability here – you can represent it with a utility function (having time as a dimension).
There isn’t even any a priori feature allowing us to say that this is evolutionarily unstable, since that could only be assessed once we look at whom our agent will interact with. Which makes me think that arbitrariness is not a priori at all, of course – it depends on social practices such as “giving reasons” for actions and decisions (I don’t think Parfit would deny that; idk about Sullivan). There might be a thriving community of people who only love shrimp on Fridays, for no reason at all; but, if you don’t share this abnormal preference, it might be hard to model their behavior and to cooperate with them – at least, in this example, when it comes to gastronomic enterprises. On the other hand, if you just have a story (even a barely believable one: “it’s a psychosomatic allergy”) to explain this preference, it’s OK: you’re just another peculiar human. I can understand you now; your explanation works as a salience that lets me better predict your behavior.
I suspect many philosophical (a priori-like) intuitions depend more on things like Schelling points (i.e., the problem of finding salient solutions that social practices can converge to) than most philosophers would admit. Of course, late-Wittgenstein scholars are OK with that, since for them everything is about forms of life, language games, etc. But I think relativistic / conventionalist philosophers unduly trivialize this feature, and so neglect an important point: whatever counts as arbitrary is not, well, arbitrary – and we can often show that what we call “arbitrary” is suboptimal, inconsistent with other preferences or intuitions, or hard to communicate (and so a poor candidate for a social norm / convention / intuition).
The Global Catastrophic Risk Institute (GCRI) is currently welcoming inquiries from people who are interested in seeking their advice and/or collaborating with them. These inquiries can concern any aspect of global catastrophic risk but GCRI is particularly interested to hear from those interested in its ongoing projects. These projects include AI policy, expert judgement on long-term AI, forecasting global catastrophic risks and improving China-West relations.
Participation can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get involved by contributing to ongoing dialogue, collaborating on research and outreach activities, and co-authoring publications. Inquiries are welcome from people at any career point, including students, any academic or professional background, and any place in the world. People from underrepresented groups are especially encouraged to reach out.
What I miss when I read about the morality of discounting is a disanalogy that explains why hyperbolic or exponential discount rates might be reasonable for individuals with limited lifespans and such and such opportunity costs, but not for intertemporal collective decision-making. Then we could understand why pure discount is tempting, and maybe even realize there’s something that temporal impartiality doesn’t capture. If there’s any literature about it, I’d like to know. Please, not the basic heuristics & bias stuff—I did my homework.
For instance, if human welfare were something that could grow like compound interest, it’d make sense to talk about a pure exponential discount. If you could guarantee that each of the dead in the Battle of Marathon would have, in expectation, added good to the overall happiness (or whatever you use as a goal function) in the world and transmitted it to their descendants, then you could say that those deaths are a greater evil than the millions of casualties in WW2; you could think of that welfare as “investment” instead of “consumption”. But that’s implausible.
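To make the “welfare as investment” framing concrete, here’s a minimal sketch (my own notation, not drawn from any particular source): pure exponential discounting at rate $\rho$ values a welfare change $w$ occurring $t$ years from now at

```latex
PV(w, t) = \frac{w}{(1+\rho)^{t}}
```

This valuation is exactly right only if welfare compounds like capital, i.e., if a unit of welfare at time $t$ reliably generated $(1+\rho)^{s-t}$ units by any later time $s$. That compounding assumption is the implausible premise.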
On the other hand, there’s a small grain of truth here: a tragedy happening in the past will reverberate longer in the world historical trajectory. That’s just causality + temporal asymmetry.
This makes me think about cluelessness… I do have a tendency to think good events tend to lead to better consequences, in general; you don’t have to be an optimist about it: bad events just tend to lead to worse consequences, too. The opposite thesis, that a good/bad event is as likely to cause good as evil, seems quite implausible. So you might be able to think about goodness as investment a little bit; instead of a pure discount, maybe we should have something like a proxy for “relative impact on world trajectories”?
I just responded to the UNESCO Public Online Consultation on the draft Recommendation on AI Ethics—it was longer and more complex than I expected.
I’d really love to know what other EAs think of it. I’m very unsure about how useful it is going to be, particularly since the US left the organization in 2018. But it’s the first Recommendation of a UN agency on this subject, the text addresses many interesting points (despite greatly emphasizing short-term issues, it does address “long-term catastrophic harms”), I haven’t seen many discussions of it (except from the Montreal AI Ethics Institute), and the deadline is July 31.
I enjoy sending ‘donations as gifts’ - i.e., donating to GD, GW or AMF in honor of someone else (e.g., as a birthday gift). It doesn’t actually affect my overall budget for donations; but this way, I try to subtly nudge this person to consider doing the same with their friends, or maybe even becoming a regular donor.
I wonder if other EAs do that. Perhaps it seems very obvious (for some cultures where donations are common), but I haven’t seen any remark or analysis about it (well, maybe I’m just wasting my time: only one friend of mine stated he enjoyed his gift, but I don’t think he has ever done it himself), and many organizations don’t provide an accessible tool to do this.
P.S.: BTW, my birthday is on May 14th, so if anyone wants to send me one of these “gifts”, I’d rather you donated to GCRI.
I don’t know what you mean by ‘neglected’. I know a lot of people who say they want this and a similar number who are deeply offended by the concept. (Personally, I’m against the idea of giving charitable donations to my favourite charity as a gift, although I’d consider a donation to the recipient’s favourite charity.)
Thanks. Maybe it’s just my blind spot. I couldn’t find anyone discussing this for more than 5 min, except for this one. I googled it and found some blogs that are not about what I have in mind.
I agree that donating to my favourite charity instead of my friend’s favourite one would be impolite, at the least; however, I was thinking about friends who are not EAs, or who don’t usually donate at all. It might be a better gift than a card or a lame souvenir, and it might interest this friend in EA charities (I try to think about which charity would interest this person most). Is there any reason against it?
If your friend doesn’t donate normally, then probably their preferred person to spend money on is themself. It still seems rude to me to say you’re giving them a gift, which should be something they want, and instead give them something they don’t want.
For example, my mother likes flowers. I normally get her flowers for mother’s day. If I switch to giving her a donation to AMF instead of buying her flowers, she will be counterfactually worse off—she is no longer getting the flowers she enjoys. I don’t think that kind of experience would make her more likely to start donating, either.
Did UNESCO’s draft recommendation on AI principles involve anyone concerned with AI safety? The draft hasn’t been leaked yet, and I didn’t see anything in the EA community—maybe my bubble is too small.
https://en.unesco.org/artificial-intelligence
So, I saw Vox’s article on how air filters create huge educational gains; I’m particularly surprised that indoor air quality (actually, indoor environmental conditions in general) is kind of neglected everywhere (except, maybe, in dangerous jobs). But then I saw this (convincing) critique of the underlying paper.
It seems to me that this is a suitable case for a blind RCT: you could install fake air filters in order to control for placebo effects, etc. But then I googled a little bit… and I haven’t found significant studies using blind RCTs in the social sciences for similar cases. I wonder why; at least in cases like this, it doesn’t seem more unethical or harder to do than in medical trials.
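To illustrate what the analysis of such a placebo-controlled trial could look like, here is a minimal sketch with entirely made-up numbers (the scores, group sizes, and 3-point assumed effect are all hypothetical, just for illustration):

```python
import random
import statistics

random.seed(0)  # reproducible illustration

# Hypothetical data: test scores of students in rooms with real air filters
# vs. visually identical fake ones (the placebo arm). All numbers invented.
real_filter = [random.gauss(75, 10) for _ in range(50)]
fake_filter = [random.gauss(72, 10) for _ in range(50)]

observed = statistics.mean(real_filter) - statistics.mean(fake_filter)

# One-sided permutation test: how often does randomly relabeling students
# produce a difference at least as large as the observed one?
pooled = real_filter + fake_filter
n_perm = 2000
hits = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:50]) - statistics.mean(pooled[50:])
    if diff >= observed:
        hits += 1
p_value = (hits + 1) / (n_perm + 1)  # standard small-sample correction
```

Because students and teachers can’t tell the fake filters from the real ones, any difference that survives this test can’t be a placebo effect, which is exactly what the critique of the original paper worried about.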
I was thinking about the EA criticism contest… did anyone submit something like “FTX”? Then give that person a prize! Forecaster of the year!
And second place for the best entries talking about accountability and governance.
If not… then maybe it’s worth highlighting: none of those “critiques” foresaw the main risks that materialized in the community this year. Maybe if we had framed it as a forecasting contest instead… And yet, we have many remarkable forecasters around, and apparently none of them suggested it was dangerous to place so much faith in one person.
Or maybe it’s just a matter of attention. So I ask: what is the most impactful negative event that will happen to the EA community in 2023?
The Effective Altruism movement is not above conflicts of interest
Summary
Sam Bankman-Fried, founder of the cryptocurrency exchange FTX, is a major donor to the Effective Altruism ecosystem and has pledged to eventually donate his entire fortune to causes aligned with Effective Altruism.
By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, the Effective Altruism community is incentivized to accept political stances and moral judgments based on their alignment with the interests of its wealthy donors, instead of relying on a careful and rational examination of the quality and merits of these ideas. Yet the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought.
In practice, Sam Bankman-Fried has enjoyed highly favourable coverage from 80,000 Hours, an important actor in the Effective Altruism ecosystem. Given his donations to Effective Altruism, 80,000 Hours is, almost by definition, in a conflict of interest when it comes to communicating about Sam Bankman-Fried and his professional activities. This raises obvious questions regarding the trustworthiness of 80,000 Hours’ coverage of Sam Bankman-Fried and of the topics his interests are linked to (quantitative trading, cryptocurrency, the FTX firm…).
In this post, I argue that the Effective Altruism movement has failed to identify and publicize its own potential conflicts of interest. This failure reflects poorly on the quality of the standards the Effective Altruism movement holds itself to. Therefore, I invite outsiders and Effective Altruists alike to keep a healthy level of skepticism in mind when examining areas of the discourse and action of the Effective Altruism community that are susceptible to being affected by incentives conflicting with its stated mission. These incentives are not just financial in nature; they can also be linked to influence or prestige, or even emerge from personal friendships and other social dynamics. The Effective Altruism movement is not above being influenced by such incentives, and it seems urgent that it acts to minimize conflicts of interest.
I spent SO much time trying to find this entry after the FTX news broke. It didn’t forecast FTX fraud, but it has still absolutely been elevated by recent events. You should re-up this on the forum to see if more people will engage with it now.
This post talking about the risks of FTX aged very well, although it wasn’t part of the contest. It was fairly ignored at the time, but I did agree with it and posted so in the comments.
I expect EA to get cautious around financial stuff for a while (hopefully), so another scandal would come from somewhere else. Perhaps a prominent figure will be exposed as an abuser of some kind?
We know that the track record of pundits is terrible, but many international consultancy firms have been publishing annual “global risks reports” like the WEF’s, where they list the main global risks (e.g., a top 10) for a certain period (e.g., 2 years). Well, I was wondering if someone has measured their consistency; I mean, I suppose that if you publish in 2018 a list of the top 10 risks for 2019 & 2020, you should expect many of the same risks to show up in your 2019 report (i.e., if you are a reliable predictor, risks in report y should appear in report y+1). Hasn’t anyone checked this yet? If not, I’ll file this under “pet projects I’ll probably not have time for in the foreseeable future”.
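The check itself would be cheap. A minimal sketch of the consistency metric (the risk labels below are placeholders in the style of the WEF report, not actual report data):

```python
# Hypothetical top-risk lists in the style of the WEF Global Risks Report;
# the entries are placeholders, not actual report data.
reports = {
    2018: {"extreme weather", "natural disasters", "cyberattacks",
           "data fraud", "failure of climate action"},
    2019: {"extreme weather", "natural disasters", "cyberattacks",
           "data fraud", "failure of climate action"},
    2020: {"extreme weather", "natural disasters", "biodiversity loss",
           "human-made environmental disasters", "failure of climate action"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two risk sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b)

# If report y reliably forecasts multi-year risks, report y+1 should
# largely repeat it, so year-over-year overlap should be high.
years = sorted(reports)
consistency = {(y1, y2): jaccard(reports[y1], reports[y2])
               for y1, y2 in zip(years, years[1:])}
```

A forecaster whose consecutive top-10 lists barely overlap is either claiming the risk landscape turns over completely every year, or isn’t really forecasting multi-year risks at all.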
I guess any report must be considered on its own terms but I’ve been pretty down on this stuff as a category ever since I heard the Center for Strategic and International Studies was cheerleading the idea that there were WMDs in Iraq.
Opportunity for Austrians Article by Seána Glennon: “In the coming week, thousands of households across Austria will receive an invitation to participate in a citizens’ assembly with a unique goal: to determine how to spend the €25 million fortune of a 31-year-old heiress, Marlene Engelhorn, who believes that the system that allowed her to inherit such a vast sum of money (tax free) is deeply flawed.”
Are we in an Original Position regarding the interests of our descendants?
If you:
Had to make a decision about the basic structure of a society where your distant descendants will live (in 200 or 2000 years), and
only care about their welfare, and
don’t know (almost) anything about who they will be, how many, how their society will be structured, etc.,
Then you are under some sort of veil of ignorance, in a situation quite similar to Rawls’s Original Position… with one major difference: it’s not an abstract thought experiment for ideal political theory.
What led me to this is that I suspect that the welfare of my descendants will likely depend more on the basic structure of their society than on any amount of resources I try to transfer to them – but I’m not sure about that: there are some examples of successful transfers of great wealth through many generations.
I’m not sure Rawls’s theory of justice would follow from this, but it’s quite possible: when I have in mind the welfare of a subset of unidentified individuals in the future, I feel tempted to prefer that their society abide by something like his two principles of justice. According to Harsanyi, it’s also tempting to prefer something like average utilitarianism (which, in this context, converges to sum-utilitarianism, because we are abstracting away populational variations).
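A quick sketch of the Harsanyi point, in my own notation: behind the veil, if you are equally likely to be any of the $n$ individuals with utilities $u_1, \dots, u_n$, then

```latex
E[U] = \sum_{i=1}^{n} \frac{1}{n}\, u_i = \frac{1}{n} \sum_{i=1}^{n} u_i
```

so maximizing expected utility behind the veil amounts to maximizing average utility; and with $n$ held fixed (as when we abstract away populational variations), ranking societies by average and by total utility coincide.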
After thinking this through, I didn’t notice any major philosophical opinions of mine changing, but I was surprised that I never found any argument over this in the literature.
Maybe because it’s not such a good way of reasoning about future generations: there are more effective ways of improving future welfare than fostering political liberalism. But I guess this is the sort of reasoning we’d expect from something like a reciprocity-based theory of longtermism.
Two researchers at the RAND Corporation recently argued for a related idea. From our Future Matters summary:
Douglas Ligor and Luke Matthews’s Outer space and the veil of ignorance proposes a framework for thinking about space regulation. The authors credit John Rawls with an idea actually first developed by the utilitarian economist John Harsanyi: that to decide what rules should govern society, we must ask what each member would prefer if they ignored in advance their own position in it. The authors then note that, when it comes to space governance, humanity is currently behind a de facto veil of ignorance. As they write, “we still do not know who will shoulder the burden to clean up our space debris, or which nation or company will be the first to capitalize on mining extraterrestrial resources.” Since the passage of time will gradually lift this veil, and reveal which nations benefit from which rules, the authors argue that this is a unique time for the international community to agree on binding rules for space governance.
The T20 Brasil process will put forward policy recommendations to G20 officials involved in the Sherpa and Finance tracks in the form of a final communiqué and task forces recommendations.
To inform these documents, we are calling upon think tanks and research centres around the world – this invitation extends beyond G20 members – to build and/or reach out to their networks, share evidence, exchange ideas, and develop joint proposals for policy briefs. The latter should put forward clear policy proposals to support G20 in addressing global challenges.
Selection criteria
Policy briefs must be related to the 36 subtopics that have been selected based on (i) the suggestions received from more than 100 national and foreign think tanks and research centres that have already expressed their interest in engaging with the T20 Brasil process and activities and (ii) the three priorities spelt out by the G20 Brazil presidency. These subtopics are organised under the six Task Forces themes.
My research group is designing a course on global risks for university students in Brazil. I am looking for syllabi and teaching materials that could help inspire us. Right now I am using the WEF report, the Global Challenges report, and Legal Topics in Effective Altruism, and taking a look at the more practical topics in teaching materials from GPI. But I would like to see something from CSER, maybe? Does anyone have any tips?
Send me a DM if you’re interested; I’d be happy to share a bunch of resources and to put you in contact with some people who could help.
Experts from the finance sector, academia, and civil society worldwide are invited to review and provide feedback on the Draft Financial Institutions Net-Zero (FINZ) Standard. The public consultation survey will be open until September 30.
The primary aims of this consultation survey are to gather input from external stakeholders on the FINZ Standard—Consultation Draft v0.1, with particular focus on:
The clarity
Specific approaches to:
Evidencing entity-level commitments and leadership
Determining and identifying exposure and portfolio emissions
Portfolio climate alignment target
Emissions-intensive sector targets
Reporting
The SBTi’s direction of travel regarding financial institutions
Areas of support and improvement
Complete the survey now and contribute to the development of this essential standard.
The SBTi will also host three in-person workshops at Climate Week NYC and in London to gather expert insight on two important topics: neutralization and net-zero finance.
The workshops will be held on the following dates:
The role of carbon dioxide-removal in corporate net-zero transitions: New York City, September 23 | 3:00-6:30pm ET
Financial Institutions Net-Zero Standard Consultation: New York City, September 25 | 3:30-5:30pm ET
The role of carbon dioxide-removal in corporate net-zero transitions: London, October 8 | 2:30-6:00pm BST
Experts in each field are invited to participate by registering their interest. The precise locations will be shared with selected attendees. Register your interest here.
Scope 3 Discussion Paper feedback form
The SBTi is in the process of revising its Corporate Net-Zero Standard and one of the channels through which stakeholders are encouraged to engage is via the Scope 3 Discussion Paper feedback form. The Scope 3 Discussion Paper outlines the SBTi’s initial thinking on potential changes being considered for scope 3 target setting, including key principles and concepts.
In case anyone is interested, Peter Turchin will show up on Monday in a study group I joined
The Sciences of Ethics and Political Philosophy Reading Group
Disentangling the evolutionary drivers of social complexity: A comprehensive test of hypotheses
Peter Turchin
Monday, November 13, 2 PM (WET/UTC), online
In this session, the group will discuss the paper by Peter Turchin et al. (2022), “Disentangling the evolutionary drivers of social complexity: A comprehensive test of hypotheses” (Science Advances, 8(25). DOI: 10.1126/sciadv.abn3517). Session with the confirmed presence of Peter Turchin.
Anyone interested in participating can send an email to Filipe Faria: filipefaria@fcsh.unl.pt.
The Sciences of Ethics and Political Philosophy Reading Group is an international monthly-assembling online reading group co-hosted by the CFCUL and the Ethics and Political Philosophy Lab (EPLab) of the IFILNOVA. More information about the group here.
The famous passage is from the Bhagavad Gita (BG), the Hindu religious epic. It suggests that Nolan is associating Oppie with the terrible form of Vishvarupa – call this the “promethean” interpretation. But Oppie is actually more similar to prince Arjuna: the hero with a crisis of conscience who doesn’t want to join the battlefield of Kurukshetra because it will bring uncontrollable destruction—but who ends up doing it anyway, because that’s his destiny, as explained by Krishna / Vishnu, the “destroyer of worlds”. This “fatalistic” interpretation is reinforced by other scenes – e.g., Oppie’s visions of destruction, and a conversation where President Truman basically tells Oppie that he’s not that important…
Enrico Fermi, one of the brightest among the many geniuses on screen, doesn’t get enough screen time to state his famous paradox. Given that the universe is 13.7 billion years old, and that there are so many stars in the galaxy, and certainly many of them are able to evolve intelligent life just like ours… where is everyone? Surely we should be seeing evidence of alien life somewhere by now—like radio waves, space structures, or a party invitation. So why this silence? Are aliens avoiding us?
One of the main explanations is that intelligent life might be self-defeating: as technology progresses, the capacity of a species to destroy itself increases faster than its capacity to mitigate that risk.
So, OK, this movie is astonishing… but dear Chris Nolan, if you ever consider extending it or turning it into a series, there are many things you might want to do. But two short scenes explaining to the viewer (1) the BG quotation and (2) Fermi’s paradox would greatly improve the understanding of one of the tenets of the movie—Oppie’s concern that they may start an unstoppable “chain reaction that’ll consume the world”.
The number of global Ultra High Net Worth Individuals fell by 6% this year, according to Wealth-X—after steady increases in the last few years. Thus, I’m afraid the lack of funding from SBF may be the beginning of a trend—at least for community building and longtermism.
Essay Prize of the Portuguese Philosophy Society—Philosophical papers on Artificial intelligence
I’m not sure this will interest top researchers in AI philosophy, but maybe someone might see this as a low-hanging fruit:
this year’s “PRÉMIO DE ENSAIO DA SOCIEDADE PORTUGUESA DE FILOSOFIA” is about the challenges AI poses for “the philosophical understanding of the human”.
Contingent conventions and the Tragedy of “Happy Birthday lock-in”
Will and Rob were talking about how the idea that there’s an inevitable convergence in moral values is wrong, and they mention some examples of contingencies. The first is the “Tragedy of ‘Happy Birthday’ lock-in”:
The melody for “Happy Birthday” is really atrocious. It’s like a dirge. It has this really large interval; no one can sing it. I can’t sing it. And so really, if you were going to pick a tune to be the most widely sung song in the world, it wouldn’t be that tune. Yet, it’s the one that took off, and it’s now in almost all major languages.
[Nuka zaria: change the human trajectory by singing a different song for birthdays. My current suggestion is Weird Al Yankovic’s Happy Birthday, but maybe something more optimistic and simple would be nice, too.
(on the other hand, my new s-risk is that we shift to this other attractor that haunts Brazilian birthday parties)]
Their second example is neckties:
It’s such a bizarre item of clothing. There’s obviously no reason why wearing a bit of cloth around your neck would indicate status or prestige. We could have had different indicators of that. Yet that is, as a matter of fact, the thing that took off.
(Nuka zaria: let’s all shift to wearing bandanas or gaucho scarves, which are more convenient and useful.)
But then, Will says something that bothers me: “There’s just this fundamental arbitrariness.” One interpretation of this sentence is true: you can’t predict in advance what sort of fancy item of clothing an elite will adopt, or what melody will be the most widely sung in the world; i.e., it’s hard to predict what precise convention will emerge. But it’s certainly not true that you can’t explain them in hindsight (OK, hindsight is 20/20). Moreover, and here I take some risk, I think it’s not true that one can’t identify, in advance, features of the convention that will be adopted – i.e., what counts as a salient point of attraction is not random.
The Hill sisters (a kindergarten principal and a composer) developed the song “Good Morning to All” (a predecessor of “Happy Birthday to You”) as something that children would find easy to sing – i.e., they optimized for simplicity and used good old trial and error. Thus, it probably became so popular precisely because kids loved it, and because nowadays birthday parties are made for kids to feel special. That’s not what I would call “arbitrary”—it’s certainly not random.
I think the explanation for neckties is a bit different. The French liked the knotted neckerchiefs of Croatian mercenaries, and as Louis XIII and Louis XIV started wearing lace cravats, the nobility followed suit. So what began as a useful piece of cloth to close your jacket ended up being copied by a foreign elite because it was seen as a fancy ornament; and then it evolved into more and more complex designs precisely to signal its decorative function, thus distinguishing the upper class from the commoners. Again, I wouldn’t call that random.
This, I guess, is good news: arbitrariness is not so pervasive that we cannot consciously influence the future.
Might “A Beacon in the Galaxy” be our new “Three-Body Problem” (by Cixin Liu)?
This paper proposes transmitting an “updated Arecibo-like” message to a star cluster near the galaxy’s center, a “selected region of the Milky Way which has been proposed as the most likely for life to have developed”. Caleb Scharf summarizes the issue here. Even if we set aside the possibility of conflict, maybe discussions on Space Governance should include how we might communicate with other types of intelligent life—like “at least don’t mention that we kill animals”.
I am inclined to answer “no,” because I’ve seen the subject pop up in some discussions on economics in this Forum… on the other hand, I’ve also seen some EAs dismiss matters of economic distribution as secondary—if not as an obstacle to economic progress. I remember seeing this subject figure in some critiques of the movement, or mentioned en passant when the subject is billionaires’ philanthropy. Anyway, I’d like to document and share here some of my impressions from a 30-minute search on the subject.
My attention was recently drawn to the matter by this survey showing a consensus in the IGM Forum (from *before* the pandemic – though the results were only released this week) that inequality of income and wealth is a danger to capitalism and to democracy. It fits CORE’s survey among students on “what is the most pressing problem economists should address”. Though this is evidence of the importance of the matter, it also suggests that it’s not neglected.
Of course, inequality is particularly relevant to studying and fighting poverty—as shown in this post by GWWC’s Hazell and Holmes. However, the subject probably also impacts the trajectory of our societies, as this kind-of-neglected GPI working paper / forum post argues: “we have instrumental reason to reduce economic inequality based on its intertemporal effects in the short, medium and the very long term” […] “because greater inequality could increase existential risk”.
Update: a new IGM Forum survey shows that less than 10% of the consulted economists disagree, and most of them agree, that the increasing share of income and wealth among the richest people in a number of advanced countries is:
a) giving significantly more political power to the wealthy (90%, weighted by confidence);
b) having a significantly negative effect on intergenerational social mobility (79%);
c) a major threat to capitalism (61%).
Does anyone have any idea / info on what proportion of infected cases are getting Covid-19 inside hospitals?
(Epistemic status: low, but I didn’t find any research on this, so the hypothesis deserves a bit more attention.)
1. Nosocomial infections are serious business. Hospitals are basically big buildings full of dying people and the stressed personnel who go from one bed to another trying to avoid that. Throw a deadly and very contagious virus into one, and it becomes a slaughterhouse.
2. Previous coronaviruses spread rapidly in hospitals and other care units. That made South East Asia kind of prepared for possibly similar epidemics (maybe I’m wrong, but in the news their medical staff are always in hazmat suits, unlike most health workers in the West). Maybe this is a neglected point in the successful approach in South East Asia?
3. I know hospitals have serious protocols to avoid this… but it takes only a few careless cleaning staff, or a patient’s relatives going to the cafeteria, or a badly designed airflow, to ruin everything. Just one hospital chain in Brazil concentrates most of the deaths in São Paulo, and 40% of the national total.
Did anyone see the spread of Covid through nursing homes coming before? It seems quite obvious in hindsight—yet, I didn’t even mention it above. Some countries report almost half of the deaths from those environments.
(Would it have made any difference? I mean, would people have emphasized patient safety, etc.? I think that’s implausible, but has anyone tested whether this isn’t just some statistical effect, due to the concentration of old-aged people with chronic diseases?)
IMF climate change challenge
“How might we integrate climate change into economic analysis to promote green policies?
To help answer this question, the IMF is organizing an innovation challenge on the economic and financial stability aspects of climate change.”
https://lnkd.in/dCbZX-B
Could we have catastrophic risk insurance?
Mati Roy once suggested, in this shortform, that we could have “nuclear war insurance,” a mutual guarantee to cover losses due to nukes, to deter nations from a first strike; I dismissed the idea because, in this case, it wouldn’t be an effective deterrent (if you have enough power and reasons to nuke someone, insurance costs won’t be among your relevant concerns).
However, I wonder if this could be extrapolated to other C-risks, such as climate change—something insurance and financial markets are already trying to price. Particularly for C-risks that are not equally distributed (e.g., climate change will probably be worse for poor tropical countries) and that are subject to great uncertainty…
I mean, of course I don’t expect countries would willingly cover losses in case of something akin to societal collapse; but, given the level of uncertainty, this could still foster more cooperation, as it’d internalize and dilute future costs across all participant countries… on the other hand, of course, any form of insurance implies moral hazard, etc. But even this has a bright side, as it’d provide a legitimate case for having some kind of governance / supervision / enforcement on the subject… I guess what I’m really asking is: why don’t we have a “climate Bretton Woods”?
(I guess you could apply the argument for FHI’s Windfall Clause here—it’s just that they’re concerned with benefits and companies, while I’m worried about risks and countries.)
Even if that’s not workable for climate change, would it work with other risks? E.g., epidemics?
(I think I should have done better research on this… I guess either I am underestimating moral hazards and the problem of getting countries to cooperate, or there’s a huge flaw in my reasoning here)
I no longer endorse this comment because, since then, I found out that there’s a lot of research on internalising climate change externalities—and that Weitzman (2012) and others present mitigation as akin to insurance. I still wonder how much of this line of reasoning could extrapolate to other GCR.
It turns out that I changed my mind again. I don’t see why we couldn’t establish Pigouvian taxes for (some?) c-risks. For instance, taxing nuclear weapons (or their inputs, such as nuclear fuel) according to some tentative guesstimate of the “social cost of nukes” would provide funding for peace efforts and possibly even be in the best interest of (most of?) current nuclear powers, as it would help slow down nuclear proliferation. This is similar to Barratt et al.’s paper on making gain-of-function researchers buy insurance.
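To make the guesstimate concrete, here is a toy sketch of how such a tax could be set. Every number below is a hypothetical placeholder I made up for illustration, not an actual estimate of the social cost of nukes:

```python
# Toy Pigouvian tax on nuclear warheads, in the spirit of a "social cost
# of nukes." All inputs are hypothetical placeholders, not estimates.

annual_prob_nuclear_war = 0.001   # hypothetical: 0.1% chance per year
expected_damage_usd = 1e15        # hypothetical: $1 quadrillion in damages
global_warheads = 12_500          # roughly the order of current stockpiles

# Expected annual social cost, spread evenly per warhead:
expected_annual_cost = annual_prob_nuclear_war * expected_damage_usd
tax_per_warhead = expected_annual_cost / global_warheads

print(f"Annual tax per warhead: ${tax_per_warhead:,.0f}")
# prints: Annual tax per warhead: $80,000,000
```

Even with these made-up numbers, the point survives: any plausible probability times any plausible damage figure yields a per-warhead tax large enough to fund substantial peace efforts.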
Is there anything like a public repository / document listing articles and discussions on social discount rates (similar to what we have for iidm)?
(I mean, I have downloaded a lot of papers on this—Stern, Nordhaus, Greaves, Weitzman, Posner, etc.—and there are many lit reviews, but I wonder if someone is already approaching it in a more organized way)
Future of Life Institute is looking for translators! (Forwarded from FLI’s Newsletter) The outreach team is now recruiting Spanish and Portuguese speakers for translation work! The goal is to make our social media content accessible to our rapidly growing audience in Central America, South America, and Mexico. The translator would be sent between one and five posts a week for translation. In general, these snippets of text would only be as long as a single tweet. We prefer a commitment of two hours per week but do not expect the work to exceed one hour per week. The hourly compensation is $15. Depending on outcomes for this project, the role may be short-term. https://lnkd.in/d5YqX-h For more details and to apply, please fill out this form. We are also registering other languages for future opportunities so those with fluency in other languages may fill out this form as well.
Not super-effective, but given Sanjay’s post on ESG, maybe there are people interested: Ethics and Trust in Finance 8th Global Prize The Prize is a project of the Observatoire de la Finance (Geneva), a non-profit foundation, working since 1996 on the relationship between the ethos of financial activities and its impact on society. The Observatoire aims to raise awareness of the need to pursue the common good through reconciling the good of persons, organizations, and community. [...] The 8th edition (2020-2021) of the Prize was officially launched on 2 June 2020. The deadline for submissions is 31 May 2021. The Prize is open to people under the age of 35 working in or studying finance. Register here for entry into the competition. All essays submitted to the Prize are assessed by the Jury, comprising academics and professional experts.
Is there some tension between population ethics + hedonic utilitarianism and the premises people in wild animal suffering use (e.g., negative utilitarianism, or the negative welfare expectancy of wild animals) to argue against rewilding (and in favor of environment destruction)?
If wild animals have bad lives on net, then indiscriminately increasing wild animal populations is bad under any plausible theory of population ethics.
Obviously. But then, first, Effective Environmentalists are doing great harm, right? We should be arguing more about it.
On the other hand, if your basic welfare theory is hedonistic (at least for animals), then one good long life compensates for thousands of short miserable ones—because what matters is qualia, not individuals. And though I don’t deny animals suffer all the time, I guess their “default welfare setting” must be positive if their reward system (at least for vertebrates) is to function properly.
So I guess it’s more likely that we have some sort of instance of the “repugnant conclusion” here.
Ofc, this doesn’t imply we shouldn’t intervene on wild environments to reduce suffering or increase happiness. What is at stake is: U(destroying habitats) > U(restoring habitats)
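The aggregation at stake in the argument above can be sketched with a toy calculation—all welfare numbers here are entirely hypothetical illustrations, chosen only to make the “one good long life vs. thousands of short miserable ones” comparison concrete:

```python
# Toy hedonic-utilitarian aggregation: what matters is total welfare
# (individuals x annual welfare x years lived), not head counts.
# All numbers are hypothetical illustrations, not estimates.

def total_welfare(population, welfare_per_year, lifespan_years):
    """Aggregate welfare = population x annual welfare x lifespan."""
    return population * welfare_per_year * lifespan_years

# One long, good life...
good_lives = total_welfare(population=1, welfare_per_year=10, lifespan_years=60)

# ...versus a thousand short, mildly negative lives.
miserable_lives = total_welfare(population=1000, welfare_per_year=-0.5, lifespan_years=1)

# Under pure hedonic aggregation, the single good life can outweigh them:
print(good_lives + miserable_lives > 0)  # prints: True (600 - 500 = 100)
```

The sign of the comparison is of course entirely driven by the chosen parameters, which is precisely why the “default welfare setting” of wild animals matters so much to the conclusion.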
This is something interesting that I’ve been thinking about too, as someone who identifies as an environmentalist and who cares about animals. I would say most mainstream environmentalists promote rewilding but it’s not that common with Effective Environmentalism from what I’ve seen so far. You might say it gets lumped in with afforestation but that isn’t exactly rewilding nor that popular within EE anyway. Certainly the issue of more wild animal suffering is one I’ve raised when talking to less-EA aligned folks about rewilding and that’s not gone down well but I haven’t seen it discussed much in EE spaces.
Good point, thanks. However, even if EE and Wild animals welfare advocates do not conflict in their intermediary goals, their ultimate goals do collide, right? For the former, habitat destruction is an evil, and habitat restoration is good—even if it’s not immediately effective.
Does your feeling that the default state is positive also apply to farm animals? Their reward system would have been shaped by artificial selection over the past few generations, but it is not immediately clear to me whether you think that would make a difference.
First, it’s not a feeling, it’s a hypothesis. Please, do not mistake one for the other.
It could apply to them if they were not observed under conditions of stress and captivity, exhibiting behaviors consistent with psychological suffering—like neurotic tics, vocalization, or apathy.
(Tbh, I don’t quite see your point here, but I guess you possibly don’t see mine, either)
But looking beyond the immediate aftermath, the risk of social unrest spikes in the longer term. Using information on the types of unrest, the IMF staff study focuses on the form that unrest typically takes after an epidemic. This analysis shows that, over time, the risk of riots and anti-government demonstrations rises. Furthermore, the study finds evidence of heightened risk of a major government crisis—an event that threatens to bring down the government and that typically occurs in the two years following a severe epidemic.
If history is a predictor, unrest may reemerge as the pandemic eases. The threats may be bigger where the crisis exposes or exacerbates pre-existing problems such as a lack of trust in institutions, poor governance, poverty, or inequality.
‘Good’ news: as expected, as real interest rates fall, so do social discount rates, increasing the social cost of carbon. (Not news, OK, but monetary policy-makers explicitly acknowledging it seems good.) Bad news: of course, the resulting rate still seems to be higher than a normative SDR based on time-neutrality.
Policy Action 11: Ensuring Responsibility, Accountability and Privacy 94. Member States should review and adapt, as appropriate, regulatory and legal frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their lifecycle. Governments should introduce liability frameworks or clarify the interpretation of existing frameworks to make it possible to attribute accountability for the decisions and behaviour of AI systems. When developing regulatory frameworks governments should, in particular, take into account that responsibility and accountability must always lie with a natural or legal person; responsibility should not be delegated to an AI system, nor should a legal personality be given to an AI system.
I see the point of the last sentence is to prevent individuals and companies from escaping liability due to AI failures. However, the last bit also seems to prevent us from creating some sort of “AI DAO”—i.e., a legal entity totally implemented by an autonomous system. This doesn’t seem reasonable; after all, what is a company if not some sort of artificial agent?
Does anyone know of, or have a serious opinion / analysis on, the European campaign to tax meat? I read some news in Le Monde, but nothing of EA-level seriousness. I mean, it seems a pretty good idea, but I saw no data on possible impact, probability of adoption, possible ways to contribute, or even possible side-effects.
(not the best comparison, but worth noting: in Brazil, a surge in meat prices caused an inflation peak in December and eroded the government’s support—yeah, people can tolerate politicians meddling with criminals and fascism, as long as they can have barbecue)
I was reading this recommended book and wondering how many of the recent changes in our world are due to demographic transitions—i.e., boomers. We know the shape of the population pyramid affects unemployment rates and wealth concentration (moreover, think about how income predicts life expectancy, at least in very unequal countries—so one can expect a higher proportion of wealthier individuals in old age), and maybe even rising health costs and voting patterns—e.g., I just confirmed that, in Brazil, opinions about the government among young and old people are symmetrically opposite.
Idk what to infer from here. It seems to me there’s an elephant in the room: I read a lot about economics, philosophy, and politics, and I’ve seen almost no mention of it except in discussions of one of those topics alone—never something concerning all of them. But I do think this should interest EAs, because much of our economic and political theory fails to account for an aging population—something quite remarkable in human history. So, I’d appreciate any tips on reading that takes demography seriously (except for Peter Turchin, whom I already follow).
So Asterisk dedicates a whole self-aggrandizing issue to California, leaves EV for Obelus (what is Obelus?), starts charging readers, and, worst of all, celebrates low prices for eggs and milk?
Obelus seems to be the organizational name under which Asterisk is registered—both the asterisk and the obelus are punctuation symbols so I highly doubt that Obelus exists separately from Asterisk.
Charging readers is probably an attempt to be financially independent of EV, which is a worthy goal for all EA organizations and especially media organizations that may have good cause to criticize EV at some point.
The eggs and milk quip is just a quip about their new prices; I don’t understand what’s offensive about it.
The California issue is weird to me too.
[Conflict note: writing an article for Asterisk now]
The eggs and milk quip might be offensive on animal welfare reasons. Eggs at least are one of the worst commonly consumed animal products according to various ameliatarian Fermi estimates.
FWIW EV has been off-boarding its projects, so it isn’t surprising that Asterisk is now nested under something else. I don’t know anything about Obelus Inc.
Like Karthik, I don’t really understand what is so terrible about this, but I agree that the California edition is at least strange. It’s interesting how many of the ideas central to EA originate from California. While exploring the origin stories of these ideas is intriguing, I would be much more interested in an issue that explores ideas from far outside that comfort zone and see what they can teach us.
However, I’m not an editor and don’t think I’d make a good one either 😅
FWD: Invitation to the Future Generations Initiative Launch Event
We are delighted to extend an official invitation to you for the official launch of the Future Generations Initiative, which will take place on February 21, from 16.00 to 18.00 at Atelier 29 - Rue Jacques de Lalaing 29, 1000 Brussels, and will also be streamed online.
To confirm your attendance, kindly fill out this short registration form by Monday, February 19, at 18.00 CET.
There is an urgent need for the EU to embed the rights of Future Generations in its decision-making processes. However, a model for such representation is currently lacking.
A diverse group of NGOs is working together to convince decision makers that the time to act for Future Generations is now. On February 21, we will launch our coalition to promote this important issue as we approach the EU elections and the next political cycle begins.
You can find the event agenda by following this link.
By completing the registration form, you have the option to attend either in person or virtually.
The event will feature the presentation of the Future Generations proposal and policy demands, and a reception will follow.
Please do not hesitate to reach out to marco@thegoodlobby.eu should you have any questions.
Maybe I didn’t understand it properly, but I guess there’s something wrong when the total welfare score of chimps is 47 and, for humans in low middle-income countries it’s 32.
Depending on your population ethics, one may think “we should improve the prospects in poor countries”, but others can say “we should have more chimps.”
Or this scale has serious problems for comparisons between different species.
Hey Ramiro and Thomas,
Thanks for your engagement with this system. I think in general our system has lots of room for improvement—we are in fact working on refining it right now. However, I am pretty strongly in favor of having evaluation systems even if the numbers are not based on all the data we would like them to be or even if they come to surprising results.
Cross-species comparison is of course very complex when it comes to welfare. Some factors are fairly easy to measure across species (such as death rates) while others are much more difficult (disease rates are a good example of where it’s hard to find good data for wild animals). I can imagine researchers coming to different conclusions given the same initial data.
It’s worth underlining that our system does not aim to evaluate the moral weight of a given species, but merely to assess a plausible state of welfare. (Thomas: this would be one caveat to add when sharing.) In regards to moral weight (e.g. what moral weight do we accord a honey bee relative to a chicken etc.) – that is not really covered by our system. We included the estimates of probability of consciousness per Open Phil’s and Rethink Priorities’ reports on the subject, but the moral weight of conscious human and non-human animals is a heavily debated topic that the system does not go into. Generally I recommend Rethink Priorities’ work on the subject.
In regards to welfare, I think it’s conceptually possible that e.g. a well treated pet dog in a happy family may be happier and their life more positive than a prisoner in a North Korean concentration camp. This may seem unintuitive, but I also find the inverse conclusion unintuitive. As mentioned above, that doesn’t mean that we should be prioritizing our efforts on improving the welfare of pet dogs vs. humans in North Korea. Prioritizing between different species is a complex issue, of which welfare comparisons like this index may form one facet without being the only tool we use.
To cover some of the specific claims.
- Generally, I think there is some confusion here between the species having control vs the individual. For example, North Korea as a country has a very high level of control over their environment, and can shape it dramatically more than a tribe of chimps can. However, each individual in North Korea has extremely limited personal control over their life – often having less free time and less scope for action than a wild chimp would practically (due to the constraints of the political regime) if not theoretically (given humanity’s capabilities as a species).
- We are not evaluating hunter gatherers, but people in an average low-income country. Life satisfaction measures show that in some countries, self-evaluated levels of subjective well-being are low. (Some academics even think that this subjective well-being could be lower than those of hunter gatherer societies.)
- Humanity has indeed spent a great deal more on diagnosing humans than chimps. However, there is some data on health that is comparable, particularly when it comes to issues that are clearer to observe such as physical disability.
- There is in fact some research on hunger and malnutrition in wild chimps, so this was not based on intuitions but on best estimates of primatologists. Malnourishment in chimps can be measured in some similar ways to human malnourishment, e.g. stunting of growth. I do think you’re right that concerns with unsafe drinking water could be factored into the disease category instead of the thirst one.
I would be keen for more research to be done on this topic but I would expect it to take a few hours of research into chimp welfare and a decent amount of research into human welfare to get a stronger sense than our reports currently offer. I think these sorts of issues are worth thinking about and we would like to see more research being done using such a system that aims to evaluate and compare the welfare of different species. Thank you again for engaging with the system—we’ll bear your comments in mind as we work on improvements.
Thanks for this clarifying comment. I see your point—and I am particularly in agreement with the need for evaluation systems for cross-species comparison. I just wonder if a scale designed for cross-species comparison might be not very well-suited for interpersonal comparisons, and vice-versa—at least at the same time.
Really, I’m more puzzled than anything else—and also surprised that I haven’t seen more people puzzled about it. If we are actually using this scale to compare societies, I wonder if we shouldn’t change the way welfare economists assess things like quality of life. In the original post, the countries compared were Canada (Pop: 36 million, HDI: .922, IHDI: .841) and India (Pop: 1.3 billion, HDI: .647, IHDI: .538).
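For reference, the IHDI is just the HDI discounted for inequality (UNDP defines IHDI = HDI × (1 − loss)), so the inequality “loss” implied by the figures quoted above can be backed out directly:

```python
# Back out the inequality loss implied by the HDI/IHDI pairs above.
# UNDP defines IHDI = HDI * (1 - loss), hence loss = 1 - IHDI / HDI.

def inequality_loss(hdi, ihdi):
    """Fraction of potential human development lost to inequality."""
    return 1 - ihdi / hdi

canada_loss = inequality_loss(0.922, 0.841)  # ~8.8% lost to inequality
india_loss = inequality_loss(0.647, 0.538)   # ~16.8% lost to inequality

print(f"Canada: {canada_loss:.1%}, India: {india_loss:.1%}")
# prints: Canada: 8.8%, India: 16.8%
```

So India loses roughly twice as large a share of its human development to inequality as Canada does—one more dimension that a cross-species welfare scale would need to handle if used for interpersonal comparisons.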
Finally, really, please, don’t take this as a criticism (I’m a major fan of CE), but:
First, I am not sure how people from developing countries (particularly India) would rate the welfare of current humans vis-à-vis chimps, but I wonder if it’d be majorly different from your overall result. Second, I am not sure about the relevance of mentioning hunter-gatherers; I wouldn’t know how to compare the hypothetical welfare of the world’s super-predator before civilization with that of current chimps or current people. Even if I knew, I would take life expectancy as an important factor (a general proxy for how someone is affected by health issues).
Someone I know also noticed this a couple of months ago, so I looked into the methodology and found some possible issues. I emailed Joey Savoie, one of the authors of the report; he hasn’t responded yet. Here’s the email I sent him:
Thanks. I’m glad to see I wasn’t profoundly misunderstanding it. Now, I think this is a very important issue: either there’s something really wrong with Charity Entrepreneurship’s assessment of welfare in different species, or I will really have to rethink my priorities ;)
When you post a chart like this, I recommend linking to the source. Thomas linked to a blog post below, but this was also posted on the Forum. The initial comment touches on your concern, but I don’t think explains CE’s beliefs fully.
True, thanks.
I inserted a link to the CE’s webpage on the Weighted Factor Model
Daron Acemoglu’s interesting review of Ord’s The Precipice: https://www.project-syndicate.org/onpoint/how-to-think-about-existential-and-immediate-risks-by-daron-acemoglu-2021-05
Just sharing some concerns about live exports (yeah, the transportation of living animals in ships)
I wonder if we could do more about live exports. I would like to know if it’s worse than some other practices in factory farming that we often highlight (like caging hens), but it seems more likely to get support from meat-eaters who consider it cruel and unnecessary. I know the subject has been mentioned en passant in some Forum posts and it’s a subject that may figure in European reforms…
I’m particularly concerned with Brazil, since it’s such a large exporter—but the same applies to Australia, too. At least two organizations (Fórum Nacional de Proteção e Defesa Animal—FNDPA—and Mercy for Animals) working on legal measures to ban the practice in Brazil have received support from EA—btw, one can sign a petition on MfA’s website. But my (perfunctory) knowledge of Brazilian politics and law makes me skeptical that this could work without external pressure.
So it’s on!
The Effective Thesis Exceptional Research Award (that’s what the website calls it), or High-Potential Award (that’s how it shows up on Google), or maybe just the Award (what apparently everyone calls it) is open to submissions until Sep 2022.
(I’m pretty sure there’s a top post coming, but I thought it’d be cool to mention it in shortform right away. Feels like a scoop)
Shouldn’t we have more EA editors in Philpapers categories?
Philpapers is this huge index/community of academic philosophers and texts. It’s a good place to start researching a topic. Part of the work is done by voluntary editors and assistants, who assume the responsibility of categorizing and including relevant bibliography; in exchange, they are constantly in touch with the corresponding subject. Some EAs are responsible for their corresponding fields; however, I noticed that some relevant EA-related categories currently have no editor (e.g.: Impact of Artificial Intelligence). I wonder: wouldn’t it be useful if EAs assumed these positions?
I’m not familiar with academic philosophy/how Philpapers is typically used. Can you say more about what you’d expect the positive outcome(s) to be if EAs volunteer to help out? I can imagine that this might improve the quality of papers on EA-adjacent topics, but your mention of volunteers always being up-to-date on the literature makes me wonder if you’re also thinking of beneficial learning for the volunteers themselves.
I’m thinking of both: adequately categorizing papers may have an indirect impact on how other scholars select their bibliographical references; and the volunteer editors themselves may acquire (or anticipate acquiring—I suppose that, if a paper is really good, you’ll likely end up finding it anyway) knowledge of their corresponding domains.
Of course, perhaps the answer is “it’s already hard enough to catch up with the posts on such-and-such subjects in the EA and rationalist community, and read the standard literature, and do original work, etc. - and you still want me to work as a quasi-librarian for free?”
This suggestion is worth posting in other places. You could consider emailing places like Forethought or FHI that have a lot of philosophers, or posting in FB groups like “EA Fundamental Research” or “EA Volunteering”.
Too bad I don’t have a Facebook account anymore… I’d appreciate it if someone else (who found it useful, of course) could raise this subject in those groups.
(man, do I miss the memes!)
Or I could just post it as a Question in this forum, to get more visibility.
Thanks.
Why don’t we have more advice / mentions about donating through a last will—like Effective Legacy? Is it too obvious? Or absurd?
All the other cases I found of someone discussing charity & wills were about the dilemma “give now vs. (invest and give) post mortem.” But we can expect that even GWWC pledgers save something for retirement or emergencies; so why not leave a part of it to the most effective charities, too? Besides, this may attract non-pledgers equally: even if you’re not willing to sacrifice a portion of your consumption for the sake of the greater good, why not give away those retirement savings in case you die before spending them all?
Of course, I’m not saying this would be super-effective; but it might be a low-hanging fruit. Has anyone explored this “path”?
I agree with you that this is an important area. I wrote a whole essay on the technical aspects of planned giving. https://medium.com/@aaronhamlin/planned-giving-for-everyone-15b9baf88632
I have some more related essays here: https://www.aaronhamlin.com/articles/#philanthropy
Thanks. Your post strengthened my conviction that EAs should think about the subject—of course, the optimal strategy may vary a lot according to one’s age, wealth, country, personal plans, etc.
But I still wonder: a) would similar arguments convince non-EA people? b) why don’t EAs (even pledgers) do something like that (i.e., take their deaths into account)? Or if they do it “discreetly,” why don’t they talk about it? (I know most people don’t think too much about what is gonna happen if they die, but EAs are kinda different)
(I greatly admire your work, btw)
I’m aware of many people in EA who have done some amount of legacy planning. Ideally, the number would be “100%”, but this sort of thing does take time which might not be worthwhile for many people in the community given their levels of health and wealth.
I used this Charity Science page to put together a will, which I’ve left in the care of my spouse (though my parents are also signatories).
Why don’t we have an “Effective App”?
See, e.g., Ribon—an app that gives you points (“ribons”) for reading positive news (e.g. “handicapped walks again thanks to exoskeleton”) sponsored by corporations; then you choose one of the TLYCS charities, and your points are converted into a donation.
Ribon is a Brazilian for-profit; they claim to donate 70% of what they receive from sponsors, but I haven’t found precise stats. It has skyrocketed this year: from their reported impact, I estimate they have donated about US$33k to TLYCS—which is a lot by Brazilian standards. They intend to expand (they raised more than R$1 million—roughly US$250k—from investors this year) and will soon launch an ICO. Perhaps an EA non-profit could do even more good?
I’d never heard of this app before—thanks for bringing it to my attention!
The most prominent “EA donation” app I’m aware of is Momentum, which has multiple full-time employees and seems to be pushing hard to get American users. I don’t know what their user acquisition numbers are like thus far.
I love Momentum—to me, it’s like a kind of cosmic Pigouvian tax (“someone has to pay when Trump tweets, and this time it’s gonna be me”); it still demands some kind of commitment, though. Ribon is completely different: it’s not an app that only altruistic people use; actually, that’s why I didn’t really like it at first, because it doesn’t ask people to give anything or to be effective… but then, perhaps that’s why it scales well—particularly in societies without an altruistic culture. It’s a low-hanging fruit: we already see lots of ads on the internet, for free, and usually (most of us) don’t read beyond the headlines of news like “Shelly-Ann breaks a new record”… so why not gamify it a little (you have points, can gain “badges”, compete with your friends...) and make companies pay for your attention (ads) in donations?
The Life You Can Save is working with an app-development company called Meepo (which is doing pro bono work) to build a non-profit donation app, which is currently in beta. You can learn more about this project, and how to download the beta version, here.
Is there anything like EA Consulting for charities?
I mean, we do have:
(a) meta-charities (e.g., GW, SoGive...) which evaluate projects and organizations;
(b) charity incubators (Charity Entrepreneurship...), which select and incubate ideas for new EA projects;
(c) recommended charities that provide consulting services for policy-makers, such as Innovation in Government Initiative;
(d) some EAs working in consulting firms (EACN) - which, among other things, aim to nudge corporations and co-workers into more effective behavior.
But I didn’t find any org providing consulting services to non-EA charities aiming to make them more effective. Would it be low-impact? Or is it a low-hanging fruit?
One might think that this is basically the same job GW already does… Well, yeah, I suppose you would actually use a similar approach to evaluate impact, but it’s very different to provide a charity with recommendations that aim to help them achieve their own goals. This would be framed as assistance, not as some sort of examination; while GW’s stakeholders are donors, this “consulting charity” would work for the charities themselves. Besides, in order to prevent conflicts of interest, corporations often use different firms to provide auditing (which would be akin to charity evaluation—i.e., a service that is ultimately concerned with investors) and consulting services (which are provided to the corporation and its managers).
This could be particularly useful for charities in regions that lack an (effective) charity culture.
Update: an example of this idea is the Philanthropy Advisory Fellowship sponsored by EA Harvard—which has, e.g., made recommendations to Arymax Foundation on the best cause areas to invest in Brazil. But I believe an “EA Consulting” org would provide other services, and not only to funders.
You may have a look at https://docs.google.com/document/d/166I2puwCZl_GUhohq0JVZWJKABT9fbcIr2nwDRTIam4/edit
Thanks, I didn’t know Algosphere. Btw, I saw there are two allies in Sao Paulo. I’d like to get in touch with them, if that’s a possibility ;)
Yes, Ramiro, you may write to me at daoust514@gmail.com and I will transmit your demand to them.
I was thinking about Chad’s current situation and Vox’s piece on parliamentarism… Has anyone assessed (ITN or EA-style evaluation) political stability as a cause area?
I mean, it’s pretty relevant for peace (I guess most wars result from conflicts between factions or succession crises) and for a well-functioning government. People talk about the dangers of polarization, about why nations fail, or authoritarianism, or IIDM… It’s not neglected per se (it’s been the focus of some classical works in political philosophy & science), but I’m not sure all the low-hanging fruit has been picked; plus, thinking about interventions as increasing / decreasing political stability might help in assessing other areas (like IIDM).
I was thinking about Urukagina, the first monarch ever mentioned for his benevolence instead of his military prowess. Are there any common traits among such rulers? Should we write something like that Forum post on dark-trait rulers—but with the opposite sign? I googled a bit about benevolent kings (I thought it’d provide more insight than looking at 20th-century biographies), but, except maybe for enlightened despots, most of the guys (like Suleiman the Magnificent) in these lists are conquerors who just weren’t brutal and were kind law-givers to their people—which you could also say about Napoleon. I was thinking more about guys like Ashoka and Marcus Aurelius, who seem to have despised the hunger for conquest in other people and were actually willing to improve human welfare for moral reasons.
An objection to the non-identity problem: shouldn’t disregarding the welfare of non-existent people preclude most interventions on child mortality and education?
One objection against favoring the long-term future is that we don’t have duties towards people who don’t exist yet. However, I believe that, when someone presents a claim like that, what they probably want to state is that we should discount future benefits (for some reason), or that we don’t have a duty towards people who will only exist in the far future. But it turns out that such a claim apparently proves too much; it proves that, for instance, we have no obligation to invest in reducing the mortality of infants less than one year old over the next two years.
The most effective interventions in saving lives often do so by saving young children. Now, imagine you deploy an intervention similar to those of the Against Malaria Foundation—i.e., distributing bednets to reduce contagion. At the beginning, you spend months studying, then preparing, then you go to the field and distribute bednets, and then one or two years later you evaluate how many malaria cases were prevented in comparison to a baseline. It turns out that most cases of averted deaths (and disabilities and years of life gained) correspond to kids who had not yet been conceived when you started studying.
Similarly, if someone starts advocating an effective basic education reform today, they will only succeed in enacting it some years from now—thus we can expect most of the positive effects to happen many years later.
(Actually, for anyone born in the last few years, we can expect that most of their positive impact will affect people who are not yet born. If there’s any value in positively influencing these children, most of it will accrue to people who are not yet born.)
This means that, at the beginning of this project, most of the impact corresponded to people who didn’t exist yet—so, on this view, you were under no moral obligation to help them.
It’s also a significant problem for near-term animal welfare work: since the lifespan of broiler chickens is so short, almost any possible current action will only benefit future chickens.
A call for papers on longtermism in the journal Moral Philosophy and Politics: https://www.mopp-journal.org/go-to-main-page/calls-for-papers/
Should donations be counter-cyclical? At least as a “matter of when” (I remember a previous similar conversation on Reddit, but it was mainly about deciding where to donate). I don’t think patient philanthropists should “give now instead of later” just because of that (we’ll probably have worse crises), but it seems like frequent donors (like GWWC pledgers) should consider bringing their donations forward (particularly if their personal spending has decreased)—and also take into account expectations about future exchange rates. Does this make any sense?
One challenge will be that any attempt to time donations based on economic conditions risks becoming a backdoor attempt to time the market, which is notoriously hard.
I don’t think this is a big concern. When people say “timing the market” they mean acting before the market does. But donating countercyclically means acting after the market does, which is obviously much easier :)
Can Longtermists “profit” from short-term bias?
We often think about human short-term bias (and the associated hyperbolic discount) and the uncertainty of the future as (among the) long-termism’s main drawbacks; i.e., people won’t think about policies concerning the future because they can’t appreciate or compute their value. However, those features may actually provide some advantages, too – by evoking something analogous to the effect of the veil of ignorance:
They allow long-termism to provide some sort of focal point where people with different allegiances may converge; i.e., being left- or right-wing inclined (probably) does not affect the importance someone assigns to existential risk – though it may influence the trade-off with other values (think about how risk mitigation may impact liberty and equality).
And (maybe there’s a correlation with the previous point) it may allow for disinterested reasoning – i.e., if someone is hyperbolically less self-interested in what will happen in 50 or 100 years, then they would not strongly oppose policies to be implemented in 50 or 100 years – as long as they don’t bear significant costs today.
I think (1) is quite likely acknowledged among EA thinkers, though I don’t recall it being explicitly stated; some may even reply “isn’t it obvious?”, but I don’t believe outsiders would immediately recognize it.
On the other hand, I’m confident (2) is either completely wrong or not recognized by most people. If it’s true, we could use it to extract from people, in the present, conditional commitments to be enforced in the (relatively) long-term future; e.g., if present investors discount future returns hyperbolically, they wouldn’t oppose something like a Windfall Clause. Maybe Roy’s nuke insurance could benefit from this bias, too.
I wonder if this could be used for institutional design; for instance, creating or reforming organizations is often burdensome, because different interest groups compete to keep or expand their present influence and privileges—e.g., legislators will favor electoral reforms allowing them to be re-elected. Thus, if we could design arrangements to be enforced decades (how long?) after their adoption, without interfering with the current status quo, we would eliminate a good deal of the opposition; the problem then reduces to deciding what kinds of arrangements would be useful to design this way, taking into account uncertainty, cluelessness, value shift…
Are there any examples of existing or proposed institutions that try to profit from this short-term vs. long-term bias in a similar way? Is there any research in this line I’m failing to follow? Is it worth a longer post?
(One possibility is that we can’t really do that—this bias is something to be fought, not something we can collectively profit from; so, assuming the hinge of history hypothesis is false, the best we can do is to “transfer resources” from the present to the future, as sovereign funds and patient philanthropy advocates already do)
Philosophers and economists seem to disagree about the marginalist/arbitrage argument that a social discount rate should equal (or at least be majorly influenced by) the marginal social opportunity cost of capital. I wonder if there’s any discussion of this topic in the context of negative interest rates. For example, would defenders of that argument accept that, as those opportunity costs decline, so should the SDR?
Yes, governments lower the SDR as the interest rate changes. See for example the US Council of Economic Advisers’s recommendation on this three years ago: https://obamawhitehouse.archives.gov/sites/default/files/page/files/201701_cea_discounting_issue_brief.pdf
While the “risk-free” interest rate is roughly zero these days, the interest rate to use when discounting payoffs from a public project is the rate of return on investments whose risk profile is similar to that of the public project in question. This is still positive for basically any normal public project.
Assessing the impact of Brazilian donors and EA community
We’re thinking about testing whether our actions to promote EA this year (translations, meetings, networking...) have led to an observable increase in donations from Brazil—particularly outside the group of more “engaged” members. Even if we haven’t observed an increase in high-quality engagement (such as GWWC pledges), we do see an increase in some “cheaper signals,” such as the number of Facebook group members and the amount of donations to AMF (which, curiously, are concentrated in basically two metropolitan areas—Sao Paulo and Porto Alegre; I know there are some EAs living in Minas and in the North, but currently I’m not aware of any donation coming from Rio or Brasilia, despite both being high-income metropolitan areas). We’d like to test whether that’s a coincidence.
I would appreciate any suggestions/help on this. I think it would demand more than EA survey data. First, we thought about requesting data from EA charities about the amount of donations:
1.1 from Brazil between Oct 23rd, 2018 and Oct 23rd, 2019 (controlling for month), compared with the amount of donations from the previous year;
1.2 from similar countries (I’m not sure which countries we should pick: Argentina, Chile, Mexico, S. Africa, Portugal?... China?), in the same periods—to check whether any of them showed a similar increase/decrease.
Second, I wonder if we could get in touch with at least some identified donors and ask them how they came to the decision of donating. Possibly, tracking people using the names they provided to those websites might be considered too invasive, but I wonder if the organization itself could send an e-mail inviting them to get in touch with us.
Hi Ramiro.
I think that Point 1 will be difficult to test in this way. What you want to do sounds a bit like a regression discontinuity analysis, but (as I understand it) there isn’t really a sharp time point for when you started promoting EA more; the translations, meetings, etc. increased steadily since Oct 2018, right? I think this will make it harder to see the effect during the first year that you are scaling up outreach (particularly if compared by month, as there is probably seasonal variation in both donation and outreach). Brazil has also had a fairly distinct set of newsworthy events (i.e. an election and major political change, the arrest of two former presidents during ongoing corruption scandals, the Amazon fires, etc.) over the same time period you increased outreach. If these events influence donation behaviour, then comparisons to other countries might not be particularly relevant (and it further complicates your monthly comparison). I think a better way to try to observe a quantitative effect would be to compare the total donations for three years: pre-Oct 2018, Oct 2018-Oct 2019, and post-Oct 2019 (provided you keep your level of outreach similar for the next year, and are patient). Aggregating by year will remove the seasonal effect on donations and some of the effect of current events, and if this shows an increase for 2019-2020, then you could (cautiously) look at comparing the monthly donation behaviour (three years of data will be better to compensate for monthly variation).
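The yearly-aggregation idea can be sketched in a few lines; the record format and the numbers below are made up for illustration, not actual donation data:

```python
from collections import defaultdict

def outreach_year_totals(donations):
    """Sum (ISO-date, amount) donation records into Oct-to-Oct periods.

    A record dated Oct 2018 through Sep 2019 counts toward the 2018
    period, matching the Oct 2018 start of outreach discussed above.
    """
    totals = defaultdict(float)
    for date, amount in donations:
        year, month = int(date[:4]), int(date[5:7])
        # Months Jan-Sep belong to the period that started the previous October.
        period = year if month >= 10 else year - 1
        totals[period] += amount
    return dict(totals)

# Hypothetical records:
donations = [("2018-11-03", 50.0), ("2019-02-10", 120.0), ("2019-10-15", 80.0)]
print(outreach_year_totals(donations))  # {2018: 170.0, 2019: 80.0}
```

Comparing these period totals across pre- and post-outreach years sidesteps the monthly seasonality problem, at the cost of needing an extra year of patience.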
At this point, I think tracking your impact more subjectively by using questionnaires and interviews would produce more useful information. Not sure if charities would link their donors to you (maybe getting the contact of Brazilians who report donating in the EA survey would be more likely), but you could also try adding an annual questionnaire link to your newsletter/facebook/site like 80,000 Hours does. I’d specifically try to ask people who made their first donations, or who increased their donations, this year what motivated them to do so.
Idea for free (feel free to use, abuse, steal): a tool to automate donations + birthday messages. Imagine a tool that captures your contacts and their corresponding birthdays from Facebook; then you make (or schedule) one (or more) donations to a number of charities, and the tool customizes a birthday message with a card mentioning that you donated $ in their honor and sends it on the corresponding birthday.
For instance: imagine you use this tool today; it’ll then map all the birthdays of your acquaintances for the next year. Then you select, e.g., donating $1000 to AMF and 20 friends or relatives you like; the tool will write 20 draft messages (you can choose among different templates the tool suggests… there’s probably someone already doing this with ChatGPT), one for each of them, including a card certifying that you donated $50 to AMF in honor of their birthday, and send the message on the corresponding date (the tool could let you revise it one day before). There should be some options to customize messages and charities (I think it might be important to choose a charity that the other person would identify with a little bit—maybe Every.org would be more interested in this than GWWC). So you’d save a lot of time writing nice birthday messages for those you like. And, if you only select effective charities, you could deduct that amount from your pledge.
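The core scheduling logic is simple; here is a minimal sketch, where the names, the message template, and the even budget split are all hypothetical (a real tool would pull contacts from Facebook and integrate with a donation platform):

```python
import datetime

def schedule_birthday_donations(contacts, charity, total_amount, today=None):
    """Split a donation budget evenly among contacts and draft a dated
    message for each upcoming birthday. Purely illustrative, no real API."""
    today = today or datetime.date.today()
    share = total_amount / len(contacts)
    schedule = []
    for name, (month, day) in contacts.items():
        birthday = datetime.date(today.year, month, day)
        if birthday < today:  # already passed this year -> schedule next year
            birthday = birthday.replace(year=today.year + 1)
        message = (f"Happy birthday, {name}! In your honor, "
                   f"${share:.2f} was donated to {charity}.")
        schedule.append((birthday, message))
    return sorted(schedule)

plan = schedule_birthday_donations(
    {"Ana": (5, 14), "Bruno": (12, 1)}, "AMF", 100.0,
    today=datetime.date(2023, 6, 1))
for date, msg in plan:
    print(date, msg)
```

Everything after this skeleton (templates, charity matching, the one-day-before review step) is product polish on top of a date lookup.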
Is there anything like that already?
I was recently reading about the International Panel for Social Progress: https://www.ipsp.org/ I had never heard of it before. Which surprised me, since it’s kind of like the IPCC, but for social progress. I got the impression that it somehow failed—in reaching significant consensus, in influencing policy… but why?
I was reading about Meghan Sullivan’s “principle of non-arbitrariness,” and it reminded me of Parfit’s argument against subjectivist reasoning in On What Matters… But why are philosophers (well, and people in general) against arbitrariness? I mean, I do agree it’s a tempting intuition, but I’ve never seen (a) a formal statement of what counts as arbitrary (is “arbitrary” arbitrary?), or (b) an a priori argument against it. Of course, if someone’s preference ordering varies totally randomly, we can’t represent them with a utility function, and perhaps we could accuse them of being inconsistent. But that’s not what philosophers’ examples usually chastise: if one has a predictable preference for eating shrimp only on Fridays, or disregards pain only on Thursdays, there’s no instability here – you can represent it with a utility function (having time as a dimension).
There isn’t even any a priori feature allowing us to say that such a preference is evolutionarily unstable, since that can only be assessed when we look at whom our agent will interact with. Which makes me think that arbitrariness is not a priori at all – it depends on social practices such as “giving reasons” for actions and decisions (I don’t think Parfit would deny that; I don’t know about Sullivan). There might be a thriving community of people who only love shrimp on Fridays, for no reason at all; but, if you don’t share this abnormal preference, it might be hard to model their behavior and to cooperate with them—at least, in this example, when it comes to gastronomic enterprises. On the other hand, if you can just offer a story (even a barely believable one: “it’s a psychosomatic allergy”) to explain this preference, it’s OK: you’re just another peculiar human. I can understand you now; your explanation works as a salience that allows me to better predict your behavior.
I suspect many philosophical (a priori-like) intuitions depend more on things like Schelling points (i.e., the problem of finding salient solutions that social practices can converge to) than most philosophers would admit. Of course, late-Wittgenstein scholars are OK with that, since for them everything is about forms of life, language games, etc. But I think relativistic / conventionalist philosophers unduly trivialize this feature, and so neglect an important point: whatever counts as arbitrary is not, well, arbitrary – and we can often demonstrate that what we call “arbitrary” is suboptimal, inconsistent with other preferences or intuitions, or hard to communicate (and so a poor candidate for a social norm / convention / intuition).
The Global Catastrophic Risk Institute is looking for collaborators and advisees!
The Global Catastrophic Risk Institute (GCRI) is currently welcoming inquiries from people who are interested in seeking its advice and/or collaborating with it. These inquiries can concern any aspect of global catastrophic risk, but GCRI is particularly interested in hearing from people interested in its ongoing projects. These projects include AI policy, expert judgement on long-term AI, forecasting global catastrophic risks, and improving China-West relations.
Participation can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get involved by contributing to ongoing dialogue, collaborating on research and outreach activities, and co-authoring publications. Inquiries are welcome from people at any career point (including students), from any academic or professional background, and from any place in the world. People from underrepresented groups are especially encouraged to reach out.
Find more details here!
Is it just me or is the landmass on that globe not to scale?
Is that what you’re concerned with? I’m trying to find out what this blue mist trail on the right is. It looks like Earth’s become a comet.
What I miss when I read about the morality of discounting is a disanalogy explaining why hyperbolic or exponential discount rates might be reasonable for individuals with limited lifespans and certain opportunity costs, but not for intertemporal collective decision-making. Then we could understand why pure discounting is tempting, and maybe even realize there’s something temporal impartiality doesn’t capture. If there’s any literature about this, I’d like to know. Please, not the basic heuristics-and-biases stuff—I did my homework.

For instance, if human welfare were something that could grow like compound interest, it would make sense to talk about pure exponential discounting. If you could guarantee that all of the dead in the battle of Marathon would, in expectation, have added good to the overall happiness (or whatever you use as a goal function) in the world and transmitted it to their descendants, then you could say that those deaths are a greater evil than the millions of casualties in WW2; you could think of that welfare as “investment” instead of “consumption.” But that’s implausible.

On the other hand, there’s a small grain of truth here: a tragedy happening in the past will reverberate longer in the world’s historical trajectory. That’s just causality plus temporal asymmetry. This makes me think about cluelessness… I do tend to think good facts lead to better consequences, in general; you don’t have to be an optimist about it: bad facts just tend to lead to worse consequences, too. The opposite thesis, that a good/bad fact is as likely to cause good as evil, seems quite implausible. So you might be able to think about goodness as investment a little bit; instead of pure discounting, maybe we should have something like a proxy for “relative impact on world trajectories”?
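To make the exponential-vs-hyperbolic contrast concrete, here is a toy comparison; the 5% parameter is arbitrary, chosen only so the two curves agree at year one. Exponential discounting is exactly the inverse of compound growth, which is why it pairs naturally with the “welfare as investment” picture above:

```python
def exponential_discount(t, rate):
    """Exponential discount factor: a fixed proportional decay per year,
    the mirror image of compound interest."""
    return 1.0 / (1.0 + rate) ** t

def hyperbolic_discount(t, k):
    """Hyperbolic discount factor: falls steeply at first, then flattens."""
    return 1.0 / (1.0 + k * t)

# With the same 5% parameter the two agree at t = 1 but diverge over
# long horizons: the hyperbolic curve keeps far more weight on year 100.
for t in (1, 10, 100):
    print(t, round(exponential_discount(t, 0.05), 4),
          round(hyperbolic_discount(t, 0.05), 4))
```

The long-horizon divergence is the crux: an exponential discounter effectively zeroes out the far future, while a hyperbolic one does not, which is part of why the individual-vs-collective disanalogy matters.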
I just responded to UNESCO’s public online consultation on the draft of a Recommendation on AI Ethics—it was longer and more complex than I expected.
I’d really love to know what other EAs think of it. I’m very unsure about how useful it is going to be, particularly since the US left the organization in 2018. But it’s the first recommendation of a UN agency on this subject, the text addresses many interesting points (despite greatly emphasizing short-term issues, it does address “long-term catastrophic harms”), I haven’t seen many discussions of it (except from the Montreal AI Ethics Institute), and the deadline is July 31.
Is ‘donations as gifts’ neglected?
I enjoy sending ‘donations as gifts’ - i.e., donating to GD, GW or AMF in honor of someone else (e.g., as a birthday gift). It doesn’t actually affect my overall budget for donations; but this way, I try to subtly nudge this person to consider doing the same with their friends, or maybe even becoming a regular donor.
I wonder if other EAs do that. Perhaps it seems very obvious (in some cultures where donations are common), but I haven’t seen any remark or analysis about it (well, maybe I’m just wasting my time: only one friend of mine said he enjoyed his gift, and I don’t think he has ever done it himself), and many organizations don’t provide an accessible tool for this.
P.S.: BTW, my birthday is on May 14th, so if anyone wants to send me one of these “gifts”, I’d rather have you donating to GCRI.
I don’t know what you mean by ‘neglected’. I know a lot of people who say they want this and a similar number who are deeply offended by the concept. (Personally, I’m against the idea of giving charitable donations to my favourite charity as a gift, although I’d consider a donation to the recipient’s favourite charity.)
Thanks. Maybe it’s just my blindspot. I couldn’t find anyone discussing this for more than 5min, except for this one. I googled it and found some blogs that are not about what I have in mind
I agree that donating to my favourite charity instead of my friend’s favourite one would be impolite, at least; however, I was thinking about friends who are not EAs, or who don’t usually donate at all. It might be a better gift than a card or a lame souvenir, and it might interest this friend in EA charities (I try to think about which charity would interest the person most). Is there any reason against it?
If your friend doesn’t donate normally, then probably their preferred person to spend money on is themself. It still seems rude to me to say you’re giving them a gift, which should be something they want, and instead give them something they don’t want.
For example, my mother likes flowers. I normally get her flowers for mother’s day. If I switch to giving her a donation to AMF instead of buying her flowers, she will be counterfactually worse off—she is no longer getting the flowers she enjoys. I don’t think that kind of experience would make her more likely to start donating, either.
Did the UNESCO draft recommendation on AI principles involve anyone concerned with AI safety? The draft hasn’t been leaked yet, and I didn’t see anything from the EA community—maybe my bubble is too small. https://en.unesco.org/artificial-intelligence
Does anyone else consider the case of Verein KlimaSeniorinnen Schweiz and Others v. Switzerland (application no. 53600/20) at the European Court of Human Rights possibly useful for GCR litigation?
So, I saw Vox’s article on how air filters create huge educational gains; I’m particularly surprised that indoor air quality (actually, indoor environmental conditions in general) is rather neglected everywhere (except, maybe, in dangerous jobs). But then I saw this (convincing) critique of the underlying paper.
It seems to me that this is a suitable case for a blind RCT: you could install fake air filters to control for placebo effects, etc. But then I googled a little bit… and I haven’t found significant studies using blind RCTs in the social sciences and similar cases. I wonder why; at least for cases like this, it doesn’t seem more unethical or harder to do than in medical trials.
I was thinking about the EA criticism contest… did anyone submit something like “FTX”? Then give that person a prize! Forecaster of the year! And second place for the best entries on accountability and governance. If not… then maybe it’s worth highlighting: all of those “critiques” didn’t foresee the main risks that materialized in the community this year. Maybe if we had framed it as a forecasting contest instead… And yet, we have many remarkable forecasters around, and apparently none of them suggested it was dangerous to place so much faith in one person. Or maybe it’s just a matter of attention. So I ask: what is the most impactful negative event that will happen to the EA community in 2023?
A criticism contest submission related to FTX was highlighted by a panelist, but did not win a prize: https://medium.com/@sven_rone/the-effective-altruism-movement-is-not-above-conflicts-of-interest-25f7125220a5
I spent SO much time trying to find this entry after the FTX news broke. It didn’t forecast FTX fraud, but it has still absolutely been elevated by recent events. You should re-up this on the forum to see if more people will engage with it now.
Posted: https://forum.effectivealtruism.org/posts/T85NxgeZTTZZpqBq2/the-effective-altruism-movement-is-not-above-conflicts-of
I agree.
I’m a bit wary of awarding a prize to any post mentioning FTX without regard to how accurate it is.
This post talking about the risks of FTX aged very well, although it wasn’t part of the contest. It was fairly ignored at the time, but I did agree with it and posted so in the comments.
I expect EA to get cautious around financial stuff for a while (hopefully), so another scandal would come from somewhere else. Perhaps a prominent figure will be exposed as an abuser of some kind?
How consistent are “global risk reports”?
We know that the track record of pundits is terrible, but many international consultancy firms publish annual “global risks reports,” like the WEF’s, listing the main global risks (e.g., a top 10) for a certain period (e.g., two years). I was wondering if anyone has measured their consistency; I mean, if you publish in 2018 a list of the top 10 risks for 2019 and 2020, you should expect many of the same risks to show up in your 2019 report (i.e., if you are a reliable predictor, risks in the report for year y should appear in the report for year y+1). Hasn’t anyone checked this yet?
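A first-pass version of this consistency check is just the overlap between consecutive top-k lists (Jaccard similarity); the risk labels below are illustrative, not taken from any actual report:

```python
def topk_overlap(list_a, list_b):
    """Jaccard similarity of two top-k lists: |intersection| / |union|,
    ignoring order. 1.0 means identical sets; 0.0 means no risk repeats."""
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b)

# Hypothetical top-4 lists from two consecutive reports:
report_2018 = ["extreme weather", "cyberattacks", "data fraud", "migration"]
report_2019 = ["extreme weather", "cyberattacks", "climate inaction", "migration"]
print(topk_overlap(report_2018, report_2019))  # 0.6
```

A stricter variant could weight by rank (e.g., a rank correlation on the shared items), but even plain set overlap, computed year over year, would show whether a firm’s two-year forecasts actually persist into its next report.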
If not, I’ll file this under “pet projects I probably won’t have time for in the foreseeable future.”
I guess any report must be considered on its own terms but I’ve been pretty down on this stuff as a category ever since I heard the Center for Strategic and International Studies was cheerleading the idea that there were WMDs in Iraq.
Opportunity for Austrians
Article by Seána Glennon: “In the coming week, thousands of households across Austria will receive an invitation to participate in a citizens’ assembly with a unique goal: to determine how to spend the €25 million fortune of a 31-year-old heiress, Marlene Engelhorn, who believes that the system that allowed her to inherit such a vast sum of money (tax free) is deeply flawed.”
Are we in an Original Position regarding the interests of our descendants?
If you:
Had to make a decision about the basic structure of a society where your distant descendants will live (in 200 or 2000 years), and
only care about their welfare, and
don’t know (almost) anything about who they will be, how many, how their society will be structured, etc.,
Then you are under some sort of veil of ignorance, in a situation quite similar to Rawls’s Original Position… with one major difference: it’s not an abstract thought experiment for ideal political theory.
What led me to this is that I suspect the welfare of my descendants will likely depend more on the basic structure of their society than on any amount of resources I try to transfer to them—but I’m not sure about that: there are some examples of great wealth successfully transferred across many generations.
I’m not sure Rawls’s theory of justice would follow from this, but it’s quite possible: when I have in mind the welfare of a subset of unidentified future individuals, I feel tempted to prefer that their society abide by something like his two principles of justice. Following Harsanyi, it’s also tempting to prefer something like average utilitarianism (which, in this context, converges with total utilitarianism, because we are abstracting away population variation).
After thinking this through, I didn’t notice any of my major philosophical opinions changing, but I was surprised that I had never seen this argued in the literature.
Maybe because it’s not such a good way of reasoning about future generations: there are more effective ways of improving future welfare than fostering political liberalism. But I guess this is the sort of reasoning we’d expect from something like a reciprocity-based theory of longtermism.
Two researchers at the RAND Corporation recently argued for a related idea. From our Future Matters summary:
T20 Brasil | T20 BRASIL CALL FOR POLICY BRIEF ABSTRACTS: LET’S RETHINK THE WORLD
The T20 Brasil process will put forward policy recommendations to G20 officials involved in the Sherpa and Finance tracks in the form of a final communiqué and task forces recommendations.
To inform these documents, we are calling upon think tanks and research centres around the world – this invitation extends beyond G20 members – to build and/or reach out to their networks, share evidence, exchange ideas, and develop joint proposals for policy briefs. The latter should put forward clear policy proposals to support G20 in addressing global challenges.
Selection criteria
Policy briefs must be related to the 36 subtopics that have been selected based on (i) the suggestions received from more than 100 national and foreign think tanks and research centres that have already expressed their interest in engaging with the T20 Brasil process and activities and (ii) the three priorities spelt out by the G20 Brazil presidency. These subtopics are organised under the six Task Forces themes.
My research group is designing a course on global risks for university students in Brazil. I am looking for syllabi and teaching materials that could help inspire us. Right now I am using the WEF report, the Global Challenges report, and Legal Topics in Effective Altruism, and taking a look at the more practical topics in teaching materials from GPI. But I would like to see something from CSER, maybe? Does anyone have any tips?
There might be something useful here: https://forum.effectivealtruism.org/posts/Y8mBXCKmkS9eBokhG/ea-syllabi-and-teaching-materials
Send me a DM if you’re interested—I’d be happy to provide a bunch of resources and to put you in contact with some people who could help.
Let me share SBTi’s requests for feedback:
Financial Institutions Net-Zero (FINZ) Standard
Experts from the finance sector, academia, and civil society worldwide are invited to review and provide feedback on the Draft Financial Institutions Net-Zero (FINZ) Standard. The public consultation survey will be open until September 30.
The primary aims of this consultation survey are to gather input from external stakeholders on the FINZ Standard—Consultation Draft v0.1, with particular focus on:
The clarity
Specific approaches to:
Evidencing entity-level commitments and leadership
Determining and identifying exposure and portfolio emissions
Portfolio climate alignment target
Emissions-intensive sector targets
Reporting
The SBTi’s direction of travel regarding financial institutions
Areas of support and improvement
Complete the survey now and contribute to the development of this essential standard.
The SBTi will also host three in-person workshops at Climate Week NYC and in London to gather expert insight on two important topics: neutralization and net-zero finance.
The workshops will be held on the following dates:
The role of carbon dioxide-removal in corporate net-zero transitions: New York City, September 23 | 3:00-6:30pm ET
Financial Institutions Net-Zero Standard Consultation: New York City, September 25 | 3:30-5:30pm ET
The role of carbon dioxide-removal in corporate net-zero transitions: London, October 8 | 2:30-6:00pm BST
Experts in each field are invited to participate by registering their interest. The precise locations will be shared with selected attendees. Register your interest here.
Scope 3 Discussion Paper feedback form
The SBTi is in the process of revising its Corporate Net-Zero Standard and one of the channels through which stakeholders are encouraged to engage is via the Scope 3 Discussion Paper feedback form. The Scope 3 Discussion Paper outlines the SBTi’s initial thinking on potential changes being considered for scope 3 target setting, including key principles and concepts.
In case anyone is interested, Peter Turchin will show up on Monday in a study group I joined
The Sciences of Ethics and Political Philosophy Reading Group
Disentangling the evolutionary drivers of social complexity: A comprehensive test of hypotheses
Peter Turchin
Monday, November 13
2 PM [WET/UTC]
Online
In this session, the group will discuss the paper by Peter Turchin et al. (2022), “Disentangling the evolutionary drivers of social complexity: A comprehensive test of hypotheses” (Science Advances, 8(25). DOI: 10.1126/sciadv.abn3517). Session with the confirmed presence of Peter Turchin.
Anyone interested in participating can send an email to Filipe Faria: filipefaria@fcsh.unl.pt.
Two “non-spoilers” for the movie Oppenheimer
Since the Bulletin of the Atomic Scientists and the Elders have been talking about this lately…
1) “Now I become Death, the destroyer of worlds”
The famous passage from the Bhagavad Gita (BG), the Hindu religious epic. It suggests that Nolan is associating Oppie with the terrible form of Vishvarupa – call this the “promethean” interpretation. But Oppie is actually more similar to prince Arjuna: the hero with a crisis of conscience who doesn’t want to join the battlefield of Kurukshetra because it will bring uncontrollable destruction—but ends up doing it anyway, because that’s his destiny, as explained by Krishna / Vishnu, the “destroyer of worlds.” This “fatalistic” interpretation is reinforced by other scenes – e.g., Oppie’s visions of destruction, and a conversation where President Truman basically tells Oppie that he’s not that important…
2) Fermi paradox
Enrico Fermi, one of the brightest among so many geniuses on screen, doesn’t get enough screen time to state his famous paradox. Given that the universe is 13.7 billion years old, that there are so many stars in the galaxy, and that surely many of them are able to evolve intelligent life just like ours… where is everyone? Surely we should be seeing evidence of alien life somewhere by now—like radio waves, space structures, or a party invitation. So why the silence? Are aliens avoiding us?
One of the main explanations is that life might be self-defeating: as technology progresses, a species’ capacity to destroy itself increases faster than its capacity to mitigate that risk.
So, ok, this movie is astonishing… but dear Chris Nolan, if you ever consider extending it or turning it into a series… there are many things you might want to do. But two short scenes explaining to the viewer (1) the BG quotation and (2) Fermi’s paradox would greatly improve the understanding of one of the tenets of the movie—Oppie’s concern that they may start an unstoppable “chain reaction that’ll consume the world”.
I thought it was a good movie, but was sad at how little it focused on:
The actual making of the bomb
The attempts of scientists to influence the politics of whether and how to use it
Moral regret
Is this a setback in animal welfare laws? https://www.publico.pt/2023/01/18/sociedade/noticia/ministerio-publico-pede-inconstitucionalidade-norma-lei-maus-tratos-animais-2035566 I was surprised that Portuguese constitutional legal doctrine prevented the criminalization of animal torture.
https://www.tribunalconstitucional.pt/tc/acordaos/20210867.html There are quite definitive precedents
The number of Global Ultra High Net Worth Individuals fell by 6% this year, according to Wealth-X—after steady increases in recent years. Thus, I’m afraid the lack of funding from SBF may be the beginning of a trend—at least for community building and longtermism.
CONSTITUTIONALIZING THE LONG-TERM FUTURE—ESTABLISHING INTERGENERATIONAL JUSTICE IN NATIONAL CONSTITUTIONS
On Sep 19 and Sep 20
This conference might interest longtermists in general (and people in Legal Priorities specifically)
https://ifilnova.pt/en/events/constitutionalizing-the-long-term-future/
Essay Prize of the Portuguese Philosophy Society—Philosophical papers on Artificial Intelligence. I’m not sure this will interest top researchers in AI philosophy, but maybe someone might see this as low-hanging fruit: this year’s “PRÉMIO DE ENSAIO DA SOCIEDADE PORTUGUESA DE FILOSOFIA” is about the challenges AI poses for “the philosophical understanding of the human”.
“Que desafios pode a inteligência artificial colocar à compreensão filosófica do humano?” (“What challenges can artificial intelligence pose to the philosophical understanding of the human?”) Link: https://www.spfil.pt/regulamento_premio_ensaio_spf Deadline: February 2023. Prize: €3,000.
Contingent conventions and the Tragedy of “Happy Birthday lock-in”
Will and Rob were talking about how the idea that there’s an inevitable convergence in moral values is wrong, and they mention some examples of contingencies. The first is the “Tragedy of ‘Happy Birthday’ lock-in”:
[Nuka zaria: change the human trajectory by singing a different song for birthdays. My current suggestion is Weird Al Yankovic’s Happy Birthday, but maybe something more optimistic and simple would be nice, too.
(on the other hand, my new s-risk is that we shift to this other attractor that haunts Brazilian birthday parties)]
Their second example is neckties:
(Nuka zaria: let’s all shift to wearing bandanas or gaucho scarves, which are more convenient and useful.)
But then, Will says something that bothers me: “There’s just this fundamental arbitrariness.” One interpretation of this sentence is true: you can’t predict in advance what sort of fancy item of clothing an elite will adopt, or what melody will be the most performed in the world; i.e., it’s hard to predict what precise convention will emerge. But it’s certainly not true that you can’t explain them in hindsight (ok, hindsight is 20/20). Moreover, and here I take some risk, I think it’s not true that one can’t identify, in advance, features of the convention that will be adopted – i.e., what counts as a salient point of attraction is not random.
The Hill sisters (a kindergarten principal and a composer) developed the song “Good morning to all” (a predecessor to “Happy Birthday to you”) as something that children would find easy to sing – i.e., they optimized for simplicity and used good old trial and error. Thus, it probably became so popular precisely because kids loved it, and because nowadays birthday parties are made for kids to feel special. That’s not what I would call “arbitrary”—it’s certainly not random.
I think the explanation for neckties is a bit different. The French liked the knotted neckerchiefs of Croatian mercenaries, and as Louis XIII and Louis XIV started wearing lace cravats, the nobility followed suit. So what began as a useful piece of cloth to close your jacket ended up being copied by a foreign elite because it was seen as a fancy ornament; and then it evolved to more and more complex designs precisely to signal its decorative function, so distinguishing the upper class from the commoner. Again, I wouldn’t call it random.
This, I guess, is good news: arbitrariness is not so pervasive that we cannot consciously influence the future.
Might “A Beacon in the Galaxy” be our new “Three-Body Problem” (by Cixin Liu)?
This paper proposes to transmit an “updated Arecibo-like” message to a star cluster near the galaxy’s center, a “selected region of the Milky Way which has been proposed as the most likely for life to have developed”.
Caleb Scharf summarizes the issue here. Even if we set aside the possibility of conflict, maybe discussions on Space Governance should include how we might communicate with other types of intelligent life—like “at least don’t mention that we kill animals”.
Is inequality neglected in EA?
I am inclined to answer “no,” because I’ve seen the subject pop up in some discussions on economics in this Forum… on the other hand, I’ve also seen some EAs dismiss matters of economic distribution as secondary—if not an obstacle to economic progress. I remember seeing this subject figure in some critiques of the movement, or mentioned en passant when the subject is billionaires’ philanthropy. Anyway, I’d like to document and share here some of my impressions from a 30-minute search on the subject.
My attention was recently drawn to the matter thanks to this survey showing a consensus in IGM Forum (from *before* the pandemic – though the results were just released this week) that inequality of income and wealth is a danger to capitalism and to democracy. It fits CORE’s survey among students on “what is the most pressing problem economists should address”. Though it is evidence of the importance of the matter, it also suggests that it’s not neglected.
Of course, inequality is particularly relevant to studying and fighting poverty—as shown in this post from GWWC’s Hazell and Holmes. However, the subject probably also impacts the trajectory of our societies, as this somewhat neglected GPI working paper / forum post argues: “we have instrumental reason to reduce economic inequality based on its intertemporal effects in the short, medium and the very long term” […] “because greater inequality could increase existential risk”.
Update: new IGM forum survey shows that less than 10% of the consulted economists disagree, and most of them agree, that the increasing share of income and wealth among the richest people in a number of advanced countries is:
a) giving significantly more political power to the wealthy (90% - weighted by confidence);
b) having a significantly negative effect on intergenerational social mobility (79%);
c) a major threat to capitalism (61%).
Does anyone have any idea / info on what proportion of infected cases are getting Covid-19 inside hospitals?
(Epistemic status: low, but I didn’t find any research on this, so the hypothesis deserves a bit more attention)
1. Nosocomial infections are serious business. Hospitals are basically big buildings full of dying people and the stressed personnel who go from one bed to another trying to prevent it. Throw a deadly and very contagious virus into the mix, and it becomes a slaughterhouse.
2. Previous coronaviruses spread rapidly in hospitals and other care units. That left Southeast Asia somewhat prepared for similar epidemics (maybe I’m wrong, but in the news their medical staff are always in hazmat suits, unlike most health workers in the West). Maybe this is a neglected point in Southeast Asia’s successful approach?
3. I know hospitals have serious protocols to avoid it… but it takes only a few careless cleaning staff, or a patient’s relatives going to the cafeteria, or a badly designed airflow, to ruin everything. Just one hospital chain in Brazil accounts for most of the deaths in São Paulo, and 40% of the national total.
https://www.theguardian.com/world/2020/mar/24/woman-first-uk-victim-die-coronavirus-caught-hospital-marita-edwards
Did anyone see the spread of Covid through nursing homes coming before? It seems quite obvious in hindsight—yet, I didn’t even mention it above. Some countries report almost half of the deaths from those environments.
(Would it have made any difference? I mean, would people have emphasized patient safety, etc.? I think that’s implausible, but has anyone tested whether this isn’t just some statistical effect, due to the concentration of elderly people with chronic diseases?)
IMF climate change challenge “How might we integrate climate change into economic analysis to promote green policies?
To help answer this question, the IMF is organizing an innovation challenge on the economic and financial stability aspects of climate change.” https://lnkd.in/dCbZX-B
Could we have catastrophic risk insurance? Mati Roy once suggested, in this shortform, that we could have “nuclear war insurance”—a mutual guarantee to cover losses due to nukes, to deter nations from a first strike. I dismissed the idea because, in that case, it would not be an effective deterrent (if you have power and reasons enough to nuke someone, insurance costs won’t be among your relevant concerns).

However, I wonder if this could be extrapolated to other C-risks, such as climate change—something insurance and financial markets are already trying to price. This applies particularly to C-risks that are not equally distributed (e.g., climate change will probably be worse for poor tropical countries) and that are subject to great uncertainty. Of course, I don’t expect countries would willingly cover losses in case of something akin to societal collapse; but, given the level of uncertainty, this could still foster more cooperation, as it would internalize and dilute future costs across all participant countries.

On the other hand, of course, any form of insurance implies moral hazard, etc. But even this has a bright side, as it would provide a legitimate case for having some kind of governance / supervision / enforcement on the subject. I guess I might be asking: why don’t we have a “climate Bretton Woods”? (You could apply the argument for FHI’s Windfall Clause here—it’s just that they’re concerned with benefits and companies, while I’m worried about risks and countries.) Even if that’s not workable for climate change, would it work for other risks, e.g., epidemics? (I think I should have researched this better… I guess either I am underestimating moral hazards and the problem of making countries cooperate, or there’s a huge flaw in my reasoning here.)
I no longer endorse this comment because, since then, I found out that there’s a lot of research on internalising climate change externalities—and that Weitzman (2012) and others present mitigation as akin to insurance. I still wonder how much of this line of reasoning could extrapolate to other GCR.
It turns out that I changed my mind again. I don’t see why we couldn’t establish Pigouvian taxes for (some?) c-risks. For instance, taxing nuclear weapons (or their inputs, such as nuclear fuel) according to some tentative guesstimate of the “social cost of nukes” would provide funding for peace efforts and might even be in the best interest of (most?) current nuclear powers, as it would help slow down nuclear proliferation. This is similar to Barratt et al.’s paper on making gain-of-function researchers buy insurance.
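To make the “social cost of nukes” idea concrete, here is a minimal Fermi-estimate sketch of how such a Pigouvian tax might be derived. Every number below is an illustrative assumption I made up for the example, not a researched figure:

```python
# Hypothetical Fermi estimate of a per-warhead Pigouvian tax.
# All parameter values are illustrative assumptions, not researched figures.

annual_prob_nuclear_war = 0.001   # assumed yearly probability of a nuclear exchange
expected_damages_usd = 1e14       # assumed global damages if it happens ($100 trillion)
global_warhead_count = 12_500     # rough order-of-magnitude world stockpile

# Expected annual social cost, attributed uniformly across warheads
expected_annual_cost = annual_prob_nuclear_war * expected_damages_usd
tax_per_warhead_per_year = expected_annual_cost / global_warhead_count

print(f"Expected annual social cost: ${expected_annual_cost:,.0f}")
print(f"Implied tax per warhead-year: ${tax_per_warhead_per_year:,.0f}")
```

With these made-up inputs the tax comes out in the millions of dollars per warhead-year; the point is only that the arithmetic is trivial once you commit to guesstimates of the probability and damages—the hard part is defending those guesstimates.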
Is there anything like a public repository / document listing articles and discussions on social discount rates (similar to what we have for IIDM)? (I mean, I have downloaded a lot of papers on this—Stern, Nordhaus, Greaves, Weitzman, Posner, etc.—and there are many lit reviews, but I wonder if someone is already approaching it in a more organized way.)
Why aren’t social discount rates an object of political debate? I mean, this subject is no more complex than other themes in legislation and policy.
Future of Life Institute is looking for translators!
(Forwarded from FLI’s Newsletter)
The outreach team is now recruiting Spanish and Portuguese speakers for translation work!
The goal is to make our social media content accessible to our rapidly growing audience in Central America, South America, and Mexico. The translator would be sent between one and five posts a week for translation. In general, these snippets of text would only be as long as a single tweet.
We prefer a commitment of two hours per week but do not expect the work to exceed one hour per week. The hourly compensation is $15. Depending on outcomes for this project, the role may be short-term.
https://lnkd.in/d5YqX-h
For more details and to apply, please fill out this form. We are also registering other languages for future opportunities so those with fluency in other languages may fill out this form as well.
Not super-effective, but given Sanjay’s post on ESG, maybe there are people interested:
Ethics and Trust in Finance 8th Global Prize
The Prize is a project of the Observatoire de la Finance (Geneva), a non-profit foundation, working since 1996 on the relationship between the ethos of financial activities and its impact on society. The Observatoire aims to raise awareness of the need to pursue the common good through reconciling the good of persons, organizations, and community.
[...]
The 8th edition (2020-2021) of the Prize was officially launched on 2 June 2020. The deadline for submissions is 31 May 2021. The Prize is open to people under the age of 35 working in or studying finance. Register here for entry into the competition. All essays submitted to the Prize are assessed by the Jury, comprising academics and professional experts.
How to register • Ethics & Trust in Finance
ethicsinfinance.org
Fill the form expressing your interest and we will send you the rules of the competition.
Is there some tension between population ethics + hedonic utilitarianism and the premises people in wild animal suffering use (e.g., negative utilitarianism, or the negative welfare expectancy of wild animals) to argue against rewilding (and in favor of environmental destruction)?
If wild animals have bad lives on net, then indiscriminately increasing wild animal populations is bad under any plausible theory of population ethics.
Obviously. But then, first, Effective Environmentalists are doing great harm, right? We should be arguing more about that. On the other hand, if your basic welfare theory is hedonistic (at least for animals), then one good long life can compensate for thousands of short miserable ones—because what matters is qualia, not individuals. And though I don’t deny animals suffer all the time, I guess their “default welfare setting” must be positive if their reward system (at least for vertebrates) is to function properly. So I guess it’s more likely that we have some sort of instance of the “repugnant conclusion” here. Of course, this doesn’t imply we shouldn’t intervene in wild environments to reduce suffering or increase happiness. What is at stake is whether U(destroying habitats) > U(restoring habitats).
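The claim that one good long life can outweigh many short miserable ones under totalist hedonic aggregation can be sketched in a few lines. The welfare levels, lifespans, and population sizes below are made up purely for illustration:

```python
# Toy illustration of totalist hedonic aggregation:
# only summed welfare matters, not which individuals carry it.

def total_welfare(welfare_per_year: float, years: float, population: int) -> float:
    """Sum of hedonic welfare across all individuals in the group."""
    return welfare_per_year * years * population

# One long, good life (made-up numbers)...
long_good_life = total_welfare(welfare_per_year=10.0, years=40.0, population=1)

# ...versus thousands of short, mildly negative lives (also made up).
many_short_lives = total_welfare(welfare_per_year=-0.1, years=0.5, population=5000)

# Under this aggregation the net sum decides the comparison.
print(long_good_life + many_short_lives)
```

With these particular numbers the one good life dominates, but flip the magnitudes and the thousands of bad lives win—which is exactly why the comparison of U(destroying habitats) vs. U(restoring habitats) turns entirely on empirical welfare estimates.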
This is something interesting that I’ve been thinking about too, as someone who identifies as an environmentalist and who cares about animals. I would say most mainstream environmentalists promote rewilding but it’s not that common with Effective Environmentalism from what I’ve seen so far. You might say it gets lumped in with afforestation but that isn’t exactly rewilding nor that popular within EE anyway. Certainly the issue of more wild animal suffering is one I’ve raised when talking to less-EA aligned folks about rewilding and that’s not gone down well but I haven’t seen it discussed much in EE spaces.
Good point, thanks. However, even if EE and Wild animals welfare advocates do not conflict in their intermediary goals, their ultimate goals do collide, right? For the former, habitat destruction is an evil, and habitat restoration is good—even if it’s not immediately effective.
Does your feeling that the default state is positive also apply to farm animals? Their reward system would have been shaped by artificial selection over the past few generations, but it is not immediately clear to me whether you think that would make a difference.
First, it’s not a feeling, it’s a hypothesis. Please do not mistake one for the other. It could apply to them if they were not observed to be under stress conditions and captivity, exhibiting behaviors consistent with psychological suffering—like neurotic tics, vocalization, or apathy. (Tbh, I don’t quite see your point here, but I guess you possibly don’t see mine, either.)
IMF Blogpost: Social Repercussions of Pandemics—By Philip Barrett, Sophia Chen, and Nan Li
Or something Peter Turchin would agree with:
‘Good’ news: as expected, as real interest rates fall, so do social discount rates, increasing the social cost of carbon. (Not news, ok, but monetary policy-makers explicitly acknowledging it seems good.)
Bad news: of course, it still seems to be higher than a normative SDR based on time neutrality.
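The link between interest rates, discount rates, and the social cost of carbon runs through the standard Ramsey rule, r = δ + η·g (pure time preference plus the elasticity of marginal utility times consumption growth). A minimal sketch, with calibrations loosely in the spirit of the Stern (near time-neutral) and Nordhaus (market-based) positions mentioned elsewhere in these notes:

```python
# The Ramsey rule for the social discount rate: r = delta + eta * g,
# where delta = rate of pure time preference,
#       eta   = elasticity of marginal utility of consumption,
#       g     = growth rate of per-capita consumption.
# Parameter values are illustrative calibrations, not endorsements.

def ramsey_sdr(delta: float, eta: float, g: float) -> float:
    return delta + eta * g

stern_like = ramsey_sdr(delta=0.001, eta=1.0, g=0.013)    # near time-neutral
nordhaus_like = ramsey_sdr(delta=0.015, eta=2.0, g=0.02)  # market-calibrated

# Lower growth expectations (or lower real interest rates) pull the SDR down,
# raising the present value of future climate damages.
print(stern_like, nordhaus_like)
```

The gap between the two calibrations (roughly 1.4% vs. 5.5% here) is exactly the gap between a near-zero normative SDR and the higher rates implied by market interest rates that the post laments.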
Legal personality & AI systems
From the first draft of the UNESCO Recommendation on AI Ethics:
I see the point of the last sentence is to prevent individuals and companies from escaping liability for AI failures. However, the last bit also seems to prevent us from creating some sort of “AI DAO”—i.e., from creating a legal entity totally implemented by an autonomous system. This doesn’t seem reasonable; after all, what is a company if not some sort of artificial agent?
Why didn’t we have more previous alarm concerning the spread of Covid through care and nursing homes? Would it have made any difference? https://www.theguardian.com/world/2020/may/16/across-the-world-figures-reveal-horrific-covid-19-toll-of-care-home-deaths
Does anyone know of, or have a serious opinion / analysis on, the European campaign to tax meat? I read some news in Le Monde, but nothing of EA-level seriousness. I mean, it seems a pretty good idea, but I saw no data on possible impact, probability of adoption, possible ways to contribute, or even possible side effects.
(Not the best comparison, but worth noting: in Brazil a surge in meat prices caused an inflation peak in December and eroded the government’s support—yeah, people can tolerate politicians meddling with criminals and fascism, as long as they can have barbecue.)
I was reading this recommended book and wondering how much of the recent change in our world is due to demographic transitions—i.e., boomers. We know the shape of the population pyramid affects unemployment rates and wealth concentration (moreover, think about how income predicts life expectancy, at least in very unequal countries—so one can expect a higher proportion of wealthier individuals in old age), and maybe even rising health costs and voting patterns—e.g., I just confirmed that, in Brazil, opinions about the government among young and old people are symmetrically opposite.
Idk what to infer from this. It seems to me there’s an elephant in the room: I read a lot about economics, philosophy, and politics, and I’ve seen almost no mention of it except in discussions of one of those topics alone—never something concerning all of them. But I do think this should interest EAs, because much of our economic and political theory fails to account for an aging population—something quite remarkable in human history. So, I’d appreciate any tips on reading that takes demography seriously (except for Peter Turchin, whom I already follow).
Time to cancel my Asterisk subscription?
So Asterisk dedicates a whole self-aggrandizing issue to California, leaves EV for Obelus (what is Obelus?), starts charging readers, and, worst of all, celebrates low prices for eggs and milk?
Obelus seems to be the organizational name under which Asterisk is registered—both the asterisk and the obelus are punctuation symbols so I highly doubt that Obelus exists separately from Asterisk.
Charging readers is probably an attempt to be financially independent of EV, which is a worthy goal for all EA organizations and especially media organizations that may have good cause to criticize EV at some point.
The eggs and milk quip is just a quip about their new prices; I don’t understand what’s offensive about it.
The California issue is weird to me too.
[Conflict note: writing an article for Asterisk now]
The eggs and milk quip might be offensive on animal welfare grounds. Eggs, at least, are one of the worst commonly consumed animal products according to various ameliatarian Fermi estimates.
I see, fair enough.
If you previously liked the magazine these seem like relatively weak reasons to cancel it.
FWIW EV has been off-boarding its projects, so it isn’t surprising that Asterisk is now nested under something else. I don’t know anything about Obelus Inc.
You should cancel if you think it’s not worth the money. The other reasons seem worse.
Like Karthik, I don’t really understand what is so terrible about this, but I agree that the California edition is at least strange. It’s interesting how many of the ideas central to EA originate from California. While exploring the origin stories of these ideas is intriguing, I would be much more interested in an issue that explores ideas from far outside that comfort zone and see what they can teach us.
However, I’m not an editor and don’t think I’d make a good one either 😅
FWD: Invitation to the Future Generations Initiative Launch Event
We are delighted to extend an official invitation to the launch of the Future Generations Initiative, which will take place on February 21, from 16.00 to 18.00 at Atelier 29 - Rue Jacques de Lalaing 29, 1000 Brussels, and will also be streamed online.
To confirm your attendance, kindly fill out this short registration form by Monday, February 19, at 18.00 CET.
There is an urgent need for the EU to embed the rights of Future Generations in its decision-making processes. However, a model for such representation is currently lacking.
A diverse group of NGOs is working together to convince decision makers that the time to act for Future Generations is now. On February 21, we will launch our coalition to promote this important issue as we approach the EU elections and the next political cycle begins.
You can find the event agenda by following this link.
By completing the registration form, you have the option to attend either in person or virtually.
The event will feature the presentation of the Future Generations proposal and policy demands, and a reception will follow.
Please do not hesitate to reach out to marco@thegoodlobby.eu should you have any questions.
Well, if you feel bad about SBF & EA, or about GW losing $900 million to frauds [edit: oops, as Tobias remarked, “GiveWell didn’t lose $900 million to fraud. GiveDirectly lost $900,000 to fraud.”], think about how the Red Cross lost $500 million in Haiti: https://www.propublica.org/article/how-the-red-cross-raised-half-a-billion-dollars-for-haiti-and-built-6-homes
GiveWell didn’t lose $900 million to fraud. GiveDirectly lost $900,000 to fraud.
OMG, thanks for this. My bad. I edited the original to reflect this.