Brazilian legal philosopher and financial supervisor
You’re welcome. Please write a post (even if a shortform) about it someday.

Something that attracts me in this literature (particularly in Scheffler) is how it picks out intuitions that often collide with the premises or conclusions of reasoning based on something like the rational agent model (i.e., vNM decision theory). I think that, even for a philosophical theorist, it could be useful to know how prevalent these intuitions are, and what possible (social or psychological) explanations could be offered for them. (I admit that, just as one philosopher’s modus ponens is another’s modus tollens, someone’s intuition might be someone else’s cognitive bias.)

For instance, Scheffler mentions that we (at least he and I) have a “primitive” preference for humanity’s existence (I think by “humanity” he usually means rational agents similar to us—being driven extinct by Trisolarans would be bad, but not as bad as the end of all conscious rational agents). We usually prefer that humanity exist for a long time rather than a short period, even if both timelines contain the same amount of utility—which seems to imply some sort of negative discount rate on the future, violating the usual “pure time preference” reasoning. Besides, we prefer world histories where there’s a causal connection between generations and individuals over possible worlds with the same amount of utility (and the same length in time) where communities spring up and go extinct without any relation between them. I admit this sounds weird, but I think it might explain my malaise toward discussions of infinite ethics.
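To make the discount-rate point concrete, here is a toy sketch with invented numbers (mine, not Scheffler’s): two world histories with the same total utility, one short and front-loaded, one long and spread out. A positive pure time preference favors the short history; the “prefer a long future” intuition corresponds to a negative rate.

```python
def discounted_value(utilities, r):
    """Present value of a utility stream under pure time preference rate r."""
    return sum(u / (1 + r) ** t for t, u in enumerate(utilities))

short_history = [50, 50]   # total utility 100, packed into 2 periods
long_history = [10] * 10   # total utility 100, spread over 10 periods

# A positive rate favors the short, front-loaded history...
print(discounted_value(short_history, 0.05) > discounted_value(long_history, 0.05))
# ...while preferring the long history (other things equal) implies a negative rate.
print(discounted_value(short_history, -0.05) < discounted_value(long_history, -0.05))
```

Both comparisons print `True`: at r = 0.05 the short history is worth about 97.6 vs. 81.1 for the long one, and at r = −0.05 the ordering reverses.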
I was reading about Meghan Sullivan’s “principle of non-arbitrariness,” and it reminded me of Parfit’s argument against subjectivist reasoning in On What Matters… but why are philosophers (well, and people in general) against arbitrariness? I mean, I do agree it’s a tempting intuition, but I’ve never seen (a) a formal statement of what counts as arbitrary (is “arbitrary” arbitrary?), or (b) an a priori argument against it. Of course, if someone’s preference ordering varies totally at random, we can’t represent them with a utility function, and perhaps we could accuse them of being inconsistent. But that’s not what philosophers’ examples usually chastise: if one has a predictable preference for eating shrimp only on Friday, or disregards pain only on Thursday, there’s no instability here – you can represent it with a utility function (with time as a dimension).
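A minimal sketch of that last point, with made-up utility numbers: once the utility function takes the day as an argument, the “shrimp only on Friday” preference is perfectly stable and representable, not inconsistent.

```python
def utility(meal, day):
    """Hypothetical time-indexed utility: shrimp is great on Friday, worthless otherwise."""
    if meal == "shrimp":
        return 10.0 if day == "Friday" else 0.0
    return 5.0  # baseline utility for any other meal

# The ordering is predictable and transitive; nothing varies at random:
assert utility("shrimp", "Friday") > utility("pasta", "Friday")
assert utility("shrimp", "Monday") < utility("pasta", "Monday")
```

Nothing here violates the vNM axioms; the agent just has a preference over (meal, day) pairs rather than meals alone.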
There isn’t even any a priori feature allowing us to say that this is evolutionarily unstable, since that could only be assessed when we look at whom our agent will interact with. Which makes me think that arbitrariness is not a priori at all, of course – it depends on social practices such as “giving reasons” for actions and decisions (I don’t think Parfit would deny that; I don’t know about Sullivan). There might be a thriving community of people who only love shrimp on Friday, for no reason at all; but, if you don’t share this abnormal preference, it might be hard to model their behavior and to cooperate with them—at least, in this example, when it comes to gastronomic enterprises. On the other hand, if you can just offer a story (even a barely believable one: “it’s a psychosomatic allergy”) to explain this preference, it’s OK: you’re just another peculiar human. I can understand you now; your explanation works as a salient cue that allows me to better predict your behavior.
I suspect many philosophical (a priori-like) intuitions depend more on things like Schelling points (i.e., the problem of finding salient solutions that people can converge on in social practices) than most philosophers would admit. Of course, late-Wittgenstein scholars are OK with that, since for them everything is about forms of life, language games, etc. But I think relativistic / conventionalist philosophers unduly trivialize this feature, and so neglect an important point: whatever counts as arbitrary is not, well, arbitrary – and we can often demonstrate that what we call “arbitrary” is suboptimal, inconsistent with other preferences or intuitions, or hard to communicate (and so a poor candidate for a social norm / convention / intuition).
IMO, the best thing I’ve seen lately, for technical & non-tech people, would be The Alignment Problem, by Brian Christian (a.k.a. the “most human human”)
IMF climate change challenge
“How might we integrate climate change into economic analysis to promote green policies?
To help answer this question, the IMF is organizing an innovation challenge on the economic and financial stability aspects of climate change.”
Congrats! I’ll gladly listen to your interview.
I guess you already have a bunch of questions prepared… I have a particular interest in hearing Sachs talk about how a warmer climate might impact economic development. I think he could summarize his own view, then the conflicting opinions, and draw conclusions about the future impacts of climate change.
I guess Samuel Scheffler’s latest book has a little bit of all of them (I haven’t read it yet). And Korsgaard makes a persuasive Kantian case about the disvalue of human extinction.
Thanks for the post. I’m convinced about the case for extrapolating from UN SDGs to GCRs, and I think stating it explicitly is relevant because attention is a scarce resource: companies and governments often use SDGs as a focal point when they want to signal virtue—public companies might even be required to explicitly state what SDGs they are aiming at in their sustainability reports.
I wonder what other areas have failed to get into the SDGs—e.g., there’s absolutely no concern for animal welfare, as the goals and targets are explicitly worded in conservationist terms. Most material I’ve read about this is limited to arguing that animal welfare and the SDGs are compatible—even this call for papers from MDPI (due June 30), which might interest someone doing research in the area.
Could we have catastrophic risk insurance?
Mati Roy once suggested, in this shortform, that we could have “nuclear war insurance”—a mutual guarantee to cover losses due to nukes—to deter nations from a first strike; I dismissed the idea because, in that case, it would not be an effective deterrent (if you have enough power and reasons to nuke someone, insurance costs won’t be among your relevant concerns).
However, I wonder if this could be extrapolated to other C-risks, such as climate change—something insurance and financial markets are already trying to price. Particularly for C-risks that are not equally distributed (e.g., climate change will probably be worse for poor tropical countries) and that are subject to great uncertainty…
I mean, of course I don’t expect countries would willingly cover losses in case of something akin to societal collapse; but, given the level of uncertainty, this could still foster more cooperation, as it would internalize and dilute future costs across all participating countries. On the other hand, of course, any form of insurance implies moral hazard, etc. But even this has a bright side, as it would provide a legitimate case for having some kind of governance / supervision / enforcement on the subject… I guess what I’m really asking is: why don’t we have a “climate Bretton Woods”?
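A toy model of the “internalize and dilute” point, with invented numbers and the (strong, admittedly unrealistic for climate) assumption that losses are independent across countries: pooling leaves each country’s expected cost unchanged but shrinks the variance it bears.

```python
import random

random.seed(0)

def simulate(n_countries, p_loss, loss, n_trials):
    """Std. dev. of one country's cost: bearing its own losses vs. equal-sharing pool."""
    alone, pooled = [], []
    for _ in range(n_trials):
        hits = [loss if random.random() < p_loss else 0.0 for _ in range(n_countries)]
        alone.append(hits[0])                    # country 0 bears its own loss
        pooled.append(sum(hits) / n_countries)   # everyone shares total losses equally
    def sd(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return sd(alone), sd(pooled)

sd_alone, sd_pooled = simulate(n_countries=50, p_loss=0.02, loss=100.0, n_trials=20000)
print(sd_alone > sd_pooled)  # pooling cuts per-country variance by roughly sqrt(n)
```

The catch, as noted above, is that catastrophic risks are often correlated (everyone is hit at once), which is exactly when a pool helps least—hence the appeal of pooling risks that fall unevenly across countries.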
(I guess you could apply the argument for FHI’s Windfall Clause here—it’s just that they’re concerned with benefits and companies, I’m worried about risks and countries)
Even if that’s not workable for climate change, would it work with other risks? E.g., epidemics?
(I think I should have done better research on this… I guess either I am underestimating moral hazard and the problem of making countries cooperate, or there’s a huge flaw in my reasoning here.)
Is there anything like a public repository / document listing articles and discussions on social discount rates (similar to what we have for IIDM)?
(I mean, I have downloaded a lot of papers on this—Stern, Nordhaus, Greaves, Weitzman, Posner, etc.—and there are many lit reviews, but I wonder if someone is already approaching it in a more organized way.)
I was wondering… We have (private) pension funds for children. Could / should we make them more widespread (maybe even mandatory)? Could we have government-sponsored funds? Parents (with the government’s help) would save resources in a fund that could only be used by their offspring when they come of age; plus, unlike the current pension funds I know of, they could use it as collateral, to pay tuition, to open a business, or maybe even transfer it to another pension fund...

For a longtermist, the pros are: it would increase overall savings (does it? or would people just divert resources from other funds?), transfer wealth to new generations (inequality of wealth between generations concerns me almost as much as possible inequalities of political power), and improve intergenerational cooperation… Of course, this can be said of sovereign funds, too, but I see some advantage in having individual accounts (thus sidestepping things like the tragedy of the commons). I’m not very confident, though.
That’s a survey I’d like to see a top longtermist EA aligned psy-scholar perform ;)
Is that what you’re concerned with? I am trying to find out what this blue mist trail on the right is. It looks like Earth has become a comet.
Well, you’re right that intergenerational cooperation lacks straightforward reciprocity… but we do have chains of cooperation that extend across time and often depend on the expectation that future people will sustain them—e.g., think of pension funds and long-term debt, or maybe even just plain cultural transmission.
I think Tarsney is awesome in this episode… but maybe he missed two opportunities here:
i. The Berry Paradox is super cool, but the Paradox of the Question is equally addictive, and can basically be seen as a joke about global priorities research. But yeah, some people say it’s not so paradoxical after all…
ii. One can also look at the temporal asymmetry as a problem affecting intergenerational cooperation: if you don’t consider the interests of your predecessors (equally) important, then you can expect your successors to do the same to you, and you have fewer reasons to invest in the future. Even if you do have something like altruistic preferences toward future people, that preference is irrelevant to them. (Actually, I’m sort of surprised at how rare contractualist-like accounts of intertemporal justice are in the EA literature—except for Sandberg’s piece on Rawls.)
Why aren’t social discount rates an object of political debate? I mean, this subject is no more complex than other themes in legislation and policy.
Your “star systems” point reminds me of another problem that seems totally absent from this whole discussion—namely, agency conflicts and single points of failure. For instance, I was reading about Alcibiades, and I’m pretty sure he was (one of) the most astonishing men of his age, overshadowing his peers: brilliant, creative, ridiculously gorgeous, persuasive, etc. Sorry for the cautionary tale: he caused Athens to go to an unnecessary war, then defected to Sparta, then defected to Persia, then prompted an oligarchic revolution in his homeland in order to return… and people enjoyed the idea because they knew he was awesome and possibly the only hope of a way out. Then he let the oligarchy be replaced by a new democratic regime of his liking, became a superstar general who changed the course of the war, but let his subordinate protégé lose a key battle through overconfidence… and finally just retired in exile to his castle while the city lost the war.

I think one of the major advancements of our culture is that our institutions have become less and less personal. So, while we look for star scientists, rulers, managers, etc. (i.e., a beneficial kind of aristocracy) to leverage our output, we should also address the resilience problems caused by agency conflicts and by concentrating power and resources in a few points of failure.

(I mean, I know difference in performance is a complex factual question per se, without our having to worry about governance; I’m just pointing out that, for many relevant activities where differences in performance matter most, we’re likely to meet these related issues, and they should be taken into account if your organisation is acting on the premise that “differences in performance are huge.”)
The Global Catastrophic Risk Institute is looking for collaborators and advisees!
The Global Catastrophic Risk Institute (GCRI) is currently welcoming inquiries from people who are interested in seeking its advice and/or collaborating with it. These inquiries can concern any aspect of global catastrophic risk, but GCRI is particularly interested in hearing from those interested in its ongoing projects. These projects include AI policy, expert judgment on long-term AI, forecasting global catastrophic risks, and improving China–West relations. Participation can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get involved by contributing to ongoing dialogue, collaborating on research and outreach activities, and co-authoring publications. Inquiries are welcome from people at any career point, including students, from any academic or professional background, and from any place in the world. People from underrepresented groups are especially encouraged to reach out.

Find more details here!
Future of Life Institute is looking for translators!

(Forwarded from FLI’s Newsletter)

The outreach team is now recruiting Spanish and Portuguese speakers for translation work! The goal is to make our social media content accessible to our rapidly growing audience in Central America, South America, and Mexico. The translator would be sent between one and five posts a week for translation. In general, these snippets of text would only be as long as a single tweet.

We prefer a commitment of two hours per week but do not expect the work to exceed one hour per week. The hourly compensation is $15. Depending on outcomes for this project, the role may be short-term.

https://lnkd.in/d5YqX-h

For more details and to apply, please fill out this form. We are also registering other languages for future opportunities, so those with fluency in other languages may fill out this form as well.
Thanks, I didn’t know Algosphere.
Btw, I saw there are two allies in São Paulo. I’d like to get in touch with them, if that’s a possibility ;)
Is there anything like EA Consulting for charities?
I mean, we do have:
(a) meta-charities (e.g., GW, SoGive...) which evaluate projects and organizations;
(b) charity incubators (Charity Entrepreneurship...), which select and incubate ideas for new EA projects;
(c) recommended charities that provide consulting services for policy-makers, such as Innovation in Government Initiative;
(d) some EAs working in consulting firms (EACN), which, among other things, aim to nudge corporations and co-workers toward more effective behavior.

But I didn’t find any org providing consulting services to non-EA charities with the aim of making them more effective. Would it be low-impact? Or is it a low-hanging fruit?
One might think that this is basically the same job GW already does… Well, yeah, I suppose you would use a similar approach to evaluate impact, but it’s very different to provide a charity with recommendations that aim to help it achieve its own goals. This would be framed as assistance, not as some sort of examination; while GW’s stakeholders are donors, this “consulting charity” would work for the charities themselves. Besides, in order to prevent conflicts of interest, corporations often use different firms for auditing (which would be akin to charity evaluation—i.e., a service ultimately concerned with investors) and for consulting (which is provided to the corporation and its managers).

This could be particularly useful for charities in regions that lack an (effective) charity culture.

Update: an example of this idea is the Philanthropy Advisory Fellowship sponsored by EA Harvard—which has, e.g., made recommendations to the Arymax Foundation on the best cause areas in which to invest in Brazil. But I believe an “EA Consulting” org would provide other services, and not only to funders.