Undergraduate in Cognitive Science
Currently writing my thesis on genetic engineering attribution with deep learning under the supervision of Dr Oliver Crook at Oxford University
aaron_mai
However, even if we could show that the repugnance of the repugnant conclusion is influenced in these ways, or even rendered unreliable, I doubt the same would be true for the “very repugnant conclusion”:
for any world A with billions of happy people living wonderful lives, there is a world Z+ containing both a vast number of mildly satisfied lizards and billions of suffering people, such that Z+ is better than A.
(Credit to Joe Carlsmith, who mentioned this on a podcast.)
Thanks for the post!
I’m particularly interested in the third objection you present—that the value of “lives barely worth living” may be underrated.
I wonder to what extent the intuition that world Z is bad compared to A is influenced by framing effects. For instance, “lives that are net positive, but not by much” seems much more valuable to me than “lives barely worth living”, although the two phrases mean the same thing in population ethics (as I understand it).
I’m also sympathetic to the claim that one’s response to world Z may be affected by one’s perception of the goodness of ordinary (human) life. Perhaps Buddhists, who are convinced that ordinary life is pervaded with suffering, view any life that is net positive as remarkably good.
Do you know if there is any psychological literature on either of these two hypotheses? I’d be interested to research both.
I agree that it seems like a good idea to get somewhat familiar with that literature if we want to translate “longtermism” well.
I think I wouldn’t use “Langzeitethik”, as this suggests, as you say, that longtermism is a field of research. In my mind, “longtermism” typically refers to a set of ethical views or a group of people/institutions. People probably sometimes use the term to refer to a research field, but my impression is that this is rather rare. Is that correct? :)
Also, I think that a new term like “Befürworter der Langzeitverantwortung”, which is significantly longer than the established term, is unlikely to stick around, either in conversation or in writing. “Longtermists” is faster to say and, at least in the beginning, easier to understand among EAs, so I think people will prefer it. This might matter for the translation: it could be confusing if the term used in the new German EA literature is quite different from the one actually used by people in the German community.
Thanks :)
Out of curiosity: how do you adjust for karma inflation?
This seems a bit inaccurate to me in a few ways, but I’m unsure how accurate we want to be here.
First, when the entry talks about “consequentialism”, it seems to identify it with a decision procedure: “Consequentialists are supposed to estimate all of the effects of their actions, and then add them up appropriately”. In the literature, a distinction is usually made between consequentialism as a criterion of rightness and consequentialism as a decision procedure, and it seems to me that many endorse the former but not the latter.
Secondly, the entry seems to identify consequentialism with act-consequentialism, because it refers only to the consequences of individual actions as the criterion of evaluation.
Red team: is it actually rational to have imprecise credences, rather than precise ones, in the possible long-run/indirect effects of our actions?
Why: my understanding from Greaves (2016) and Mogensen (2020) is that imprecise credences have been necessary to argue for the cluelessness worry.
Thanks! :) And great to hear that you are working on a documentary film for EA, excited to see that!
Re: EA-aligned Movies and Documentaries
I happen to know a well-established documentary filmmaker whose areas of interest overlap with EA topics. I want to pitch him on working on a movie about x-risks. Do you have any further info about the kinds of documentaries you’d like to fund? Anything that’s not obvious from the website.
Hey! I wonder how flexible the starting date is. My semester ends mid-July, so I couldn’t start before. This is probably the case for most students from Germany. Is that too late?
Thanks for the post!
Does this apply at all to undergrads or graduate students who haven’t published any research yet?
Hey Pablo,
Thanks a lot for the answer, I appreciate you taking the time! I think I now have a much better idea of how these calculations work (and I’m much more skeptical, to be honest, because there are so many effects not captured in the expected-value calculations that might make a big difference).
Also, thanks for the link to Holden’s post!
Hi Johannes!
I appreciate you taking the time.
“Linch’s comment on FP funding is roughly right, for FP it is more that a lot of FP members do not have liquidity yet”
I see, my mistake! But is my estimate sufficiently off to overturn my conclusion?
“There were also lots of other external experts consulted.”
Great! Do you agree that it would be useful to make this public?
“There isn’t, as of now, an agreed-to-methodology on how to evaluate advocacy charities, you can’t hire an expert for this.”
And is the same true for evaluating cost-effectiveness analyses of advocacy charities (e.g. yours on CATF)?
“So the fact that you can be much more cost-effective when you are risk-neutral and leverage several impact multipliers (advocacy, policy change, technological change, increased diffusion) is hard to explain and not intuitively plausible.”
Sure, that’s what I would argue as well. That’s why it’s important to counter this skepticism by signalling very strongly that your research is trustworthy (e.g. by publishing expert reviews).
“The way I did my reviewing was to check the major assumptions and calculations and see if those made sense. But where a report, say, took information from academic studies, I wouldn’t necessarily delve into those or see if they had been interpreted correctly. “
>> Thanks for clarifying! I wonder if it would be even better if the review were done by people outside the EA community. Maybe the sympathy of belonging to the same social group, and shared distinctive assumptions (assuming they exist), make people less likely to spot errors? This is pretty speculative, but it wouldn’t surprise me.
“Re making things public, that’s a bit trickier than it sounds. Usually I’d leave a bunch of comments in a google doc as I went, which wouldn’t be that easy for a reader to follow. You could ask someone to write a prose evaluation—basically like an academic journal review report—but that’s quite a lot more effort and not something I’ve been asked to do.”
>> I see, interesting! This might be a silly idea, but what do you think about setting up a competition with a cash prize of a few thousand dollars for the person who spots an important mistake? If you manage to attract the attention of a lot of PhD students in the relevant area, you might really get a lot of competent people trying hard to find your mistakes.
“it’s like you’re sending the message “you shouldn’t take our word for it, but there’s this academic who we’ve chosen and paid to evaluate us—take their word for it”.”
>> Maybe that would be weird for some people. I would be surprised, though, if the majority of people wouldn’t interpret a positive expert review as a signal that your research is trustworthy (even if it’s not actually a signal, because you chose and paid that expert).
Hi Michael!
“You only mention Founders Pledge, which, to me, implies you think Founders Pledge don’t get external reviews but other EA orgs do.”
> No, I don’t think this, but I should have made it clearer. I focused on FP because I happened to know that they didn’t have an external expert review of one of their main climate-charity recommendations, CATF, and because I couldn’t find any report on their website about an external expert review.
I think my argument here holds for any other similar organisation.
“This doesn’t seem right, because Founders Pledge do ask others for reviews: they’ve asked me/my team at HLI to review several of their reports (StrongMinds, Actions for Happiness, psychedelics) which we’ve been happy to do, although we didn’t necessarily get into the weeds.”
> Cool, I’m glad they are doing it! But if you say “we didn’t necessarily get into the weeds”, does it count as an independent, in-depth expert review? If yes, great, then I think it would be good to make that public. If no, the conclusion in my question/post still holds, doesn’t it?
I’m not sure, but according to Wikipedia, a total of ~3 billion dollars has been pledged via Founders Pledge. Even if that doesn’t increase and only 5% of that money is donated according to their recommendations, we are still in the ballpark of around 150 million USD, right?
On the last question I can only guess as well. So far, around 500 million USD has been donated via Founders Pledge. Founders Pledge has existed for around 6 years, so that's an average of around 85 million USD per year since it started. It seems likely to me that at least 5% has been allocated according to their recommendations, which makes an average of ~4 million USD per year. The true value is of course much higher, because other people who haven’t taken the pledge follow their recommendations as well.
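Roughly, that back-of-envelope calculation looks like this (a minimal sketch; the 500 million USD total, the 6-year window, and the 5% share are just the rough assumptions above, not official Founders Pledge figures):

```python
# Back-of-envelope sketch; all inputs are rough assumptions from the comment above.
total_donated_usd = 500e6    # ~500 million USD donated via Founders Pledge so far
years_active = 6             # Founders Pledge has existed for roughly 6 years
share_following_recs = 0.05  # assume at least 5% is allocated per their recommendations

avg_per_year = total_donated_usd / years_active       # ~83 million USD per year
recs_per_year = avg_per_year * share_following_recs   # ~4.2 million USD per year

print(f"Average donated per year: ~{avg_per_year / 1e6:.0f} million USD")
print(f"Allocated per recommendations per year: ~{recs_per_year / 1e6:.1f} million USD")
```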
I actually think more is needed.
If “it’s a mistake not to do X” means “it’s in alignment with the person’s goals to do X”, then I think there are a few ways in which the claim could be false.
I see two cases where you want to maximize your contribution to the common good, but it would still be a mistake (in the above sense) to pursue EA:
1. You are already close to optimal effectiveness, and the increase in effectiveness from some additional research into EA is so small that you would be maximizing by just using that time to earn money and donate it, or to have a direct impact.
2. Pursuing EA causes you not to achieve another goal, or a set of goals, that you value at least as much in total.
If that’s true, then we need to reduce the scope of the conclusion VERY much. I estimate that the fraction of people caring about the common good for whom Ben’s claim holds is in [1/100,000, 1/10,000]. So in the end the claim can be made for hardly anyone, right?
Super interesting, thanks!
Cool idea to run this survey and I agree with many of your points on the dangers of faulty deference.
A few thoughts:
(Edit: I think my characterisation below of what deference means in formal epistemology is wrong. After a few minutes of checking, I think what I described is just one somewhat common way of modelling how we ought to respond to experts.)
The use of the concept of deference within the EA community is unclear to me. When I encountered the concept in formal epistemology, I remember “deference to someone on claim X” literally meaning (a) that you adopt that person’s probability judgement on X. Within EA, and in your post (?), the concept often doesn’t seem to be used in this way. Instead, I guess people think of deference as something like (b) “updating in the direction of a person’s probability judgement on X”, or (c) “taking that person’s probability estimate as significant evidence for (against) X if that person leans towards X (not-X)”?
I think (a), (b), and (c) are importantly different. For instance, adopting someone’s credence doesn’t always mean that you are taking their opinion as evidence for the claim in question, even if they lean towards it being true: you might adopt someone’s high credence in X and thereby lower your own credence (because yours was even higher before); in that case, you update as though their high credence were evidence against X. You might also update in the direction of someone’s credence without taking on their credence. Lastly, you might lower your credence in X by updating in someone’s direction even if they lean towards X (for example, moving from 0.95 down to 0.85 after learning that their credence is 0.8).
Bottom line: these three concepts don’t refer to the same “epistemic process”, so I think it’s good to make clear what we mean by deference.
Here is how I would draw the conceptual distinctions:
(I) Deference to someone’s credence in X = you adopt their probability in X.
(II) Positively updating on someone’s view = increasing your confidence in X upon hearing their probability in X.
(III) Negatively updating on someone’s view = decreasing your confidence in X upon hearing their probability in X.
I hope this comment was clear; please ask for clarification if anything was unclearly expressed :)