Open Thread #43
Use this thread to post things that are awesome, but not awesome enough to be full posts. Consider giving your post a brief title to improve readability.
I’ve been toying around with the following:
There are two motivations for donating money – egoistic (e.g. it feels good to do) & altruistic (e.g. other people are better off)
The egoistic motivation is highly scope insensitive – giving away $500 feels roughly as good as giving away $50,000
Probably also scope insensitive qualitatively – giving $5,000 to a low-impact charity feels about as good as giving $5,000 to an effective charity (especially if you don’t reflect very much about impact)
This scope insensitivity is baked in – knowing about it doesn’t make it go away
EA orgs sometimes say that giving effectively will make you happier (e.g. 80k, e.g. GWWC)
These arguments ignore the scope insensitivity of the egoistic motivation – donating some money to charity will probably make you happier than not donating any at all. It’s less clear that donating more money will make you happier than donating some (and it’s especially unclear that the link between donations and happiness scales anything close to linearly)
Ergo, EA should stop recommending effective giving on egoistic grounds, and probably even encourage people not to pursue effective giving if they’re considering it because they want to be happier (related)
If the above is true, effective giving won’t make you much happier than low-impact giving, and donating large amounts won’t make you much happier than donating small amounts
e.g. $100 to GiveDirectly feels about as good as $1,000 to GiveDirectly; e.g. saving one (statistical) life via AMF feels about as good as saving two (statistical) lives via AMF
Advocating for effective giving on egoistic grounds (e.g. “it will make you happier”) is sort of a false promise
My impression, FWIW, is that the ‘giving makes you happier’ point wasn’t/isn’t advanced to claim that the optimal portfolio for one’s personal happiness would include (e.g.) 10% charitable donations to effective causes, but rather that giving isn’t as big a ‘hit’ to one’s personal fulfilment as it appears at first glance. This is usually advanced in conjunction with the evidence on diminishing returns to money (i.e. even if you just lost, say, 10% of your income, that isn’t a huge loss to your welfare if you’re a middle-class person in a rich country; and given the evidence on the wellbeing benefits of giving, the impact is likely to be reduced further).
E.g. (and with apologies to the reader for inflicting my juvenilia upon them):
There are diminishing returns to money buying happiness, but it looks like they set in after pretty high incomes (starting at $95,000, and even higher if you live in a wealthy area).
So donating more on the margin when your total income is less than $95,000 USD seems to trade off directly against your happiness.
One can probably realize a lot of the egoistic benefit of donating by giving small amounts, e.g. $30/month to GiveDirectly.
Update: I published an expanded version of this as a standalone post. It includes arguments from survey data, so it’s not entirely armchair philosophizing.
I would love to see a canonical post making this argument; conflating EA with the benefits of maxing out personal warm fuzzies is one of my pet peeves.
Usually I would agree with you, but I think that within the EA community people have a strong egoistic motivation to make “effective” donations. Your reputation is tied to giving effectively.
Huh, I feel like reputation within EA is mediated more by things like how insightful one seems in forums & how active one is in organizing community events.
I don’t know how much most EAs I know give. (I basically only know about people who published donation reports.)
Also, there’s an effect where people doing a lot of work on the ground tend to accrue less reputation than people who are very active in community building, just by the nature of their work (e.g. compare New Incentives to 80k).
I definitely expect that there are people who will lose out on happiness from donating.
Making it a bit more complicated, though, and moving out of the area where it’s easy to do research, there are probably happiness benefits from things like ‘being in a community’ and ‘living with purpose’. Giving 10% per year and adopting the role of ‘earning to give’, for example, might enable you to associate life-saving with every hour you spend on your job, which could be pretty positive (I think that feeling your job is meaningful is associated with happiness). My intuition is that the difference between 10% and 1% could matter for being able to adopt this identity, but I might be wrong. And a lot of the gains from high incomes probably come from increased status, which donating money is one way to get.
I’d be surprised if donating lots of money was the optimal thing to do if you wanted to maximise your own happiness. But I don’t think there’s a clear case that it’s worse than the average person’s spending.
Makes sense, though I think you can realize most of the “being in a community” benefit without making large donations.
I’ll consider making this rigorous enough to be a standalone post if there’s sufficient interest.
Why don’t we have a kind of “Effective Thesis Prize”?
I know there have been prizes on particular subjects, such as the Philosophical Quarterly essay prize in 2016 (who won it?). But has anyone tried an open general prize, accepting applicants from any area, anywhere? Would it be too expensive? (I don’t think so: the prize money could be small, since PhD candidates don’t need additional incentives to improve their theses.)
Would it be hard to organize? (Maybe a little bit, but there would be time...)
Pros: it’d be a simple way to propagate EA ideas and the Effective Thesis tool. It’d be useful for eliciting information, and maybe for finding significant new contributions...
Actually, I think there are many more pros, and I’m considering trying something like this in Brazil (where the EA movement is just beginning). So I’d really appreciate some tips about possible cons.
Risks of an Effective Thesis Prize
Poor quality:
- The entries (and winning dissertation) are low quality and make EA look bad.
- The judge(s) don’t have sufficient expertise to evaluate the submissions properly.
- The person who organizes the competition doesn’t have the skills to run it properly (e.g. the prize is awarded very late or not at all).
Waste of resources compared to counterfactual:
- Thousands of dissertations could be entered. Even if you spent 5 minutes evaluating each, that’s 100+ hours of your time. Depending on what your time is worth, if you spent the time working instead of evaluating dissertations, you could potentially save a toddler’s life from malaria.
- It’s unlikely that announcing a prize now will impact current PhD students, who probably chose their topic years ago (although it could potentially influence new students).
- Are we sure that prizes influence dissertations? Is there any evidence base for this?
Thanks for your comment and your insights; that’s precisely what I wanted. I do agree we lack evidence about the effectiveness of prizes as a way to incentivize studies on specific subjects – but that’s not insurmountable (if I don’t find a paper about it, I’ll try comparing keyword frequencies in academic databases before and after the establishment of a similar prize to check if there’s a correlation). Also, I hadn’t considered that a prize could pose a reputational risk – even if it’s unlikely, it might be necessary to hedge against it; one possibility is to grant the committee the right to abstain from declaring a winner if no one is found worthy.
Concerning the other obstacles mentioned, I don’t think they would hinder the main goals of such a prize – promoting the ideas of EA and incentivizing the study of EA causes. Assuming these causes are worthy of my time (even more so than saving a toddler’s life), it wouldn’t be a waste of time.
As risk mitigation, we could start by requiring applicants to provide a brief essay summarizing their research and making the case for it; then a crowd of blind reviewers would use that essay (and other “cheap signals”, such as abstracts, conclusions, etc.) to narrow the candidates down to a small set, to be scrutinized by a committee of prominent scholars; maybe we could put the theses online and ask everyone for feedback (people could vote on the best thesis in this first phase). Also, we could mitigate reputational risks and problems of scale if, instead of a global institution such as CEA, we had such a competition in a more restricted environment – maybe some institution in a small country, such as the Czech Republic (well, they created the Effective Thesis project, right?). That way, if something went wrong, it wouldn’t hit the whole EA movement.
Actually, Forethought launched this one week ago: https://www.forethought.org/undergraduate-thesis-prize
Why don’t we have an entry on EA (and on x-risk) in the Stanford Encyclopedia of Philosophy (or the IEP, for that matter)?
The best we have is a passing mention of Singer’s The Most Good You Can Do inside the “Altruism” entry. Really, it’s just: “If friendships and other loving relationships have a proper place in our lives even if they do not maximize the good, then sentiment is an appropriate basis for altruism. (For an opposing view, see Singer 2015.)” <https://plato.stanford.edu/entries/altruism/>
I mean, there’s even a whole new article on the Philosophy of Chemistry! It might not seem important, but the SEP is one of the major preliminary sources for research in philosophy. It’d be an effective way of “spreading the word” among philosophers outside the Oxford & Berkeley communities.
(Funny thought: many EA philosophers are cited as major sources in SEP articles about ethics, decision theory, AI… but NOT about EA.)
I see that SEP’s editorial board has the responsibility of deciding what gets published and who writes it – but there must be something we can do about it (maybe coordinated mass requests for an EA article? Or someone sending a complete article, per item 3 of the Editorial Policies: “Qualified potential contributors may send to the Principal Editor or an appropriate member of the Editorial Board a preliminary proposal to write on an Encyclopedia topic, along with a curriculum vitae.” <https://plato.stanford.edu/info.html>)
I guess all I’ve said so far applies to the Internet Encyclopedia of Philosophy, too.
(P.S.: If I’m getting boring with my frequent posts, please let me know)
Great idea! I don’t think mass requests are the way to go, though. I’ll bet if someone like Peter Singer, Will MacAskill, or Toby Ord sent them a proposal to write an article about EA, they’d accept. I sent Will a Facebook message to ask him what he thinks.
Make a thread and we can try to mass-email them.
Sorry, I’m new here. You mean making a thread in this forum? I still don’t know how.
I came across a quote from biostatistician Andrew Vickers that I really like:
I can think of a number of caveats. For example, if you’re an amateur trying to conduct a statistical analysis of some phenomenon where no statistical analysis currently exists, maybe you should not let the perfect be the enemy of the good by suppressing your analysis altogether, if it means people will continue thinking about the phenomenon in an intuitive and non-data-driven way.
But I do think it’d be valuable for the EA community to connect with or create more EAs with a deep understanding of statistics. Improving my own statistical skills has been a big project of mine recently, and the knowledge I’ve gained feels very generally useful.
Curious what you’ve been doing to tool up on statistics.
Here are some resources I’ve been consuming:
https://micromasters.mit.edu/ds/
https://statlearning.class.stanford.edu/
https://smile.amazon.com/Introduction-Probability-Chapman-Statistical-Science/dp/1466575573/
https://smile.amazon.com/Statistical-Rethinking-Bayesian-Examples-Chapman/dp/1482253445/
https://www.coursera.org/learn/statistical-inferences
Thank you!
Does vegan advocacy really work in reducing global meat consumption? Has anyone tested it?
My point: despite the increasing number of vegans & reducetarians, global meat production & consumption has increased (important exceptions: the US & EU). That’s a problem, since effective altruists advocate vegetarianism etc. in order to reduce animal slaughter. Economic development aside (and the corresponding new demand for animal protein), I wonder if, in the long term, markets adjust prices: for each individual reducing meat consumption, there are many others who increase theirs (because falling prices, due to decreased demand, make meat cheaper), leading the market to a new equilibrium.
So, my question: is there a way to test this? I imagine it could be done with an RCT: we could ask part of a population to stop eating meat for some time and measure whether there is an observable effect on overall consumption.
In the last 30 minutes, I found out that ACE includes considerations about demand elasticity for animal products in its evaluations. It’s not the same as an RCT, but I believe it’s a good enough estimate. I’ll keep the post up, though, in case anyone has similar doubts.
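For anyone curious what that elasticity adjustment amounts to, here is a minimal sketch of the standard partial-equilibrium reasoning (not ACE’s actual model; the elasticity values below are hypothetical placeholders, not real estimates):

```python
# Sketch only: when one consumer stops buying meat, the resulting price drop
# induces other consumers to buy a bit more, so total production falls by
# less than one unit per unit forgone -- but it still falls.

def net_reduction_per_unit_forgone(supply_elasticity: float,
                                   demand_elasticity: float) -> float:
    """Fraction of each forgone unit that shows up as reduced production,
    in a simple constant-elasticity partial-equilibrium model."""
    return supply_elasticity / (supply_elasticity + abs(demand_elasticity))

# Hypothetical placeholder values, not real estimates:
factor = net_reduction_per_unit_forgone(supply_elasticity=2.0,
                                        demand_elasticity=-0.8)
print(f"Each kg of meat not purchased cuts production by ~{factor:.2f} kg")
# -> ~0.71 kg: price adjustment dampens, but does not cancel, the effect.
```

So under this reasoning the market settles at a new equilibrium with lower quantity, not the original one; how much lower depends on the actual elasticities for each animal product, which is an empirical question.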
John: For future open threads, I’d recommend removing this line:
“This is also a great place to post if you don’t have enough karma to post on the main forum.”
The new Forum doesn’t have a karma restriction for creating your first post.
OK. I went ahead and removed it now, so the next person to create an open thread will copy/paste the correct message.
Has anyone reframed priorities choices (such as x-risk vs. poverty) as losses to check if they’re really biased?
I’ve read a little bit about the possibility that preferences for poverty reduction/global health/animal welfare causes over x-risk reduction may be due to some kind of ambiguity-aversion bias. Given US$3,000 to donate, and a choice between (A) saving a life (with high certainty, in the present) or (B) potentially saving 10^20 future lives (I know this may be a conservative guess, but it’s the reasoning that matters here, not the numbers) by making something like a marginal 10^-5 contribution to a 10^-5 reduction in some extinction risk, people tend to prefer the “safe” option A, despite the far larger expected payoff of B. However, such a bias is sensitive to framing effects: people usually prefer sure gains (like A) and uncertain losses (like B’). So I was trying to find out, without success, whether anyone had reframed this decision as a matter of losses, to see if one prefers, e.g., (A’) reducing deaths from malaria from 478,001 to 478,000 or (B’) reducing the odds of extinction (the loss of 10^20 lives) by 10^-10.
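To make the numbers concrete (using the post’s illustrative figures, which are placeholders rather than real estimates), here is a quick check that the gain framing (B) and the loss framing (B’) are matched in expected value:

```python
import math

future_lives = 1e20  # illustrative figure from the example above

# (B)  gain framing: a 1e-5 contribution to a 1e-5 reduction in extinction risk
ev_gain_framing = future_lives * 1e-5 * 1e-5   # expected lives saved

# (B') loss framing: a 1e-10 reduction in the odds of losing those lives
ev_loss_framing = future_lives * 1e-10         # expected lives not lost

print(ev_gain_framing, ev_loss_framing)                # both ~1e10
print(math.isclose(ev_gain_framing, ev_loss_framing))  # True
# Both framings offer ~1e10 expected (future) lives per $3,000, versus one
# life for (A)/(A'), so a pattern of choosing A in the gain framing but B'
# in the loss framing would suggest a framing-sensitive bias rather than a
# considered trade-off.
```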
Perhaps there’s a better way to reframe this choice, but I’m not interested in discussing one particular example (I am, however, concerned about the possibility that there’s no bias-free way of framing it). My point is that if someone chooses A in the gain framing but B’ in the loss framing, then we have a strong case for the existence of a bias.
(I’m well aware of other objections to x-risk causes, such as Pascal’s mugging and discount-rate arguments – but I think they’ve received due attention and should be discussed separately. Also, I’m mostly thinking about donation choices, not about policy or career decisions, which are a completely different matter; however, IF this experiment confirmed the existence of such a bias, it could influence the latter, too.
I’m new here. Since someone has probably already asked a similar question somewhere else (but I couldn’t find it, so sorry for bothering you), I’m mostly trying to satisfy my curiosity; however, there’s a small probability that it touches an important unsolved dilemma about global priorities: x-risk vs. “safe” causes. I’m not looking for karma – though you can’t have too much of it, right?)
Perhaps I should add a caveat: the sensitivity of ambiguity aversion to framing effects is contested by Voorhoeve et al. (philarchive.org/archive/VOOAAF); however, the authors recognize that their conclusion goes against most of the literature.
Does anyone here have opinions on the higher-welfare certification labels for farm animal products, such as “Certified Humane”? (See the ASPCA’s comparison of labels: https://www.aspca.org/shopwithyourheart/consumer-resources/meat-eggs-and-dairy-label-guide). I am particularly interested in egg labels.
Future Perfect put out an article on this recently.
Thanks! For eggs in particular, I’ve concluded that none of the standards require producers to avoid buying hens from hatcheries that cull male chicks, or to ensure that the chicks are culled in a particular way, or to let laying hens live out their lives once they’re past their prime laying age. I’ve also found that there are essentially no hatcheries that don’t cull male chicks, but there is apparently a commitment from 95% of egg producers to stop culling by 2020 by using sex-selection technology to avoid creating male chicks altogether.