Open Thread 2
Welcome to the second open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts. Thanks to an upgrade by Trike Apps, each time you visit the open thread, new comments will now be highlighted!
Should we moderate ourselves to help grow the movement?
I’ve come across this idea before in the EA community and also thought about it a lot myself.
EAs are known to do some pretty “out there” things to maximize their impact. Many of us give far more than 10%. I know someone who lives in a van and is a “freegan” to maximize the amount of cash they have available to give. Personally, I live with my parents and have forgone overseas holidays to leave me extra money to give and save. I know that many people who ask me why think I’m crazy, even after hearing my reasons.
What I worry about is that even if we don’t advocate things like this, people will associate our behavior with effective altruism. Then they may think, ‘well, if that’s what being an EA is, then count me out’.
So I guess I’m asking: should we moderate ourselves while the movement grows? Should we live lives that are as normal as possible, so that more people feel they would like to be part of the movement, and so that more people feel they could become EAs without sacrificing their current lifestyle?
The question is not going to be whether we should moderate ourselves (yes or no), but when and how much. Moderating yourself, consciously or unconsciously, is part of being a functioning human.
Thanks for the feedback everyone!
I think I agree with most of your sentiments. Especially while EA is young and growing I see the importance of maintaining the image of EA as something that anyone can do while living a normal life.
Great question. We should definitely consider these reputational effects as they could be large.
I’d start by thinking about what portfolio might look best from the point of view of helping to grow the movement well. My guess is that this isn’t necessarily having everyone “as normal as possible”, but could well be in that direction from where we are right now.
Then when we’re thinking about deviations, we can think about how large the reputational effect is and how large the benefit is. It’s actually pretty hard to estimate the size of a reputational effect, but we should try. For example, thinking about what the effect might be if the whole movement were doing it might be a reasonable first approximation. In some cases that probably underestimates the costs—if your behaviour is extreme even within the movement, it’s more likely to be picked up on and have a high reputational cost. In some cases it may overestimate the costs—e.g. if the whole movement were vegetarian that might appear weird and discourage growth, but having half the movement be vegetarian probably doesn’t.
It sounds to me as though living in a van is likely erring too far on the side of ignoring reputation.
In your case, it might be that the best thing to do is to quietly continue not to take overseas holidays, but not talk about it much. Or to only take them occasionally. (Of course it could be that the not taking overseas holidays provides a useful talking point and helps more than it hurts—it isn’t obvious to me, but I’m glad you’re at least considering the question.)
I don’t think it’s a matter of reputation as much as a matter of socialization and network building. Humans are at their best when they’re interacting with other humans (generally speaking). If your actions based in EA motivations are hurting your personal relationships or your ability to socialize, either by constraining your living situation or by limiting your social interactions due to cost / time considerations, then they may be doing more harm than good. I think building a strong network of friends and colleagues is one of the highest-leverage things you can do, and shouldn’t be easily discounted for the sake of simply giving as much money as you can.
Similarly, while going overseas for vacation is expensive and bad for the environment, sometimes seeing another place or culture in real life can have a lot of altruism-related benefits.
I don’t mean this to disparage your particular life choices, but rather to say that a “typical” EA shouldn’t be expected to make the same choices, and it shouldn’t be implied that making those choices makes you more effective or more altruistic than somebody else who focuses more on network building and travel, for instance.
First of all, effective altruist organizations, ones that explicitly exist because effective altruism exists as a social movement, make it their shared mission to make people aware that they can be effective altruists while still being totally normal. Not, like, even relatively normal, or sort of normal, but without requiring you, yes you, to sacrifice anything major you wanted out of life[1].
The Life You Can Save, Giving What We Can, and 80,000 Hours exist to do this. The Centre for Effective Altruism incubated all these organizations, and now they’re starting a special project to build the effective altruism movement, and steer its public image, called Effective Altruism Outreach. It’s currently being led by Niel Bowerman. Your comment is one great example of the concerns that Effective Altruism Outreach was specifically started to handle.
Of course, as effective altruism grows, nobody wants it to become so diluted as a set of ideas that anyone can as validly call themselves an effective altruist as anyone else, without actually doing anything. So, the community itself must reach consensus on some standard, and each individual is responsible for holding themselves to it to maintain the integrity of effective altruism. Rob Wiblin covered this in his keynote address at the 2014 Effective Altruism Summit. As a shorthand, ‘anyone giving $10 to Oxfam’ was the hypothetical example of what an overly diluted effective altruism might look like.
The standard thus far seems to be 10% of lifetime income donated to the most effective charity one can find. Now, this isn’t sufficient on its own, so some caveats and distinctions have to be included, such as:
Does this include income before or after taxes?
Personally, I would qualify this commitment: each individual effective altruist should honestly try as hard as they can to figure out what the best charity is, given their best estimates and their personal values, even as these differ from those of others. In practice, such research is difficult, so I would recommend looking for evaluations that are independent of one another, organizationally and across disciplines, and that converge on the same solutions. This is what effective altruism calls cluster thinking, and it’s an epistemology the effective altruist movement is turning toward.
Of course, the 10% figure was chosen as a round number favored historically by the donation standards prescribed by various world religions. How arbitrary it really is, and whether effective altruism should rethink it, may be a challenge posed by the possible future success of The Life You Can Save. If The Life You Can Save, which only asks people to pledge at least 1% of their income, ever hits a critical mass, raising awareness, and money, from more people than any other effective altruist organization has yet reached, it may force effective altruism to rethink how it presents itself.
In a way, the thousands of existing effective altruists, hopefully as many as possible of whom will continue to be effective altruists, will demonstrate a cluster-thinking approach to lifestyle design. Effective altruism is a voluntary movement in which, aside from the narrow and not-yet-well-defined standard above, one can choose to live how one wishes. The commonalities of what makes life more functional for effective altruists will come to light as time passes, so what’s ‘normal’ for effective altruism, while still being ‘normal’ to the rest of the world at large, will become apparent.
I wish more effective altruists would share their personal stories, and how they differ not only in their donations, but lifestyle choices, from others. Luckily, effective altruists don’t even need their own blogs to do so, as they can post them to this forum. I will indeed make a new thread next time encouraging others to do this.
[1] That post was written by Jeff Kaufman about the effective altruist lives he and his wife are leading. They’re earning to give, in addition to raising awareness of effective altruism by building their life with effective altruism in mind, without sacrifice. Since that article was written, Kaufman and Wise have increased how much of their joint income they give from 30% to 50%, while raising their first child. They may be the best existing example of how normal effective altruism can be for a middle-class person.
Does anyone have any thoughts on whether the Ebola outbreak is a unique effective giving opportunity compared to better-studied issues like malaria and schistosomiasis? I tried to do a Fermi estimate here but I don’t trust it further than I can throw it.
I think that the Fermi estimate is a good start, but I suspect that it might be a substantial underestimate of the cost-effectiveness.
The strongest case for the Ebola outbreak to be an outstanding giving opportunity seems to be that the outbreak might grow a long way, and intervening now could be an easy way to help containment and give outside agencies enough time to get a proper plan into action.
Perhaps there’s a story where we’re en route to discovering a cure or vaccine, but we’ll hit 1 million deaths (say) before this happens, and the disease will still be on its exponential growth curve at that stage. Then it might be pretty cheap to slow the whole thing down by 1% today, but that could translate to 10,000 fewer deaths at the point where the cure or vaccine comes in.
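The arithmetic behind that claim can be sketched in a few lines. All numbers here are illustrative assumptions chosen only to match the shape of the 1-million-deaths scenario above, not real epidemiological figures: if cases grow exponentially until a cure arrives at a fixed future date, shrinking today’s caseload by 1% shrinks the cumulative total at that date by roughly 1% as well.

```python
# Toy Fermi model: unbroken exponential growth until a cure arrives at a
# fixed date. All parameters are hypothetical, for illustration only.

def deaths_at_cure(current_deaths, doubling_time_days, days_until_cure):
    """Deaths at the moment the cure arrives, assuming exponential growth."""
    return current_deaths * 2 ** (days_until_cure / doubling_time_days)

# Assume 4,000 deaths today, a 30-day doubling time, and a cure in 8 months.
baseline = deaths_at_cure(current_deaths=4_000,
                          doubling_time_days=30,
                          days_until_cure=240)
# Slowing the outbreak by 1% today scales the whole curve down by 1%.
slowed = deaths_at_cure(current_deaths=4_000 * 0.99,
                        doubling_time_days=30,
                        days_until_cure=240)

print(round(baseline))           # deaths when the cure arrives: 1,024,000
print(round(baseline - slowed))  # deaths averted by the 1% slowdown: 10,240
```

Under these toy numbers, a 1% reduction today maps to roughly ten thousand fewer deaths when the cure arrives, which is the shape of the argument above; the caveats in the next paragraph (the slowdown also delaying the response, or growth no longer being exponential) are exactly the ways this simple model can break.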
I don’t really think that story has high likelihood, particularly as perturbations can lessen its impact—if Ebola spreads slower today, perhaps that will also slow down the efforts of people who would eventually deal with it; or if it’s no longer in exponential growth phase when we get a solution then the effect will be much smaller. But this looks like a scenario where the tail benefit could dominate, so I’d want to take the possibility seriously if looking at this.
If this is correct then early interventions could be quite a bit better (in expectation) than later ones. My best guess right now is that it’s still not good enough to target, though.
Are young people really more idealistic than older people? More young people attend protests but more older people participate in lobbying, fund political parties, and provide most funding for charities. Perhaps a large fraction of what is going on just relates to older people possessing different kinds of resources from young people. Do you agree or disagree?
Many young people I know basically treat protests as a recreational event.
This doesn’t surprise me. I have friends who treat protests very seriously, and though I can’t think of any offhand, it wouldn’t surprise me if some of my friends treated them as a recreational event either. Having been activists and philanthropists ourselves, during or prior to our lives as effective altruists, I imagine lots of us have friends who are involved in protest movements, to the point that we, or they, might call them(selves) ‘career activists’. That is, especially if you live in a metropolitan area, some people will become involved in multiple public advocacy causes and treat activism as part of their lifestyle, or as a part-time job.
For people treating it more like a hobby than a part-time job as they integrate it into their lifestyle, a protest may well feel like a recreational event. For some activists, it’s easy for me to imagine protests being both a recreational event and serious business, much as a ‘work function’ (an office-sanctioned party with prospective clients present) is for white-collar professionals.
It’d be interesting if there were greater discussion of ‘career activism’ on this forum, as people who live such a lifestyle are the closest set of people I’m aware of who claim to dedicate their lives to doing good as seriously as effective altruists do, without actually being part of effective altruism. Of course, there is serious overlap between these two lifestyles as well.
I think old people just have more resources than young people, so they give less as a proportion of their resources.
Alternatively, you might think old people have had a lot of time to develop commitments to various causes, and so feel obligated to give more.
Back when I was involved with party politics, I heard someone mention that pensioners with basically unlimited free time were a really major asset for the campaigns of the older and more established parties.
I can’t find it right now, but Leah Libresco recently posted on Unequally Yoked about her hypothesis that her life would be richer with a larger demographic catchment in her regular social life and circle. Between that, your comment above, and the idea that ‘life experience’ can be tabooed as ‘epic procedural knowledge gaps between you and your elders’, I’m pondering: why aren’t we getting more of the older generations to join effective altruism?
I don’t actually want effective altruism to have an implicit ageism bias, and if it does, I hope this thread results in some proposals for resolving it.
Do you have figures for this? They could be relevant for EA groups which focus on raising funds (like local groups with older members, or older people in their social networks).
Here is a popular article on the topic. The reason for fundraising from younger people would be that they may be more willing to donate to a new charity.
I agree, because people of all different demographics not only have different kinds of resources, but different opportunities.
Young people are more likely to have time to attend protests and involve themselves in activism. It’s easier to schedule those events around a shift at your part-time job, or the single class you have that day, than around a business meeting crucial to your career, the sort of which becomes more common in your thirties. There’s also a lower social-status cost when young people do things that look weird and deviate from societal norms, which tend to be set by an older, possibly less active, generation. At a younger age, what individuals lack is soft power and influence. After all, if they could change things without protest action, surely they would try to do so more readily.
Of course, the older generation, between the ages of 30 and 50, is hardly inactive. They just focus on different things, because they have different motivations. This is a time when, keeping up appearances to stay ahead in the rat race, one cannot take as many personal risks without risking career capital. People of this demographic are also more likely to have a large(r) family of dependents, such as young children and elderly parents. In addition to the high constraints all this may place on time and money, the fact that this is family may cause people to focus less upon distant others. This is understandable. People of this cohort are still altruistic, though. However, being focused on their career development, family, and community, they’re more likely to support causes that have been brought to their attention by their default circle of influence. They may fund donation drives at their children’s school or at the workplace, they may volunteer locally, and they may count the community involvement they do for fun as altruistic in its own right. Also, because they have so much to care for, people of prime working age may be minimizing extra costs, such as donated time or money, to hedge against unknown risks like family emergencies.
As people age, their children and other family members are no longer dependent upon them. After retirement, people have more time. Even in the later parts of one’s career, before retirement, one may own a business, or be in a position that doesn’t demand extra effort beyond the requirements to stay afloat or get ahead. Elders have more money to donate to political causes, but they also leave legacy gifts, endowments, and the like. This happens at a time when they’re more respected in society as elders, and may have more soft power anyway. Additionally, they either have more money than they’ll ever need, or they at least have greater confidence that the money they have saved will support them for the remainder of their lives. Finally, they may think about supporting grander causes in a broader way than they did earlier in life, as they think about how they’ll want to be known and remembered.
The model above is a hypothesis for a trend. Of course, there will be outliers. Mark Zuckerberg is influencing the world in ways many people in their twenties couldn’t have dreamed of while still trying to get on the career track. Conversely, it would be crass of us to just assume the typical senior is sitting on money they could give away, when many pensioners are themselves in need. Indeed, I believe that increasing the healthspan of seniors could be considered one of the most effective altruistic opportunities available, given the right research and application of it. Anyway, this model seems simple enough to verify or falsify by checking the right sort of data.
What I’m trying to show is that it seems intuitively plausible that everyone has ideals and everyone cares, but society drives people to care about different things in different ways over the course of their lives. To a parent, ‘ensuring my child grows up happy, healthy, and safe’ may be an ideal felt as strongly as the ideals of a young activist, as yet without children, pushing for greater equity or equality in society. Cynicism about any particular generation doesn’t need to be rejected on the basis that it’s unattractive; rather, the cynical claims, that older generations don’t care or that young people are too naive, may simply not explain as much as this other model does.
Ryan, if you think this hypothesis may hold weight, where could someone access the data to check it? If not you, would someone at 80,000 Hours know? I feel that if we could figure this out, it could be a message that is not only more optimistic but also truer, one that, e.g., Will MacAskill could put into his book. For people of any age, effective altruism will be non-conformist. If we can paint a positive picture that everyone cares, and wants to care, about the world, but is just socially pushed in different directions, people new to effective altruism might realize that they can get off the hedonic treadmill anytime they like, rethink where they focus their resources for doing good, and find support in others.
I’m not sure. I think the data you want is who donates and volunteers more, and how does this relate to age and income? Maybe census data would help?
Yeah, that’s the sort of data I’m thinking of. Honestly, I’m not very thoughtful when it comes to these things. Thanks for the pointer.
There was a great thread in the Facebook group on whether people making a modest wage (around or below $30k/yr in US terms) should be donating to effective charities or saving money. I’d like to weigh in on this, but that thread is already pretty crowded and unstructured.
The proposition here is “People with average or below average income should save money rather than donate to effective charities”
One thing that it looks like almost nobody mentioned is the opportunity cost of worrying about other people over yourself, and how this corresponds to effective altruistic output. It seemed from Facebook that most EAs were against the proposition, claiming that most people in the developed world are still far better off than X% of the global population and therefore should still be donating some percentage of their wealth. I believe there is a strong case to be made that focusing on optimizing one’s own career capital, not just making smart personal finance decisions, will enable one to earn a substantially higher income in the future and thus be a more “E” EA. The intellectual effort devoted to understanding the EA argument (doing the relevant research, picking an EA organization or EA-organization-recommended charity to donate to, and “stretching your EA muscles” by donating a small amount at regular intervals) is a small investment in terms of money but a large investment in terms of intellectual capital, one that I think dedicated EAs tend to discount because they have already invested it. This is a CFAR-esque argument that advocates focusing on personal development and improvement until one is at a level to reasonably maximize one’s own output, both in terms of income and the effectiveness of donations.
I am still uncertain about my position in this debate, but it seemed that most EAs (at least on Facebook) were strongly against the proposition, so I would like to see more discussion taking the above points into consideration.
I was involved in the initial Facebook thread on the topic. At the time, I made less than $30k, didn’t ever expect to make much more than $30k (I’m a nanny), and was highly turned off by the conversation.
Two cross-country moves later, I have actually doubled my income, but I still am highly turned off by elitist EA conversations that assume that all the readers are high-potential-earners in their 20s with strong social safety nets.
It would have been much easier to convince me to donate 10% of a $30k income, than to upend my life in order to make some kind of career change.
“I still am highly turned off by elitist EA conversations that assume that all the readers are high-potential-earners in their 20s with strong social safety nets.”
Sure, but I think your use of the term elitist is a bit unfair here. I personally know many friends who view my own identification as an EA as itself elitist, because by trying to help with things like alleviating global poverty through targeted donation, I am putting myself on a pedestal above people living in the developing world (or so the argument goes). To these friends, it’s less elitist to focus on pursuing your own happiness than to think you can solve other people’s problems better than they can. Maybe this is why I am arguing for this angle of attack; I have friends with different off-putting triggers than yours.
I agree that we shouldn’t make broad generalizations about EA demographics, but at the same time we shouldn’t misrepresent them; I would wager that a large number, if not the majority, of prospective EAs fall under the high-potential-earners-in-their-20s demographic, and this is very relevant when discussing how to advise people who are just getting into EA. I definitely agree that the same advice won’t work equally well for every person, and sometimes it’s correct to give two different people completely opposite advice. That being said, if I had to give one piece of advice in a generalizing way, I would want to consider the demographic I am giving it to, rather than assuming it is directed at the median US citizen, for instance.
I think this is what Ryan is saying, but I want to say it again and say more, because I feel strongly and because Ryan left a lot of inferential distance in his post.
I dislike the idea that EA is mostly attractive or mostly applicable to its current dominant demographic of math/econ/tech-interested people in their 20s. I think the core ideas of EA are compelling to a wide variety of people, and that EA can benefit from skills outside its current mainstream. It seems likely to me that the current situation is more the result of network effects than of EA not being interesting to people outside this cluster.
Catering our “general” advice to only one sort of person makes it more likely that other types of people will feel lost or unwelcome and not pursue their interest in EA; I take it Erica has felt this way. While the statement Alex made in his last paragraph is reasonable as stated, we are not in the position of only being able to give one piece of advice.
Do you have any idea how we might go about fixing the situation? It seems to me like math/econ/tech people in their 20s (including me) don’t know what it takes to make other demographics feel welcome. The best thing I can think of is encourage some people from other demographics to write about and actively discuss EA, and to spread the writings of people who already do.
That’s a really good question, and as another 20-something in tech, I also definitely don’t have all the answers. I have an in-progress draft of a post more generally on outreach, to be posted somewhere (not sure where), but I’ll briefly list some of my thoughts directly related to making a wider variety of people feel welcome.
Expand our models of EA dedication beyond earning to give. This model doesn’t fit most people well, but it’s by far the most prominent idea of what living an EA life looks like.
People want to see people like them in communities they’re part of (I don’t endorse this state of affairs, but I think it’s often true). This may seem discouraging, because it most obviously says “to get more of x type of people, you need to already have x type of people.” I think it’s not totally unactionable though—if cultural minorities make themselves more visible by posting and commenting on the forum, coming to meetups, etc., new people in the same cultural minorities will see them and know they are welcome.
Do your best not to assume that people are in your cluster. The career advice example is a good one. Another example is to explain math or econ jargon when you use it in a post. I think this has an outsized effect. The experience of being in a community but having the content aimed at different sorts of people is a little like going to a social dance and having no one ask you to be their partner—it’s hard to believe that you’re wanted, even when people keep telling you so. And it feels really crummy.
Note that I don’t know anyone who has said that they were interested in EA but felt unwelcome there. I think at least part of it is that EA is something that very few people outside of this cluster have even heard of, much less have taken steps towards getting involved in.
I definitely agree, and as a result I wouldn’t cater my advice to only one sort of person. I think it’s best to take an approach where you change the advice you give based on who you are talking to. Perhaps we should have some sort of portfolio of starting advice to give based on simple diagnostics. I’m sure 80,000 Hours does something like this, so it’s not new ground. I think this is way better than saying “everybody should donate 10% of their income right now if you can afford it, or you’re not a real EA.” And yes, some people have said this. I find it to be a huge turn-off personally.
ruthie: “It seems likely to me that the current situation is more the result of network effects than that EA is not interesting to people outside of this cluster.” I’m not sure I agree with this. I know surprisingly few people who are both actively altruistic and who actually think critically and examine evidence in their everyday lives. I wish this were everyone, but realistically it’s not. I do believe there are a ton of people who would be interested in EA who haven’t discovered it yet, but I think that the people who will ultimately be drawn in won’t be totally deterred by the fact that a lot of the info is catered to demographics not exactly like them. Especially since there is such a large range of socioeconomic statuses a person could occupy, and each one might have a totally different EA approach that works best for them (and I’m not even talking about cause selection yet).
What if somebody has no interest in donating, but they are interested in career choice? Or interested in lifestyle change? Or interested in saving, researching, and donating later? Or interested in advocacy? Or interested in personal development? There are a lot of options, and I think telling everyone the blanket advice “just start donating now to GiveWell’s top charities and don’t worry about the meta stuff” will turn off many people in the same way that “focus on yourself until you have more income leverage” might turn people off. I haven’t seen any real evidence either way, just some armchair arguments and half-baked anecdotes, so I don’t understand why everyone is so confident in this.
Are you sure you’re sure? I don’t mean to nitpick, but unless someone from 80,000 Hours has shown or told us, and they’re reading this, we don’t know for sure. I was going to write something to this effect, but your framing of the idea is even better, so 80,000 Hours should be asked directly.
The thing about effective altruism is we don’t need preexisting status to have organizations pay attention to us. They pay attention to our merit, arguments, mettle, and records.
Although that can be self-perpetuating. For example, few would be willing to bite the bullet and say that they should give male-focused advice if 60% of effective altruists were male.
[tangent] Have you tried describing GiveDirectly to these friends, and if so how did they react?
I think that if the Standard EA Recommendation for middle- to low-income people is “come back when you make more money”, no middle- to low-income people (to a first approximation) will ever become interested in EA.
I think if I made 30k a year and asked someone what EA-related things I could do and they told me “you don’t make enough to worry about donating, try to optimize your income some more and then we’ll talk,” my reaction would be “Ack! I don’t want to upend my entire life! I just want to help some people! These guys are mean.” And then I would stop paying attention to effective altruism.
My general heuristic for stuff like this is that it’s more important for general recommendations to look reasonable than for them to be optimal (within reason). This is because by the time someone is wondering whether your policy is actually optimal, they care enough to be thinking like an effective altruist already, and are less likely to be scared off by a wrong answer than someone who’s evaluating the surface-reasonableness.
Agreed—and there are plenty of ways for people to contribute to EA besides donating. Writing articles, helping organize EA events, and offering support and encouragement to people who are working on more direct things are just the three first things that come to mind.
Any large group working on something needs both people working directly on things, and people who are in support roles and take care of the day-to-day needs of the organization. The notion that all EAs should be working directly on something (I’m counting earning-to-give as “working directly on something”, here) seems clearly wrong.
I think we can both agree that the way you say things is very important. Saying “come back when you make more money” is very different from saying “if you are interested in helping people as effectively as possible, it may be wise to consider looking out for yourself first before turning your motivations outward.” There are a lot of reasons for people to worry that their lives are too good in comparison to others’, and therefore that they have a moral obligation to help. I think a lot of EAs have felt this way before. When faced with this sentiment, I think it can be a mistake, with regard to actually being effective, to devote significant effort to explicit donation rather than personal development.
I think you are also framing the argument to make “making more money” sound like a bad thing that most people don’t want to do. A lot of people already want to make more money, and they feel a conflict between trying their best to become successful VS using the resources / leverage they already have to help others. My argument is that focusing on personal goals and development could kill two birds with one stone for a lot of people and I don’t think it’s as off-putting as you make it sound.
Speaking from my own experience, I have a very high propensity to think about others before myself and I think this can be a flaw and limit productivity in many ways. I think I would ultimately be a more effective altruist if I had spent more of my time pondering “how can I become really good at something / develop valuable skills” rather than “how can I do the most good.”
True, but a lot of people are also struggling just to find a job that would be both enjoyable and provide a sufficient wage to pay the bills. Emphasizing making more money could cause them to feel a conflict between finding a job that doesn’t feel soul-crushing vs. feeling guilty about being unable to donate much. (Full disclosure: I feel a bit of this, since the career path that I’m currently considering the most isn’t one that I’d expect to make a lot of money.)
“True, but a lot of people are also struggling just to find a job that would be both enjoyable and provide a sufficient wage to pay the bills.”
Agreed, so in that context, how does it make more sense to tell somebody that they should care about helping other people as much as they possibly can? I don’t see that train of thought getting through to many people in this situation.
I don’t want to tell anyone that they should care about helping as many people as possible. I want to tell them that they have a fantastic, exciting opportunity to help lots of people and have a big impact on the world, if they want to.
Someone who is struggling to find a meaningful job might also be someone who’s struggling to find some purpose for their life in general. (This has been true for me.) That might make them exceptionally receptive to a cause that does offer such a purpose.
Yes, this seems right. A lot of people who could usefully contribute to effective altruism seem turned off by moralisation. And some effective altruists are demotivated by it. It’s generally pretty easy to make a point about how people can help without using the words ‘should’, ‘ought’, or ‘obligated’. I think it’s better to engage our intuitive and emotional minds with this talk of excitement.
A strong consideration in favour of donating on that sort of modest wage is that it gets you into the habit of doing so, rather than keeping on putting it off until you’re richer. It also makes you better able to influence others to donate.
Also, an admittedly cursory look at Wikipedia suggests that the median adult income in the US is $24k/year, which’d suggest that telling these people not to donate would exclude half of all adults. (I expect that $24k/year is not the most relevant figure to use here, but the point stands that it’s easy to underestimate how wealthy we are even compared to others in the developed world.)
Yeah, this is a problem in gauging the majority opinions of effective altruists on anything. The best assessment of that will come with the results of the 2014 effective altruism survey, which are being processed now. Even so, issues like this are still too new, specific, and narrow within effective altruism for a reliable record of consensus to be known. The issue is that the people who post or comment regularly in the Facebook group select themselves to be people who have fun discussing conundrums in giant forums. I am like this. Notice how I am commenting on everything rather than spending my time earning more money and then giving it away.
I would estimate that only 10% of effective altruists regularly discuss it on social media, and I don’t believe the few major perspectives put forward in any one discussion thread can reliably be thought of as representative of all positions effective altruists might take on that discussion. I believe curbing this issue is, in part, the reason this forum was started.
Context here (fixed)
I’m not sure if this is what Ryan meant to link to or not, but here’s the Facebook thread on donating on $30k/year that Alex refers to in his original comment.
I’m thinking about what kinds of material newcomers to EA should be exposed to. What are some of the basic conceptual tools that are useful for thinking about EA, and evaluating the effectiveness of different interventions/career paths/charities?
I’m thinking about stuff like:
Basic economic concepts: expected utility, opportunity cost, fungibility, various marginal concepts (e.g. marginal cost, marginal usefulness), diminishing returns.
Scientific concepts: control groups, randomized controlled trials.
Well-being-related concepts: quality-adjusted life years.
Heuristics and biases: scope neglect, motivated cognition, confirmation bias, affect heuristic.
What else?
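As a toy illustration of how a couple of these concepts interact in practice (expected value combined with diminishing returns), here’s a minimal sketch. All the numbers and the particular diminishing-returns curve are invented for the example:

```python
import math

def charity_value(donation, scale=1000.0):
    # Toy diminishing-returns curve: each extra dollar helps a bit less.
    # (The log form is an assumption for illustration, not a real model.)
    return math.log(1 + donation / scale)

# Expected value of an uncertain intervention: a 10% chance the charity
# can absorb $10,000 usefully vs. a 90% chance only $1,000 is useful.
expected = 0.1 * charity_value(10_000) + 0.9 * charity_value(1_000)
certain = charity_value(1_000)  # the guaranteed smaller option
print(expected > certain)  # the uncertain option wins in expectation here
```

The point isn’t the specific numbers, just that newcomers benefit from seeing the concepts combined in a concrete calculation rather than presented as isolated vocabulary.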
These are good well-established concepts. I have some suggestions that are styled more as subjective advice that I would give to newcomers based on my experience.
System 1 and System 2 in applied rationality: people often have low motivation if their intuitions conflict with their analysis. If you’re pretty sure something is correct, it’s good to support it with emotional drivers like friendship or chocolate.
Signalling: if you’re trying to model why people do good, a lot of it can be explained by assuming that they are trying to make themselves look good. It seems like it might be a major driver of charitable behaviour.
Pitfalls of expected value thinking: we’re not perfect reasoning machines, and we shouldn’t try to be. If you try to evaluate the expected value of everything, then you might spend too much time paralysed, or you might become untrustworthy. It’s important not to just act in a way that would make you look like an extreme effective altruist but to act in a way that will make you a good ally too. (cf signalling)
Virtue ethics: If you want to do good, it often pays to get into a habit of doing good. This means practising being nice, being generous, being successful, being thoughtful, being scholarly, being loyal, and so on. Not at the expense of thinking independently, but in order to coexist happily with most of your neighbours most of the time.
Thanks, this is good advice.
One additional piece of advice that I might mention relating to these two points: it’s fine to act out of selfish motives. If you realize that you’re actually working on some altruistic project because you want to gain status, get social approval, make a good impression on people of your preferred sex(es) - great! If those motives cause you to work harder on worthwhile projects, then there’s no point in beating yourself up for being human and caring about yourself as well. Just be honest to yourself about your motives, whatever they are.
+1
Often we tell ourselves that we’re doing something for an altruistic reason, and don’t realize until later that we were really doing it for what look like very selfish reasons. Also, human brains are enough of a kludge that we can act for different reasons at once. Cooperation is helping others in a way that also helps oneself: it’s both selfish and altruistic.
Net present value—even some EAs do not understand this idea.
Statistics concepts—especially regression to the mean.
Inside view vs Outside view.
I had difficulty wrapping my head around the latter two when I first heard of them, and I am one who doesn’t understand the idea of ‘net present value’ (yet). I am a data point that Larks is right; please update in the direction of caring about these things more.
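For anyone in the same position on net present value, here’s a minimal sketch of the idea: money received later is worth less than money received now, discounted at some rate per period. The cash flows and the 5% discount rate below are made-up illustrative numbers:

```python
def net_present_value(cash_flows, discount_rate):
    """Discount a series of future cash flows back to today's value.

    cash_flows[0] is received now, cash_flows[1] in one year, and so on.
    """
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# A project paying out $100/year for 3 years, starting next year,
# discounted at 5%/year, is worth less than $300 today:
npv = net_present_value([0, 100, 100, 100], 0.05)
print(round(npv, 2))  # roughly 272.32
```

The same logic is why “donate now vs. invest and donate later” arguments hinge so heavily on what discount rate (and investment return) you assume.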
Some others and I are interested in creating an Effective Altruism video that can serve as the go-to introductory video for EA. Does anyone know of examples of videos that we might want to emulate?
Also not answering your question but Joey and Xio dabbled in this project previously, and you will want to check out their effort.
I know this isn’t answering your question, but I think Mihai Badic was working on making a high-quality intro video at the EA summit and retreat earlier this year, so it would be worth talking to him.
What about something like this for introducing the idea of comparing causes?
What do effective altruists think about population ethics? I asked about this on Slate Star Codex, and got the impression that there’s too much disagreement for there to be an Official Position about this. I’m asking again here since I want to know what the general range of opinions on this is. Do you think that the number of future lives should be valued systematically, and if so, what sorts of future lives do you think we should:
Pay to add?
Be indifferent to adding?
Pay to prevent adding?
Nick Beckstead’s thesis “On the Overwhelming Importance of the Far Future” deals thoroughly with these questions from the perspective of Effective Altruism (albeit within the framework of a Philosophy PhD). See especially chapter 4.
http://tinyurl.com/BecksteadFuture
Working through the thought experiments he presents and seeing the different unintuitive consequences of each theory changed my mind: I had strong intuitions that creating extra happy lives had no moral value, but I’m now convinced that doesn’t make sense. I also agree with Ryan that the question becomes less about what is worth adding and what isn’t, and more about what we fundamentally value and whether that will be increased.
Reproduction can’t be morally neutral.
Imagine a thought experiment where you have to push exactly one of three buttons:
a—a person is created from thin air and tortured horribly for 1,000 years, then vanishes
b—nothing happens
c—a person is created from thin air and lives in unimaginably intense bliss and subjective freedom for 1,000 years, then vanishes
I can accept someone saying there should be no laws that mandate or ban reproduction for various practical and political reasons.
But I can’t take someone seriously who says it’s morally neutral which button you push in the thought experiment above.
This is an argument that I’ve previously made, but I can’t recall ever seeing anyone else ever make it. I wish you hadn’t deleted your account so I could see who you were!
It was Hedonic_Treader.
I was too lazy to specify that I was talking about the world as it is.
A couple might have a third (or first, or...) child, or they might not. I can accept that the two possibilities lead to slightly different total or average utilities, but as I said, I am not utilitarian on this point. I think we just allow people to choose how many children they have, and we build the rest of ethics around that.
I think in the world as it is, allowing people to choose how many children they have is exactly the utilitarian thing to do.
Of course, there are forms of persuasion other than coercion. Some ideas like liberal eugenics have world-improvement potential imo.
I’ll write more on this but for now, I’ll just state my beliefs without really explaining why I believe them:
Asking ‘how should I feel about adding a life?’ is the wrong kind of question to ask
The right question to ask is ‘what is valuable in life, and will this be present in that person’s life?’
It’s not clear that we have a good concept of identity that persists over time but separates people
We should evaluate actions based on their impact on people’s lives, treating all people equally—present or future (or possibly past, depending on your moral system and philosophy of physics), local or far away, existent or not-yet-existent.
All population ethical systems have some unintuitive consequences. It’s a matter of picking the best. (or formulating a compromise)
If you try to make a bonus or a penalty for each life created, then you get weird results, and it becomes even less intuitive.
To the extent that we take a hard-line utilitarian view, we should judge the creation of lives purely based on the life that is created, without bonuses or penalties.
To me, the decision (freely made) to have children is morally neutral—I am not utilitarian on this topic.
Birth rates usually fall substantially as female education levels rise and women become more empowered generally. I would be happier about the world if countries that currently have high birth rates saw those birth rates fall thanks to better education levels etc. The sort of drastic fall in birth rates seen in, e.g., South Korea and Iran was caused by large society-wide changes, and I don’t think it’s likely that as an outside donor I can do anything to help bring about similar society-wide change in, e.g., Nigeria.
But improved access to contraceptives and family planning information help at least some couples choose to have fewer children, and that is something that I would plausibly donate towards. (I don’t know what sort of cost-per-unwanted-birth-averted figure I’d need to prefer a donation to, say, Marie Stopes over a donation to SCI, but it’s something I would carefully consider if I did see those figures.)
I can’t think of any realistic cases where I would pay for extra people to be born.
Bernadette Young wrote a great post on this decision (as made by individual parents) here.
For my part, I think that it’s healthy to have some parts of your life which you dedicate to doing what seems morally best, and some which you treat as personal, and that having kids should clearly be treated as personal (i.e. you shouldn’t agonise about whether it’s morally optimal). And I say that as someone who probably doesn’t want kids myself, a position that’s informed but not determined by ethical concerns.
Here’s my own opinions:
I think that there’s no essential difference between making someone in the future better off and changing who’s born in the future so that those who are born are better off. “Person A with good quality of life vs. Person A with poor quality of life” isn’t very different from “Person A with good quality of life vs. Person B with poor quality of life”, because it shouldn’t matter whether the two lives are “different people” or not.
Given (1), causing a person with poor quality of life not to be born and independently causing a person with good quality of life to be born combine to give a very good outcome, so at least one of these changes should be very good, since the benefits of independent interventions should be additive. This doesn’t determine the benefit of any individual intervention, though. You could think that it’s not important for more good-quality-of-life people to be born and very important for fewer poor-quality-of-life people to be born, or the reverse, or something in between. I don’t think there’s very strong arguments for which one of those you should choose, but I would personally value future lives fairly highly.
“Pure” population interventions like increasing the availability of contraception or funding infertility treatments are not very cost-effective right now, compared to “indirect” interventions. Preventing fatal diseases increases population, while economic development decreases it, and both of these things have very good non-population effects as well. Your opinion on population ethics might affect your choice between the two, however. I personally favor health interventions since I value future lives relatively highly.
There should be more research on the costs of changing population size—currently, it’s hard to estimate how difficult it is to change population in location X by amount Y.
Effective altruists should think about population ethics more quantitatively. There are lots of statistics available about health, GDP, income inequality, population, etc., but I haven’t seen any discussion of tradeoffs between population and the other metrics (e.g. what % population change, either positive or negative, is required to compensate for a 5% reduction in per capita GDP or a 1 year decrease in life expectancy?). Even if you don’t care about the global population per se, the population size of individual countries is still important! If a country has life expectancy 10 years higher than average, adding one person to that country increases global life expectancy just as much as lengthening one person’s life by 10 years does.
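To make the kind of tradeoff asked about above concrete, here’s a toy calculation. It rests entirely on one contestable assumption invented for the example, that per-person welfare scales with log income and total welfare is population times that; a different welfare model would give a different answer:

```python
import math

def total_welfare(population, gdp_per_capita):
    # Toy model (an assumption, not a claim): per-person welfare scales
    # with log income; total welfare is population times per-person welfare.
    return population * math.log(gdp_per_capita)

# Under this model, what % population increase compensates for a 5% fall
# in per capita GDP? Baseline numbers are arbitrary.
base = total_welfare(1_000_000, 10_000)
# Solve population * log(0.95 * 10_000) == base for population:
needed = base / math.log(0.95 * 10_000)
print(f"{(needed / 1_000_000 - 1) * 100:.2f}% more people")  # about 0.56%
```

The surprisingly small answer is an artifact of the log model, which is exactly the point of the parent comment: without agreeing on a model, these tradeoffs can’t even be estimated.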
I agree with these.
This is interesting. It’s the sort of thing that I think Robin Hanson or Tyler Cowen might have an opinion on, and one could easily ask them.
Excellent news.
A promising idea in macroeconomics is that of NGDP level targeting. Instead of targeting the inflation rate, the central bank would try to maintain a trend rate of total spending in the economy. Here’s Scott Sumner’s excellent paper making the case for NGDP level targeting. As economic policy suggestions go, it’s extremely popular among rationalists—I recall Eliezer endorsing it a while back.
At the moment we have real-time market-implied forecasts for a variety of things: commodity prices, interest rates, and inflation. These inflation expectations acted as an early warning sign of the Great Recession. Unfortunately, at present there does not exist a market in NGDP futures, so it’s hard to get real-time information on how the economy as a whole is doing.
Fortunately, Scott Sumner is setting up a prediction market for NGDP targeting in New Zealand. A variety of work, including some by Robin Hanson, suggests that even quite small prediction markets can create much more accurate predictions than teams of experts. The market is in the early stages of creation, but if anyone is interested in supplying technical skills or financial assistance, this could potentially have a huge payoff. Even if you don’t want to contribute to the project, you could participate in the market when it launches, which would help improve liquidity and aid the quality of predictions.
A few EA types have already donated, and Scott quickly raised his initial target of $30k within a day or so, but it’s plausible that costs might be higher than expected.
Do we allow links in the open thread? If so, here’s a nice one by Bryan Caplan.
Ryan Carey is the only moderator, and there are maybe only 100 other people using this forum so far, so trying out something new is like a feature request that you can implement yourself. Also, forum norms here seem to be an even nicer version of norms on Less Wrong.
Also, for the record, I too think Dr. Caplan makes a nice point. There’s a Facebook group I shared the link with, called ‘the Economics of Doing Good’. It’s the discussion group for the economics by effective altruists, and it’s drawn a disproportionate number of economics scholars/enthusiasts. There’s some discussion of the link there, so if you’re on Facebook, join the group, and we’ll see you over there!
A poem with an effective altruism bent that I have in my Evernote. I don’t have attribution as to where I found it online first, but it is called “The Exposed Nest”, by Robert Frost.
You were forever finding some new play.
So when I saw you down on hands and knees
In the meadow, busy with the new-cut hay,
Trying, I thought, to set it up on end,
I went to show you how to make it stay,
If that was your idea, against the breeze,
And, if you asked me, even help pretend
To make it root again and grow afresh.
But ’twas no make-believe with you to-day,
Nor was the grass itself your real concern,
Though I found your hand full of wilted fern,
Steel-bright June-grass, and blackening heads of clover.
’Twas a nest full of young birds on the ground
The cutter-bar had just gone champing over
(Miraculously without tasting flesh)
And left defenseless to the heat and light.
You wanted to restore them to their right
Of something interposed between their sight
And too much world at once—could means be found.
The way the nest-full every time we stirred
Stood up to us as to a mother-bird
Whose coming home has been too long deferred,
Made me ask would the mother-bird return
And care for them in such a change of scene
And might our meddling make her more afraid.
That was a thing we could not wait to learn.
We saw the risk we took in doing good,
But dared not spare to do the best we could
Though harm should come of it; so built the screen
You had begun, and gave them back their shade.
All this to prove we cared. Why is there then
No more to tell? We turned to other things.
I haven’t any memory—have you?—
Of ever coming to the place again
To see if the birds lived the first night through,
And so at last to learn to use their wings.
Edit—ok it is very hard to get poems to display right in the text editor
Hey everyone! I’ve been very busy the last couple of weeks, so I haven’t been able to make those open and special threads I mentioned in my post. Luckily, it seems as though others have taken it up. Thanks to Ryan Carey, Diego Caleiro, Peter Hurford, Tom Ash, and Kaj Sotala for keeping things going. The importance of an online community is often understated. This forum wouldn’t exist, for example, if Facebook hadn’t provided us with a place to rally as allies in the first place, and greater collaboration on reviews of strategies, and new collaborations between effective altruists, will take place sooner than they would without this forum.
Could giving good vegan food to poor people compete with other effective charity?
The idea: A charity develops or purchases nutrition-complete vegan food, ships it to areas of global poverty, and distributes it to the poorest for free.
The positive impact would be: Improved practical knowledge and demand to make nutrition-complete affordable vegan food, improved nutrition and purchasing power for the poor, reduced farm animal use without appeals to values or emotions toward animals (because the incentive is in-built into the wealth transfer).
It might be less effective in pure wealth transfer than, say, GiveDirectly, but the other upsides could make up for that.
I didn’t see a charity quite like this listed. (A Well-Fed World advocates an approach like this, but supports a diversity of groups and it’s not clear how homogenous and scalable they are.)
Would this have a drawback of disrupting local farming/food economies?
Could you explain the causal mechanism you have in mind? It seems that such a charity would increase overall demand for vegan food, because it is buying some, but reduce demand from everyone else, because some people who would otherwise have eaten vegan food will instead just take from this charity.
Also I think it would be good if you could outline some thoughts about possible disadvantages of such a charity.
The beneficiaries of this charity would eat more food than they otherwise would (assuming the charity targets really poor people), and more of it would be vegan, percentage-wise. Since using good, affordable, and nutrition-complete food would be an explicit goal, donations would have the welcome side-effect of incentivizing such R&D, as well as economies of scale.
I think this is a general argument for eating nutrition-fortified vegan products, but they tend to be high-cost organic life-style products, and I’d prefer to help make them ready for affordable mass consumption.
Opportunity costs if it’s not optimal. Then there’s the argument from wild-animal suffering and ecosystem displacement, that is, that meat consumption helps destroy the environment faster and therefore prevents more animals from suffering. And speculative acceptance problems or PR backlash if, say, the food isn’t healthy. Also, you would need to identify people who really benefit from free food and bring it to them in a cost-effective way (I figure malaria nets don’t spoil as fast as food does.)
Perhaps there could be a niche for this in disaster aid or acute famine relief.
There’s also a charity called Vegfam—http://www.vegfamcharity.org.uk/ - not sure how effective they are though.
EDIT: struck from the record. (I don’t know whether “retract” does what I want or not so I’m doing this instead.)
To delete a comment forever, first retract it, then click on the permalink, then click on the ‘delete’ button. Note: this applies to comments that haven’t been replied to; comments with replies can be retracted, but not deleted. (In this case, you wouldn’t have been able to delete the comment, since Ryan had already replied to it when you decided to strike it from the record.)
Hi Ben—thanks for the feature request. Anton Geraschenko, who founded Math Overflow, advised me that it’s important to keep object-level and meta-discussion separate so that the latter doesn’t take focus away from the former. So can you please put this discussion here: Improvements to the EA Forum.