Open Thread
Welcome to the first open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts.
I’m still fuzzy on the relationship between the EA Facebook group and the EA forum. Are we supposed to move most or all of the discussion that was going on in the FB group here? Will the FB group be shut down, and if not, what will it be used for?
I think the format of the forum will present a higher barrier to low-key discussion than the FB group; e.g. I’d guess people are much less likely to post an EA-related news article if they don’t have much to add to it. This is primarily because the forum looks like a blog. Is FB-style posting encouraged?
If this has all been described somewhere, could someone point me toward it?
Also, what’s the relationship between the EA forum and the EA hub? http://effectivealtruismhub.com/
Facebook is a terrible medium for discussion, so I hope everyone, or at least all the cool people, come over here and we have an active community. I don’t know if this will happen. I think this forum would be a good place for links with discussion and not just blog posts.
There’s a long comment on this topic by Ryan Carey on the FB group here—basically the policy is carefully curated content of a pretty generalist nature for the first month, so that the forum will be as inclusive as possible to begin with. Then after that first month it gets thrown relatively open.
I also imagine this forum as a place of links and rapid-fire discussion in addition to the longer stuff, and in a few weeks time we’ll get to see if that mode of posting becomes popular.
I concur with pappabuhry. Additionally, our minds might be running the schema that because this forum looks a bit like Less Wrong, and has an almost identical platform, maybe it makes sense to use it like Less Wrong.
However, Less Wrong is part of an online rationalist culture that is distinct from effective altruism. Additionally, Less Wrong launched with traditions of discussion that had been developing on Overcoming Bias for years already. While lots of users here originated on Less Wrong, I believe they’re the ones who are open to discussions with a different feel. So, the discussion preferences of this forum as a whole will form as individual users try new kinds of discussion at every level, and some of those establish themselves.
I believe the policy for the first couple of months of being open and inclusive with subtle moderation is a good approach for cultivating an agora.
I’ve chatted to Ryan about this, and the idea is that the forum is the place for people’s writings on and discussion of EA, whereas the projects on http://effectivealtruismhub.com/ are for other things. For instance the EA Profiles are the place for information about people—e.g. showing more about who the people writing here are, and (we plan) linking to those writings. So in that sense they should be nicely complementary.
I thought the EA Facebook group was going to play “LW Discussion” to the EA Forum’s “LW Main”. Though the open thread does blur that line.
There’s also an EA Reddit for posting articles.
I think the problem with that is the platform here is much better than FB. I think it would be better to have both “main” and “discussion” on this site.
One possible side benefit is that it can include the (few) people who don’t have FB. I know at least one person who does not have FB, wants to use the EA group but doesn’t want to get addicted to FB.
Hey Jess. Good questions. Obviously, the relationship between these is mostly decided by the community, rather than by one individual, and will emerge gradually over some number of weeks.
That said, I think it’s good for most substantive discussion to move here. The forum should also have some blog-length posts that are lighter and fun to read.
Since most people are using the same names on Facebook as here, there are some advantages to keeping it open. It’s a kind of bridge between the internet and the real world. It helps people to put faces to the names of people they’re interacting with, which should increase willingness to meet or collaborate. As for what goes there, I think it will include:
some links (e.g. Elon Musk made a bunch more dough off this NASA deal)
practical real-world stuff (e.g. “I’m going to X city; does anyone have a room to offer there?”)
specific topics (similar to the Open Threads; there should be enough minor EA discussion to go around)
I’m kicking around a rough guideline in my head. Something like “post it to the forum if it’s at least three of ‘fun to read’, ‘substantial’, ‘relevant’ and ‘reasoned’. If it’s two of those things, then an open thread or Facebook is more suitable. If it’s only one of those things, then it’s no good.”
Tom and I are thinking of ways to tie in with the Hub. I think that the Hub could use the Forum to run a survey, whereas the Forum could use the Hub’s map to identify people who might want to attend a meetup.
Feedback helps, especially on the FB/Forum border. Anyway, I’ll bundle these thoughts into my next update post.
Thanks for the info, Ryan. A couple of points:
(1) I don’t think minor posts like “Here’s an interesting article. Anyone have thoughts?” fit very well in the open thread. The open threads are kind of unruly, and it’s hard to find anything in there. In particular, it’s not clear when something new has been added.
One possibility is to create a second tier of posts which do not appear on the main page unless you specifically select it. Call it “minor posts” or “status updates” or whatever. (Didn’t LessWrong have something like this?) These would have essentially no barrier to entry and could consist of a single link. However, the threaded comment sections would be a lot more useful than FB.
This is similar to Peter_Hurford’s, MichaelDickens’s, and SoerenMind’s comments above.
(2) I’ve talked to at least a couple of other people who think EAs need a place to talk that’s more casual in the specific sense that comments aren’t saved for all eternity on the internet. (Or, at the very least, aren’t indexed by search engines.) Right now there is a significant friction associated with the fact that each time you click submit you have to make sure you’re comfortable with your name being attached to your comment forever.
It might make sense to combine (1) and (2) (or not).
I agree that the links might not fit well in an open thread. An alternative might be to bundle up a bunch of links into a “links for November” type thread like Slate Star Codex. Then, people can put more links in the comments if appropriate.
However, I lean against trying to improve discussion by subdividing discussion fora. The main/discussion distinction was one of LessWrong’s most unpopular features. In the effective altruism community, we already have a subreddit, many Facebook groups, many personal blogs, many Twitters, many Tumblrs, LessWrong, here, and many other online locations. Moreover, given limited programmer resources, we’re not currently looking for new features. Having said that, I’ll look into the feasibility of highlighting new comments, because that seems like it would be useful.
A private Facebook group is best for this. There’s no straightforward way to prevent public pages from being indexed by sites like archive.today.
Very reasonable. Thanks Ryan.
I feel like a lot of potential is lost if we don’t encourage asking questions and making smaller contributions (like on FB and the open thread) on the forum. I do understand that these kinds of posts don’t fit into the main section of the forum. But what’s the reasoning behind not having any subforums? I often think of issues I would post in subforums of this site, which I wouldn’t bring up on Facebook (because hundreds or thousands will read it) and which don’t fit into Main.
An open thread is a nice step in the right direction. It does have significant disadvantages compared to subforum(s), though, in my estimation:
No headlines for posts, so it’s not scannable
You have to see the full post rather than the headline only
It’s not that visible, and the headline “open thread” doesn’t intrigue me as much as other posts do.
Also, I feel like topic-specific subforums would generally lower the barrier for people to post something. I guess I have this intuition because the posts won’t be seen by (as many) people who are not interested in your post’s topic.
By now I’ve read Ryan’s comment on subforums (https://www.facebook.com/groups/effective.altruists/permalink/743662675690092/?comment_id=744027525653607&offset=0&total_comments=14). In my estimation the lost potential outweighs the costs, so consider this a vote for subforums (or at least main/discussion). I’m happy to be convinced otherwise though.
[We Could Encourage Headlines in Open Threads]
Like above. Jacy does it.
[Encouragement and qualification ;)]
Good suggestion! This would solve the headline issue, though the issue that you don’t have a simple list of headlines in front of you would still be unsolved.
Perhaps there’s an easy way of having the headlines of all the comments appear in the OP of the open thread? The person that opens the thread could manually add them of course, but that involves work.
I’d also prefer subforums, but split by subject matter rather than level of depth.
I see the main value of them in making it easier for people to navigate old discussions when they’re interested in specific topics (I found this a big problem with LessWrong). Since I think they’re mostly an indexing tool, I’m not sure how this would have a significant effect on critical mass—though I could be wrong on that.
We could also make better use of tags.
Yes, if there were a reasonably small set of tags used consistently, and a way to navigate by them, that would work just as well.
The subforums approach essentially forces that indexing work up-front at the time of posting. Otherwise it’s pretty similar.
This seems like a good idea. I second the point that they could be easy to navigate. Which means it may become similar to subforums, but with the advantage that there’s still a place where all posts are listed.
Does anyone know a site that has implemented this in a useful way?
EDIT: To qualify, there are lots of sites using tags, of course. I’m referring to a system where you have a ‘reasonably small set of tags’, as Owen suggests, and those are very visible. E.g. they are displayed on the main page as the ‘official tags’.
I think that would be a good feature. Current tags aren’t very useful.
Hey Soeren, I agree that retaining small contributions is an important challenge. I also agree that the open threads as they currently stand probably don’t fully meet that challenge. Since the first open thread was so popular, we could break pieces off it by 1) putting the meatier posts as articles instead of open thread comments or 2) having topic-specific open threads, e.g. “Career advice thread”, “Far future discussion thread”. I think it’s good to keep thinking about this.
Regarding subforums, I’ve written more here.
What are some examples of things that could have been popular EA causes, but weren’t, for reasons that are not completely obvious (and may have to do with historical contingency)?
One example I can think of is anti-aging. This is a cause that has a lot of traction in circles that have overlap with EA circles (rationalist, transhumanist, singularitarian, etc.). However, for whatever reason, it hasn’t been identified with EA. If you think anti-aging sounds too outlandish, it’s worth noting that with the exception of poverty reduction, the current popular EA cause categories (AI/x-risk reduction and veganism/animal activism) both seem outlandish.
Another area where EA focus hasn’t historically been great, but is gradually increasing, is changing or working around bad policy, in areas such as migration, drug policy, international trade, etc. Lots of economist-types are attracted to EA, so it’s interesting that the policy arena has been relatively neglected until recently.
Another example, though not as good, could be effective environmentalism. It’s a classic cause among altruists and looks like an x-risk.
Katja Grace (of Meteuphoric) also did some research for Giving What We Can looking into climate change charities. She wrote up her findings as a blog post.
I think the lack of EA work on bad policy comes largely from the heavy competition in the area. To an extent, improving policy is zero-sum in that there are lots of people who are actively working against any particular policy (some good policies receive minimal opposition, but there probably aren’t many of these). Whereas if you, say, donate money to AMF, few people will try to stop you. Even those who disagree with your decision won’t actually prevent you from giving the money and won’t prevent AMF from distributing the bednets.
At the 2014 Effective Altruism Summit, each of Geoff Anders, Peter Thiel, and Holden Karnofsky identified three heuristic criteria for effective altruists to use in selecting a cause area (a rough sketch of combining them follows the list):
neglected
valuable
tractable
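For readers who want a concrete picture, here is a minimal sketch of how such criteria are often combined (my illustration, not from the talks themselves; the causes, the 0-10 scores, and the multiplicative rule are all assumptions):

```python
# Toy illustration of the three-criterion heuristic. The causes and scores
# are made up. Multiplying the factors (so a zero on any one factor zeroes
# the total) is one common convention, not something the talks specified.

def cause_score(valuable: float, neglected: float, tractable: float) -> float:
    """Combine the three heuristic criteria multiplicatively (each scored 0-10)."""
    return valuable * neglected * tractable

hypothetical_causes = {
    "cause A": {"valuable": 9, "neglected": 2, "tractable": 3},
    "cause B": {"valuable": 5, "neglected": 8, "tractable": 6},
}

for name, scores in hypothetical_causes.items():
    print(name, cause_score(**scores))  # cause A -> 54, cause B -> 240
```

On this toy scoring, the crowded-but-valuable cause loses to the more neglected one, which is the intuition the heuristic is meant to capture.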
I. Why Does Effective Altruism Neglect Anti-Aging?
Anti-aging doesn’t seem very tractable on its face, but neither does existential risk reduction. Despite both being causes emphasized by rationalists and transhumanists, anti-aging has been left outside of effective altruism thus far. I believe this is because the rationalist community, as a precursor to effective altruism, coordinated its concern over existential risk better than its concern over anti-aging efforts.
Like, through Less Wrong and the Machine Intelligence Research Institute, (almost) every existential risk reduction organization got in touch with one another. This formed a solid voice advocating for this cause when effective altruism started. On the other hand, Aubrey de Grey and his organization, SENS, seem like the only one(s) in contact with effective altruism, while the rest of the major anti-aging advocates run their organizations out of touch with us, and each other.
Another thing about anti-aging is that while it on its own may seem like a worthy intervention, it often gets lumped in with cryonics and other transhumanist technologies that seem even less tractable than anti-aging research. That is, those aspects are frequently dismissed by rationalists, let alone effective altruists. So, if the most vocal advocates for anti-aging research only communicate that signal with a bunch of noise, effective altruists may be less likely to consider it.
This seems like a historical contingency to me, based on how the rationalist community organized itself with some circles but not others. This makes it possible, but by no means definite, that the rationalist community has not emphasized anti-aging enough within effective altruism, relative to existential risk reduction.
II. Why Does Effective Altruism Neglect (Better) Policy Advocacy?
This also seems to be due in part to historical contingency. First of all, there is the wariness among the rationalist community that delving into the trenches of politics will be much less tractable than aiding the world through other means. I believe I mildly perceive the same strain of thinking as an undercurrent among utilitarians such as Toby Ord or Peter Singer.
Also, GiveWell thought policy advocacy much more difficult to assess, since measuring its impact is much harder. In other words, GiveWell wanted to cut their teeth, and gain experience, in an area more measurable than policy advocacy. Before, like other charity evaluators, they were giving recommendations to individual donors. Now, with Good Ventures, they’re giving recommendations to foundations, with much more money.
In conversation with my friend Joey Savoie a few weeks ago, we discussed how GiveWell may be exploring policy advocacy through the Open Philanthropy Project now because noticeable gains in policy change can only be effected with large investments, and it’s only now, with Good Ventures, that GiveWell has an ally with sufficient weight to get that happening.
My opinion was that anti-aging and existential risk seem roughly equally neglected and roughly equally tractable, but existential risk seems a whole lot more valuable; hence the focus on that instead.
I concur. This explanation works for why precursor movements to effective altruism such as the rationalist community would have emphasized existential risk over anti-ageing research as well.
I’m not sure that this is necessarily the case among EA orgs with full-time staff. The Centre for Effective Altruism (in particular the Global Priorities Project, which is our collaboration with FHI), The Open Philanthropy Project and the Cambridge Centre on Existential Risk are putting considerable effort into policy work. For example, I and others at CEA put the majority of our time over the past week into policy research, and our trustees were at a meeting at No. 10 Downing Street yesterday. I have written up some of my thoughts on our early policy work at http://effective-altruism.com/ea/7e/good_policy_ideas_that_wont_happen_yet/
I think that there are a few effects going on here which cause policy work to appear neglected among the community at large...
There is a relatively larger barrier to entry in policy work (compared to e.g. making a donation to a GiveWell recommendation), which means that policy work is often done by people working in this area full-time, or who have past experience in the area. This may be one of the reasons why the community at large isn’t doing more policy analysis. I think it would be useful if the EA community did do more policy analysis, in particular making recommendations of policies that could feasibly happen (i.e. tweak this thing, not ban agriculture subsidies) and doing analyses of the type I outline in my post above (e.g. what are the benefits, what are the costs, who will be in favour, who will be against, how can we change the policy to make it more feasible while retaining most of the benefits, how would we actually make this change, and who do we ultimately need to convince about this to make it happen, etc.). I for one would find this useful in informing the work that I do in this area, and if the ideas are good enough they would likely be taken forwards.
Policy work is often under-publicised unless there are major breakthroughs. In doing this work we are developing ongoing relationships with people, and if we were to publicise these relationships on the internet we could damage them. For this reason we often find it difficult to talk about our policy work extensively in public.
There may also be cultural and path-dependent effects at play here, which people have mentioned above/below and elsewhere, so I won’t go into them in detail.
Effective altruist organizations with full-time staff definitely aren’t neglecting policy advocacy. I meant the broader community at large, in the sense that for the last two years it’s been focusing upon: reducing global poverty and illness; animal advocacy; reducing existential risk.
How can the rest of us help?
This “three factor model” of cause assessment has been used by 80,000 Hours for a long time (they use the terms ‘crowdedness’, ‘importance’ and ‘tractability’). Do we know where it originated?
I think it originated with GiveWell—they used something like this framework for assessing cause areas, which 80k then based their framework on. It’s possible I’m misremembering this though.
Yeah I concur that GiveWell started it.
GiveWell has used this “three factor model” as well (they also use the terms ‘crowdedness,’ ‘importance,’ and ‘tractability’). I’m not sure about the dates when either organization started using this model, and I wouldn’t be at all surprised if people started using it independently, since it’s rather intuitive.
It makes sense, but I didn’t know about the model before the Effective Altruism Summit. Having it crystallized is great, and everyone should know about it, so I want to write a post about it for this forum.
This comment is a reminder to myself to write it.
If anyone wants to help me write it, or give feedback, please send me a private message.
I’m surprised in some sense that there hasn’t been more discussion about religion (more so Eastern religions) in EA, and spreading/working with those religions as an EA cause. Also, psychology and spreading ideas for raising the hedonic treadmill of human populations.
Although I think the causes EA landed on are pretty independent of the historic details, since many independent groups came up with the same causes and they fit objective cause-finding heuristics (e.g. targeting large, marginalized populations).
Anti-aging has come up several times over the years, and it’s never seemed promising enough to warrant further consideration (Edit: on the scale of the other big EA causes). I’m not really sure where the altruism lies here, since most people don’t see death as an inherently bad phenomenon. (Edit: I now see that anti-aging research is also focused on increasing quality-of-life rather than just increasing lifespan, so I see the altruism!)
Policy discussion has been around for a while. I think it’s more so that we’re now reaching the size where we can more easily affect it, which is a good reason to start discussing it more.
Why Have Effective Altruists Neglected Interfacing With Religion?
What’s great about religion is that it’s full of altruism, but it’s unknown how religion considers effectiveness. I by no means mean that religions are inherently opposed to effectiveness, or even indifferent to it. Religion has played a fundamental role in institutionalizing the very idea of charity. Religiously oriented charities individually run the gamut from effective to ineffective, and I believe that’s due more to the individual organization than to the religion in question itself. Indeed, it’s the history of trying new charitable interventions inspired by religion that has given humanity a starting place to think about the effectiveness of altruism in the first place.
Historically, religion has implemented altruism without always thinking in terms of effectiveness, because the body of strategies composing effectiveness as an idea was abstract and unknown. Without that frame of mind in common, effective altruism seems unsure how to interface with religion.
Another issue is that religions tend to be deontological in nature, and while effective altruism accepts deontologists, it wasn’t designed with deontology as the primary framework in mind. So, bridging the epistemic gap between religion and effective altruism may thus be more difficult.
Anyway, I think we could make open calls to effective altruists who:
are religious
were once religious, and feel effective altruism is congruent with their religiously motivated altruism
have received positive feedback from religious folks.
so effective altruism knows better how to reach out to religion.
I am a committed Christian also committed to the principles of effective altruism. I am very frustrated with the level of apathy in the church, given that we are all called to tithe 10% of our income; like the rest of the population, Christians have really lost sight of how rich they are now. I am also frustrated by the focus on differences between religions, and between the religious and the non-religious, when common values of love and concern for our planet, given how utterly amazing it is that we are here, should prevail. Altruism is at the heart of Christianity, and of course it should be effective. I would be happy to work with other EAs in developing an outreach/link strategy into churches.
My wife is head of fundraising for a charity that is like a mini version of Christian Aid—donating to poverty-alleviating projects in a Christian context. Making this more effective would be a good place to start.
1 billion Christians should be able to make a real dent in the problems of the world if they focussed less on the coffee rota and more on what our faith actually calls us to do.
That’s great, David, and you’re the sort of person I mentioned above. Extending love, compassion, and understanding is a cornerstone of all altruism. I don’t have anything to add now, but I’ll contact you in the future if I broach this topic again.
You might be interested in this chapter on global poverty, utilitarianism, Christian ethics and Peter Singer that I wrote for a Cambridge University Press volume.
http://www.amirrorclear.net/academic/papers/global-poverty.pdf
I would love to see some action in this space. I think there is a natural harmony between what is best in Christianity—especially regarding helping the global poor—and effective altruism.
One person to consider speaking with is Charlie Camosy, who has worked with Peter Singer in the past (see info here). A couple other people to consider talking with would be Catriona Mackay and Alex Foster.
David, what sort of material do you think could be persuasive to the higher ecclesiastical orders, so that their charity becomes more focused on GiveWell-recommended charities and similar sorts of evidence-based, calculation-based giving?
How can we get priests to talk about the child in the pond to the faithful, in a scalable and tractable manner?
As a result of your faith, are you only interested in working on global poverty, and not x-risk or speciesism?
(It’s great to have you and people like you around; I don’t mean to sound judgemental.)
Religion also often encourages (or is used to defend) speciesism, and it also leads many people to not believe in x-risk. As such, religious EAs are mostly only relevant to 2/3 of the major cause areas of EA. Given that I think global poverty is by far the least important of these cause areas, convincing religious people to care about EA doesn’t seem to have very high value to me.
X-risk and animal welfare are still pretty marginalized across the entire population, not just among the religious—and Christians have a very convenient existing infrastructure for collecting money. It might be that there are other reasons not to worry too much about them (e.g. an unmovable hierarchy that controls where the money goes), but their lack of concern for some (or even most) EA target causes doesn’t seem like it should bear much weight.
I think you overlook the strongest argument for spreading religion, namely that by converting people to the One True Faith, we could save them from eternal damnation. As eternity is a long time and damnation is very bad, this would yield extremely high QALYs.
Most EAs think all religions are false, so do not subscribe to this argument. However, I do not think religious EAs can avoid this so easily. If you are a Christian EA you should probably try much harder to convert people to Christianity.
My above comment gave reasons why effective altruists have found difficulty in reaching out to religion, even though it’s important, because much altruism in the world is religiously motivated. However, I understand your point. If someone’s greatest priority is getting others into Heaven by converting them to their own religion, that may be a confounding factor for getting them to do other things.
However, this isn’t the case for all religious people.
How much a religious adherent is supposed to proselytize varies among sects within religions.
From what I know of major world religions, such as Islam, and Christianity, charity is emphasized as an important virtue to act upon independently of, and in addition to, converting others. The moral imperative for charity in religion tends to extend beyond helping only the less fortunate of one’s own religion.
Humans tend to signal their association with an ideology by committing to the goals set out for its adherents. When the goal seems far away, it’s easy for people to promise to achieve it. When the goal is very nearby, its difficulty becomes more apparent, and more people will shirk it. This is called construal-level theory. If you accept that model, I believe it extends to religious conversion. (Some) religious leaders will call upon their followers to convert the unbelieving, yet every day those same followers fail to confront their neighbors, friends, families, and colleagues who believe differently. As the world becomes more globally interdependent, lots of people will realize the value in helping and cooperating with groups of outsiders, and their less fortunate.
Religion blends with other cultural forces in people’s lives to produce a diverse array of practice, and that still leaves room for effective altruism among millions of religious folks.
Yes, I basically agree. But I think you have slightly misunderstood my argument. Many religions say both
1) You should convert people 2) You should help people
Obviously not all religions say these (for example, Judaism is not very evangelical). My argument isn’t that religious people should proselytize because of 2). My argument is that, given religious people’s other beliefs about heaven and hell, they should proselytize because that is the most effective way of helping people. Even if their religion included no evangelical commandments, they should try to convert people as the most effective way of loving their neighbors. A secular EA charity might try to persuade people in less economically developed countries to purify their water; a religious EA charity might try to persuade people in the developing world to say their Hail Marys.
What does that mean? Anti-aging medication or technology (if it worked) would generate returns not just to the people who engage in the research, but to many others. In that respect, I don’t see how it’s qualitatively different from trying to prevent or cure malaria, AIDS, or NTDs. (Quantitative differences arise mostly from cost-effectiveness and feasibility).
I’m not seeing what those returns are other than (i) preventing death, and (ii) improving quality-of-life (which also happens in other medical research). It seems like the value-added is (i), which is why I made my original statement.
So, in a way, curing malaria is anti-aging research. If you extend anti-aging research to any research to prevent illnesses with high mortality, then sure. But why give it a new name then? Is it just a broader term than “disease prevention”?
Anti-aging focuses specifically on extending the “healthspan” of people (starting in the developed world, presumably) past the current point where age-related degenerative diseases start to eat into your QALYs.
It’s different from disease prevention because it operates at a higher level than individual diseases, hoping to solve the underlying reasons why age is so strongly contributory to those diseases.
Anti-aging also tends to have absurdly high room for more funding (RFMF) compared to most disease research, since it’s a “weird” idea that most people don’t like.
It also seems high impact: it would serve as a multiplier for any earlier-in-life health improvements as well as allowing some high-skill people to keep contributing to society (e.g., researchers who have accumulated lots of valuable knowledge and experience).
General anti-aging research is arguably more effective than trying to cure any single disease, because once your body begins to decline due to advanced age, that makes you more susceptible to basically all diseases and injuries. Successful anti-aging treatments would thus act as a general, massive health boost to everyone past a certain age.
When I speak of effectiveness, I generally mean, “For X amount of effort, how much good would this path lead to?” I agree that successfully giving a general, massive health boost would probably be better than curing a single disease, but I worry that the broader approach has less impact-per-effort.
I discussed estimating the cost-effectiveness of anti-ageing research at the very end of my talk at the Good Done Right conference. There’s actually an error in my formula there, so it should be 20 times less effective than I said. I don’t think we should have deep confidence in the answer coming out, but I do think it’s suggestive enough that it may be worth more careful estimation.
Given its history, I’m not surprised that the EA movement currently comprises primarily non-religious people. But I am surprised that no one has tried talking to/at churches, which could be very useful if it worked. I would guess that some denominations would be more open to it than others.
Interesting:
And a summary of an article behind a paywall that
I’m not surprised by that at all. Most EAs are non-religious or at least weakly religious, and spreading religion is a non-intuitive idea for non-religious people. Even for religious people, it doesn’t make much sense to spread other religions.
“Spreading ideas for raising the hedonic treadmill of human populations” seems like part of the Hedonistic Imperative, as espoused by David Pearce.
If so, it seems to me like it’s an idea leaning towards transhumanism that doesn’t seem tractable in the present, with the requisite technology to achieve such goals coming at some unknown future point (at best), and with lots of people unable to grasp where we would start researching or advocating for such an abstract cause.
I personally have sympathy for its sentiment, but this is just my hypothesis for the conclusion skeptical effective altruists reach.
It seems obvious there hasn’t been (much) discussion of Eastern religions because effective altruism is primarily represented in English-speaking and Western(ized) countries, which are traditionally Abrahamic in religion. I know that Benjamin Todd of 80,000 Hours has personally read much about Taoism and Chinese culture, so asking him for perspective may prove worthwhile.
A really superb book I have never seen highlighted by EAs is Human Purpose and Transhuman Potential: A Cosmic Vision for Our Future Evolution by Ted Chu, PhD. He has been the chief economist of GM, amongst other things, so he is very smart and well grounded, and it’s an amazing intellectual feat. I raise it as it covers religious wisdom across all faiths within the book—I found it hugely thought-provoking.
Does believing in or identifying as an EA involve a fair amount of hubris and arrogance? To be an EA, and make EA-based decisions, you have to essentially believe that you have some insight into the best way to use resources to make the world a better place. The types of questions EA demands answers to are extremely difficult. When EAs think they have all, or even some, of the answers on how to go about EA, how much arrogance does that reflect? Would something like EA attract overly cocky people?
Personally, I believe I know a whole community which collectively knows how to use resources (much) better than average, and at a larger/leveraged scale, if not for the whole world. I say this because I’m concerned about (apparent) hubris and arrogance within effective altruism, yet I identify as an effective altruist regardless.
Most people believe they know better than others, even if they don’t claim to know better than everyone else.
I believe what may be overlooked in the original question is the base rate of people who believe they’re doing the best work. That is, humans have at least somewhat of a natural tendency towards privileging their own ideas for doing good as the best. Since effectiveness isn’t emphasized in most altruism, I believe most people don’t think their own approaches are ‘literally the best’, so much as naively thinking they’re just ‘very very good’.
If one individual is the only one to express their disagreement with some form of charity, especially if it’s represented by a big movement, they’re likely to be lambasted ad hominem. So, there’s a disincentive for publicly criticizing charity, even for correct opinions widely held in secret.
Promoting doing ‘the most good’ requires courage but has drawbacks.
In a civil and media environment where criticizing charitable endeavors is controversial, claiming to (try to) do the best may be taboo. I believe effective altruists have thought about what to do more than most others, and I don’t believe we’re missing out on some hidden heuristic for philanthropy that works really well for everyone else. People identify with the good they do, so they may get defensive. When pressed, we admit we don’t claim absolute confidence in our evaluations, that we seek to change our own minds, and that we’re merely doing the best we can.
Putting up the stronger front of advocacy that is trying to literally be the best comes at the cost of (apparently) having hubris. I believe this presents an image problem which will need to constantly be mitigated, and also a real problem we must constantly protect against.
Effective altruism seems to have good defense mechanisms against attracting arrogance.
I believe some effective altruists will be afflicted with over-confidence and hubris. However, continually normalizing critical thinking, openness to (self-)criticism, and the proper use of humility mitigates this. In particular, being a self-critical movement includes effective altruists criticizing ideas of other effective altruists as they’re newly presented. I’ve observed that even on Facebook, effective altruists of all stripes are quick to neutralize cocky people, who tend to bring arguments that aren’t as well thought out.
There seem to be two questions here:
(1) Does believing in or identifying as EA require having a certain amount of hubris and arrogance?
(2) Is EA more likely to attract arrogant people than more modest people?
I think the answer to (1) is clearly no—you can believe that you should try to work out what the best way to use resources is, without thinking you are necessarily better than other people at doing it—it’s just that other people aren’t thinking about it. My impression is a lot of EAs are like this—they don’t think they’re in a better position to figure out the most effective ways of doing good than others, but given that most other people aren’t thinking about this, they may as well try.
I’m less sure about (2), and it depends what the comparison is—are we asking, “Is the average person who is attracted to EA more likely to be arrogant than the average person who is interested in altruism in a broader sense?”. It seems plausible that of all the people who are interested in altruism, those who are more arrogant are more likely to be drawn to effective altruism than other forms of altruism. But I’m not sure that EAs are on the whole more arrogant than people who promote other altruistic cause areas—in a way, EAs seem less arrogant to me because they are more willing to accept that they might be wrong, and less dogmatic in asserting that their specific cause is the most important one.
There’s a third question which I think is also important: is EA more likely to be perceived as arrogant from the outside than other similar social movements or specific causes? I think here there is a risk—stating that you are trying to figure out the best thing can certainly sound arrogant to someone else (even though, as I said above, it actually seems less arrogant to me than being dogmatic about a specific cause!) So maybe it’s important for us to think about how to present EA in ways that don’t come across as arrogant. One idea would be to talk more about ourselves as “aspiring” effective altruists rather than as simply effective altruists—we’re not trying to claim that we’re better at altruism than everyone else really, but rather that we are trying to figure out what the best way is.
I don’t think you have to know the best way to use resources to make the world a better place. All you have to do is want to know it, try to figure it out, and act on your best guess.
Make an extra effort to upvote people who make good contributions. There are a lot of people who are below the karma threshold for posting (including myself).
Upvoted for visibility
Legibility is tricky. I want to be able to easily explain my giving, so that when people ask for details on what I mean by “I give half” we don’t get into complex arguments about what counts. For example, if my work has a donation matching program, does that count? What if I do work for someone and ask them to donate instead of paying me? What about money my company puts into my 401k? Luckily the US government already has figured out a set of rules for this, so I can use them. When people want details on how I account for things, I can say “income” is “income on form 1040” and “donations” is “gifts to charity on form 1040 Schedule A”.
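To illustrate how clean this rule makes the arithmetic, here is a minimal sketch (the dollar amounts are made up for illustration; the point is that both inputs come straight off the tax forms, so no judgment calls remain):

```python
# "Income" = income on Form 1040; "donations" = gifts to charity on
# Form 1040 Schedule A. Both figures below are purely illustrative.
income_form_1040 = 100_000
gifts_to_charity_schedule_a = 50_000

giving_fraction = gifts_to_charity_schedule_a / income_form_1040
print(f"I give {giving_fraction:.0%}")  # prints "I give 50%"
```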
That’s really useful, thanks Jeff. I feel like I get into these kinds of discussions a lot (when discussing GWWC). I often find it frustrating, because it just doesn’t seem like what we should be focusing on. But at the same time I can see it being important for people to really know that the pledge has real substantive content, and that people stick to it.
Yes, being able to say “I’m part of a group who’s pledged to give 10% of their money to global poverty charities” is really clear, which makes it more approachable to say—partly because you don’t have to get into a complex explanation. 10% is a clear total similar to tithing, and the concept of and case for giving to global poverty charities is pretty well known.
I’d like to see a discussion on thick versus thin EA, similar to the discussions online of thick versus thin libertarianism, such as http://radgeek.com/gt/2008/10/03/libertarianism_through/
Basically, thick EA would involve a wide-ranging set of commitments or organizing one’s life around EA ideas, whereas thin EA might mean just accepting the principle that it makes sense to do the most good.
Though there might be competing ways to slice EA into thick and thin.
I was going to (at the very least) start a discussion on a similar dichotomy. At the 2014 Effective Altruism Summit, heads of different effective altruist organizations differed on how they believe effective altruism can best grow as a movement. The following is what I perceived:
William MacAskill of the Centre for Effective Altruism believes effective altruism should grow bigger, with many more people getting involved, perhaps to the point at which effective altruism hits critical mass as a social movement.
Anna Salamon of the Center for Applied Rationality believes getting individual effective altruists to be more effective should dominate as an approach over trying to grow to a point of critical mass, in the present. It seems Ms. Salamon is concerned that if effective altruism ‘goes viral’, it may be diluted to the point at which the signal:noise ratio becomes too low for effective coordination to take place. It seems Ms. Salamon would prefer effective altruism be ‘thick’ rather than ‘thin’, although pertaining to lifestyle strategies more than principled commitments.
Geoff Anders of Leverage Research seemed to prefer an approach between those of MacAskill and Salamon, though I found his position somewhat inscrutable.
Personally, I agree with Anna Salamon the most, as it seems appropriately conservative, though I’ll be excited to see MacAskill, or Anders, proven correct. The plans presented at the effective altruism summit were about what effective altruism might do in the near-term, i.e., next year or two. In the long-term, I believe “slicing effective altruism into competing ways of thick and thin” will be the best approach.
I see no reason why EA cannot outreach broadly and stay true to its principles. I have heard GiveWell described, after a two-minute explanation of what it does, as Go Compare for charities (a price comparison site). Most people will be no more interested in the details of how GiveWell comes up with the charities it does than in how Go Compare works. To encourage greater giving, a broad-based genuine community needs to be created that makes greater giving normal. That there is an academic core is great, but that won’t be for many people. So I think the core can remain thick, whilst the outreach can be broad.
There are a couple of reasons for the concern that effective altruism may grow too big too fast:
If effective altruism ‘goes viral’, some of us worry that it will become decentralized too quickly for the principles to adjust, and then an ‘effective altruist’ would just be anybody ‘who donates $10 to Oxfam’. That would counteract effective altruism’s initial mission.
This is just my speculation, but I believe this concern comes from effective altruists who believe the best cause for bettering the world is an unconventional one. If effective altruism grows too fast for anyone to stay on top of, unconventional but important causes may be discarded. If effective altruism were a grassroots movement supported by millions, we couldn’t be sure that a cause like ensuring superhuman machine intelligence will safeguard humanity, or ending factory farming, would be (sufficiently) supported.
Note that these aren’t my own opinions, and I’m just reporting my impression from the Effective Altruism Summit.
Vipul, I believe this discussion will happen at a wider, better scale if it’s given a post in its own right. If you (help) get something like that up on the site, I’ll share the link, provide feedback, and help the conversation get started.
The EA community seems to have a relatively weak internal support system, relative to other communities of mine (of similar size). I’ve confirmed this with other EAs. This is in terms of mentorship, providing opportunities for engagement, etc. For example, I think more people participating on this forum strengthens our internal support system! :)
Why is this? And what can we do to improve the situation (if anything)?
What are some of these other communities of similar size that you have in mind?
Animal rights, service organizations (e.g. the Rotary Club; not of the same size, just an example of a service organization), churches, issue-focused groups (e.g. anti-abortion, Palestine solidarity groups), Marxism, Objectivism, women’s rights groups, etc. A lot of other social justice groups. You could also even include hobby groups (e.g. a local soccer club).
Note that there are obvious differences with any of these groups (e.g. churches are local, EA is global), but there are meaningful similarities (e.g. a local EA chapter is similar to a local Marxism chapter).
I think EA could gain a lot from strengthening these support systems, both in the global and local sense.
What do you mean by “internal support system”? I’ve personally found the EA community to have way more internal support system than other communities I’ve been involved in (atheism, rationalism, animal rights).
More meaningful interaction that increases social identity (e.g. Imagine you attend a local meeting of a Marxist group. Immediately someone greets you and asks about your background and what reservations you had about coming.) This has happened to me, and definitely strengthened my identity with that group.
For which other groups? It might be a matter of different people’s experiences. Or maybe EAs are a bit less savvy at social interaction than other groups. ;) The movement has been accused of being unwelcoming before.
I think that to outreach more broadly, support mechanisms become crucial. For much of the founding community, altruism has been a matter of philosophical choice. For others coming in from a broader background, being part of a community that values altruism and happiness will help the transition to more generous giving. Normalising generous effective giving will require support. Churches, as an example, are all about support, but it has made them very introverted. The balance is to make the support and community conditional on the giving and the outward-looking focus.
Agreed.
[Discount Rates]
I’d like to hear more EA discussion on discount rates. Much of policy analysis involves unilaterally discounting future benefits. For example, an economist might say “Let’s value eating one apple today the same as eating four apples ten years from now.” The professionals I’ve spoken with who do this sort of analysis say that discounting is justified because it’s a natural part of human decision-making. Psychologically, it’s pretty clear that most people make decisions giving much greater weight to instant or near-term gratification.
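To make the apple example concrete (my arithmetic, not the economist’s): valuing one apple today the same as four apples in ten years implies an annual discount rate r satisfying

(1 + r)^10 = 4, so r = 4^(1/10) − 1 ≈ 14.9% per year

(or, continuously compounded, ln(4)/10 ≈ 13.9% per year).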
However, I’d be more likely to put this under a ‘cognitive bias’ than a ‘terminal value.’ I think most people, upon deep reflection, would realize that eating an apple is eating an apple just the same no matter when it happens.
Removing the (utility) discounting in policy analysis seems like it could do a lot of good for future people, who matter just the same as we do. Under modern methods, we choose small amounts of short-term good in exchange for really bad outcomes decades or centuries from now.
Does anyone disagree? If not, how tractable is this goal?
Edit: Here’s a good piece on the topic: http://www.givingwhatwecan.org/blog/2013-04-04/was-tutankhamun-a-billion-times-more-important-than-you
There’s quite a bit of internal discussion on this at CEA.
There are several reasons for discounting. Some of them are quite correctly applied in social policy contexts, whereas some are not applicable (such as the case you highlight, which is often called ‘pure rate of time preference’). They are also sometimes misapplied.
I do think that helping to make sure that discounting is done correctly according to context is an important goal, and this is something that the Global Priorities Project may push for. But trying to remove discounting altogether in analysis may harm future people rather than help them.
This paper by Dasgupta has some good discussion of the different purposes of discounting (but I wouldn’t take too much from its discussion of eta).
In addition to upvoting, I want to mention that this strikes me as something very worthwhile for the Global Priorities Project to try.
Obviously, if I’m going to die unless I eat that apple in the next ten minutes, the apple has extremely high value now and zero after 10 minutes.
Extending that idea, you are integrating across all the probabilities that the apple will become useless or reduce in value between now and when you’re going to get it.
Why is it exponential? Maybe stretching a bit, but I would guess that the apple changes in value according to a Poisson process where the dominating force is “the apple becomes useless to you”.
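To spell out that guess (my formalization, not in the comment above): if the events that would make the apple useless arrive as a Poisson process with rate λ, then

P(apple still useful at time t) = e^(−λt),

so the apple’s expected value is V(t) = V₀ · e^(−λt), which is exactly exponential discounting at continuous rate λ (a per-period discount factor of e^(−λ)).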
Right, but this is a special case and not an argument for a general discount rate.
Yes, heuristic-based discount rates like this (e.g. due to general uncertainty) are helpful and should be applied when necessary. But that’s different from a pure utility discount rate (i.e. treating future identical events as simply less valuable).
Sure :)
Many assets have compounding value (e.g., interest) that comes from owning things earlier. But I don’t think human life is one of those things.
There are instrumental effects of saving a life earlier or later in time. It’s not clear to me which should be better (and this may change over time), but it seems quite plausible that there should be a small (I’d guess well under 1% p.a.) positive or negative discount rate on this.
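For a sense of scale (my arithmetic, using a rate consistent with the “well under 1% p.a.” guess): even 0.5% p.a. compounds to 1.005^100 ≈ 1.65 over a century, so a life saved in 100 years would count for about 1/1.65 ≈ 0.61 of a life saved now if the rate is positive, or about 1.65 times as much if it is negative.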
It’s worth pointing out that lives saved now are in a better position to save more lives (cf. flow-through effects).
Right. On the other hand people later in time generally have higher productivity, so perhaps they’d be able to achieve more. This could be a bigger or smaller effect (although if forced to guess I’d marginally prefer the life now to one later).
We can also consider how lives saved now will save more future lives, leading to more achievement with even higher productivity. It seems like it might be turtles all the way down. Figuring out flow-through effects of present lives saved versus the discount rate value of future lives seems difficult.
The turtles all the way down problem is something which crops up in many cases looking at growth.
The basic way to deal with it is usually to sidestep it: so rather than cash out in some terminal units (like number of lives saved through history), convert everything into a unit we can get a grip on (e.g. as good as saving how many lives in 2014). Of course that can be dependent on hard-to-estimate figures, but they’re at least empirical figures which relate to near-term consequences.
I’m not sure exactly what you guys mean by turtles all the way down but I have some relevant links. Do you mean that growth continues indefinitely? Nick Beckstead has argued that it should not. A related concept is the question of whether we should be always favouring saving future lives rather than consuming resources. Seth Baum has argued that we should not.
‘Turtles all the way down’ is a silly metaphor from philosophy and cosmology representing a difficult premise, similar to Schrödinger’s Cat. It refers to the problem of the ‘prime mover’, or ‘first cause’, in the universe: e.g., who created God? What happened before the Big Bang? The idea is that it’s so absurd to try to figure out the absolute origin of everything that the world might as well rest on the back of a turtle, which itself sits upon an infinite pile of turtles below it.
The analogy isn’t perfect, I admit. What I meant is this:
There’s a trade-off between saving lives in the present due to the flow-through effects they’ll have in terms of saving lives in the future, and just saving a greater number of lives in the far future.
Time is so continuous, and the world so full of variables, that I can’t think of how to solve the problem of how much we should neglect saving lives at one point in the present or near future for the purpose of saving lives further ahead.
Finding the perfect slice(s) of time to focus upon seems like trying to get to the bottom of an infinite stack of turtles to me.
Yes, I’m familiar with ‘turtles all the way down’ in general. For this question of finding the ideal time-slice to focus on, it’s the second link (Seth Baum’s post) that is relevant. He addresses the issue of an indefinitely postponed splurge—the idea that you might always have to wait before consuming goods by countering that we are consuming all of the time, just by staying alive.
That’s a philosophical counter but I could also just give a more practical one—there are plenty of other people who will fuel consumption. If you think that the future is neglected, then you don’t need to have an exact plan for when consumption should occur in order to invest in it.
Thanks, noted. That makes sense.
Fair question. I was meaning something like growth continues indefinitely.
If I wanted a careful statement I’d say it wasn’t turtles all the way down (as Nick Beckstead argues), but that it’s turtles down as far as we can see. For many practical purposes these are indistinguishable in terms of raising problems we need new methods for thinking about—though it does kill some arguments which try to use the tail of the “all the way down” assumption.
In a similar vein, infinity can often be a good working approximation for very large finite numbers—but if you treat that literally and start trying to play Hotel Infinity tricks, you get in trouble.
[Replaceability in Social Change]
When we talk about social change like improving policy or promoting an idea like effective altruism, how do we figure out the counterfactual to measure our impact? Say I’m a civil rights activist in the 1950s and I really want to give a speech titled “I Have a Dream.” How would I determine if someone else would do something similar (like MLK actually did in 1963)?
In other words, what social change is inevitable and what is more malleable? In posting this comment, did I just make the idea of “Replaceability in Social Change” come into the EA idea-sphere sooner than it would have otherwise... or is the counterfactual that we wouldn’t even consider this idea? What general frameworks can we use for answering this question, and how do we avoid hindsight bias?
To answer your object-level question, I believe that something resembling the idea of “Replaceability in Social Change” would enter our minds eventually. For the last few months, there has been conversation in the Facebook group about the importance of, and how to assess, the history of social movements. Additionally, groups like the Open Philanthropy Project are pushing forward the idea of assessing political change. From our point of view, these various changes affect the probability that a certain idea will come to us at any given point in time. As we discuss other ideas factoring into this one, over time, it seems like we would eventually come across it.
Anyway, you’re trying to figure out what important social change we would want that would not happen, or fail, if we didn’t intervene. I agree with Robby Bensinger that all of history added up together is very fragile, with small changes to the initial state of any point possibly causing huge changes later. So, again, I believe this is difficult, and that we need to get more specific.
Thanks for asking these tough questions. I appreciate it.
I’ve given this some more thought, and I think I’ve at least partially explained why my model of social change involves less fragility than others’. I think of modern human society similarly to how I think about evolutionary human society (i.e. when we still faced obvious natural selection pressures) and similarly to how I think about evolution as a whole. In biology, it’s in some way true that “All evolution is random,” in that mutations in genetic code are arbitrary. I think of people in this same way. Yes, I agree that what MLK did in particular probably had causation in random things like his birthday or a specific event in his childhood or other randomness, but on the macro-scale, this randomness evens out in some sense and all possible micro-worlds converge, like they (mostly) do in evolution.
To specify what you mean, I believe the important idea is a question: how can we use replaceability to measure the impact (or expected value) of social change?
I believe traction can be made towards solving this problem, but I still believe it will be difficult. I notice that you use the historical example of Martin Luther King, Jr.‘s activism, but you want to measure social impact in the present, or for the future. However, I notice that I can’t recall reading about anyone, effective altruists or otherwise, ever measuring the impact of social change in the past. Like, we take Martin Luther King, Jr.’s success for granted, even though we haven’t tried quantifying it at all. We could then try comparing King’s success to other activist strategies: those which worked well, and others less so.
In measuring the impact of historical efforts, we are measuring in a field where we have access to all the measurable data. So, that makes studying history good practice grounds for figuring out what to look for in estimating social impact in the future.
Here’s some suggestions for doing that:
One can learn how to measure anything, and practice for free, which should help in measuring something as abstract as the impact of social change. I believe trying to measure other intangibles as practice a bit will give one experience to gauge how to measure the impact of social change. From there, realizing getting some kind, any kind of metric(s) placed on what you’re trying to measure in social change is better than having no metric(s).
One can also study Givewell’s History of Philanthropy Project to get a sense of how they determined which historical points were worth considering. Then, you can try replicating it a bit in studying the history of social change. Note that Givewell’s project focuses upon the history of philanthropy in the United States, which might be a narrower and simpler section to assess than the history of social change.
In regards to determining where to start with at a point in history, asking some historians, and the community, what heuristics they would use could help. Givewell itself hired some actual historians.
The guide How To Measure Anything may give some ideas, but I believe the metric you’ll be developing for social impact will be more similar to metrics used in global studies or reports from the WHO, or other NGOs, or in charity evaluation. Perhaps review the field of sociology for any ideas it has for measuring social change.
This is a framework for figuring out how to measure the impact that we must create for ourselves. Frankly, the idea of replaceability, let measurement, in social change might be rare enough that we might be the first ones to put that method together.
Most EA giving advice is directed at people in the developed world, where purchasing power parity differences make their money go farther overseas than it would at home. For a person who’s equally wealthy in PPP terms but lives in a country where prices are lower (such as India), so that the person doesn’t have that much money when viewed at the international exchange rate, how does the calculus of giving change?
Quick answer: I think it changes a bit, but not too much. PPP adjustments aren’t an enormous factor compared to the wealth disparity between rich and poor (which is what drives a lot of the conclusions).
As part of the pros/cons between “give now” and “invest now, give later”, has there been any investigation into how much good is accomplished by investing itself? It seems like that is a (small) part of economic growth and innovation, so I’m curious if there’s much reason to think that has a big enough impact to include in the invest-then-give decision.
I guess if you invest, you slightly increase the availability of funds for businesses, which slightly grows the economy. And presumably, this slightly increases wealth in your country, in a way that persists over time with usual growth rates. Most investments are probably pretty replaceable though.
By this, do you just mean that the selection of one investment over another usually doesn’t matter? Or are you saying overall investment is replaceable in some way, i.e. my increased investment can lead to someone else investing less?
What I had in mind was the latter. My intuitions clearly point that way in a smaller pool of investment. Take venture capital. If you add $1m of funding, then the low hanging fruit might already be picked, and so you go broke. Alternatively, the other companies will run out of promising startups to fund, and will pull some of their money. Probably, the same thing would happen on a larger scale with the entire US economy.
On the other hand, if the startup is one that wasn’t able to get funding elsewhere, then your contribution is truly irreplaceable.
I’m no economist though.
Have there been movements broadly like EA before? What happened to them? More generally, why have these ideas become so popular now as opposed to a few decades ago?
I believe the advent of the Internet has allowed scholars into abstract ideas like effective altruism (and associated communities, and cause areas) to coordinate their advocacy, and research, better than ever before. As they find one another, they build a unified and solid front which draws more attention to itself than any lone researcher or advocate could. From there, and for the last few years, the attention snowballs to form a concerned community. Larger, more frequent discussions with more publicity, and better information, accelerate the generation of new ideas.
This recursively happened until a movement formed. Again, this is enabled by the Internet, especially in the last decade, what with Wikipedia, and better search engines, which allow individuals with unconventional ideas to discover others who share similar ideas like never before.
I’d like to see more work done on “warm fuzzies,” e.g.: How can our charitable organizations be competitive with non-EA charities in producing positive feelings in donors? How can our message be framed such that they don’t lead to feelings of guilt or a sense of being overwhelmed by the scope of the problems we’re trying to tackle?
Cause areas such as global poverty, public health, and animal advocacy already have sick children, and cute baby animals, for charities to tell donors they’re heroically saving. That gets lots of warm fuzzies. Lots of other charities engage their donors, especially larger and more regular donors, and the general community they’re in touch with. This includes fostering a sense of community, expressing gratitude, and evoking positive imagery for them being involved. Effective altruist organizations that I’m aware do this are the Machine Intelligence Research Institute, the Center for Applied Rationality, Giving What We Can, and The Life You Can Save. One effective altruist organization I wish would do more and better community engagement is 80,000 Hours.
For more abstract cause areas, like helping huge populations that exist in the far future, Brienne Strohl of the CFAR and Leverage Research wrote a guide for Explaining Effective Altruism to System 1 in each of our own brains. Figuring out how to induce an affect like this in donors across multiple mediums of expression could be useful. I don’t know how to do that, so maybe try contacting Ms. Strohl about it.
I’m looking for more concrete suggestions that orgs could (and would hopefully be willing to) A/B test. Most of the charities EAs are encouraged to support do help sick/suffering children/animals, but I don’t think they’re taking advantage of it in the same ways the mainstream orgs are (nor are the meta-charities/evaluators that are pitching them).
About five years ago, a family member donated to SmileTrain on my behalf. I received a sheet of before-and-after photos of a child who’d had the cleft palate surgery with his first name and the date of his surgery written underneath. I had an extremely positive emotional response to this and ended up pinning it to my fridge, where roommates and house guests saw it on a daily basis. I still have more visceral happy-feels for SmileTrain than for most of the charities I support now.
I’d love to see EA groups running experiments on that sort of thing.
I understand what’s going on:
Evoking warm fuzzies from others rather than feelings of guilt or being overwhelmed probably works better.
Effective altruism doesn’t work best by mechanically telling others ‘look how effective this altruism is!’. Promoting effectiveness among donors is important as well, and doing so with positive reinforcement could be more effective than what we’re currently doing.
Effective altruists should run experiments with this to figure out what works best.
Mason, yours is a worthy concern, and it’s not enough to have it buried in a comment thread. The problem isn’t getting solved, so let’s make an open call for effective altruists to experiment. I’ll write a post about this. If anyone wants to get involved, or provide feedback, send me a private message.
That Effective Altruists, implicitly if not explicitly, nearly always assume a single moral epistemology: some version of utilitarianism. It is only one of very many plausible registers of human value, whose prominence in the Anglophone academy has long waned post-Rawls (nevermind on the continent). I find the fact that this is a silent unanimity, tacit but never raised to the level of explicit discussion, doubly problematic.
I say this as someone who completely rejects utilitarianism, but recognises the obvious and ecumenical value in gauging high-utility giving opportunities and donating accordingly, i.e. as an analytic proxy for interpersonal comparisons, which can guide my (non-utilitarian) want to maximally remedy unnecessary human indigence.
A more accurate characterization, I think, is to say that many or most EAs are consequentialists; utilitarianism is a more specific position that only a subset of consequentialists (and EAs) endorse.
Note that about one quarter of respondents in the recent PhilPapers survey accept or lean towards consequentialism; the remaining three quarters are roughly equally divided between those who accept or lean towards either deontology or virtue ethics, and those who endorse some other moral position. So I think you are exaggerating a bit the tension between the moral views of EAs and those of professional philosophers.
Finally, consequentialism has a feature that makes it unique among rival plausible moral views, namely, that all such views agree that good outcomes are at least part of what matters morally. Consequentialists take the further step of claiming that good outcomes are the only thing that matters. (By contrast, there is no component in deontology of virtue ethics that is shared by all other rival views, other than the consequentialist component.) It follows from this feature that research on what consequentialism implies has, in principle, relevance for all other theories, since such theories could be understood as issuing requirements that coincide with those of consequentialism except when they come into conflict with other requirements that they may issue (e.g., for some forms of deontology, you should maximize good unless this violates people’s rights, so when no rights are violated these theories imply that you should act as a consequentialist).
In modern discourse, varieties of consequentialism and utilitarianism have family resemblance sufficient to warrant their interchange in utterance, in my opinion. If you think otherwise, and mark the relevant function of their distinction, I will observe it.
As for the substance of your point:
(i) in terms of its marginality, 23% (third out of four, one long dead) is appreciable but hardly impressive, given its absence from the neighbouring, larger field of political philosophy, to which I alluded (which, in the poll you cite, doesn’t include utilitarianism as an option). Moreover, if you look at the normative books achieving most (top 15) citations in post-war Anglophone philosophy, utilitarianism is absent: Rawls’ A Theory of Justice (26,768), Dworkin’s Taking Rights Seriously (7,892), MacIntyre’s After Virtue (6,579), Rawls’ Political Liberalism (6,352), Nozick’s Anarchy, State, and Utopia (6,246). The first possible utilitarian is all the way down at 30, at Parfit’s Reasons and Persons, with just 2,972 citations (which no one would ever call utilitarian, and which is only very partially ethically concerned at that). That is to say, liberal egalitarianism (Rawls seconded by Dworkin) is completely dominant, with Aristotelianism (MacIntyre) and libertarianism (Nozick) trailing. Of course, most citations of MacIntyre probably affirm his positive argument of the failure of the Enlightenment project, and reject his substitute reversion to Aristotelianism. In that sense, it might even be a two-horse race (although, again, it’s not really a race: liberal egalitarianism boasts over 40,000 citations between the three works above, libertarianism just 6,000). I should also add that the other lead works are not favourable to the whole enterprise of ethics: Wittgenstein, Rorty, Kuhn and so forth. If you allow the continent, Foucault and Sartre shoot to the top and below Rawls respectively, at the very least, I imagine (Beauvoir’s The Second Sex probably ranks as well).
Reference: http://leiterreports.typepad.com/blog/2009/11/the-most-cited-books-in-postwwii-anglophone-philosophy.html
(ii) I agree that researching optimum means of bringing about ones preferred unit of consequence can well integrate with a wider plurality of values; my issue is internal to the movement however, as I have discussed above with some elaboration
Your original claim concerned moral philosophy, but the evidence you provide in your latest comment predominantly concerns political philosophy. Consequentialism (a moral view) is compatible with liberalism (a political view), so evidence for the popularity of liberalism is not itself evidence for the unpopularity of consequentialism.
Furthermore, a representative poll where professional philosophers can state their preferred moral views directly seems to be a better measure of the relative popularity of those views in the philosophy profession than citation counts of books published over a given time period. The latter may be relied upon as an imperfect proxy for the former in the absence of poll data, but their evidential relevance diminishes considerably once such data becomes available.
Note, too, that using your criterion we should conclude that falsificationism—advocated in Conjectures and Refutations and Scientific Knowledge—is the dominant position in philosophy of science, when it is in fact moribund. Similarly, that ranking would misleadingly suggest that eliminative materialism—advocated in Consciousness Explained—is the dominant view in philosophy of mind, when this isn’t at all the case. In fact, many if not most of the books cited in that ranking represent positions that have largely fallen out of favor in contemporary analytic philosophy; this is at least the case with Kuhn, MacIntyre, Ryle, Rorty, Searle and maybe Fodor, besides Popper and Dennett. In addition, owing to discrepancies in the number of philosophers who work in different philosophical areas and the popularity of some of these areas in disciplines outside philosophy, the poll grossly overrepresents some areas (political philosophy, philosophy of science, philosophy of mind) and underrepresents others of at least comparable importance (metaphysics, epistemology, normative ethics), strongly suggesting that it is particularly ill-suited for comparisons spanning multiple areas (such as one involving both normative ethics and political philosophy) and further strengthening the case for relying on poll data over citation counts.
Let me however highlight that I agree with you that the high prevalence of consequentialists in the EA movement is a striking fact that raises various concerns and certainly deserves further thought and study.
I’m another effective altruist who is explicitly not a utilitarian. Having followed discussion on, e.g., the ‘Effective Altruists’ Facebook group for a long time, I’ve been part of discussions over whether effective altruism is tantamount to utilitarianism arise, and non-utilitarians come out of the wormwood. It hasn’t happened very often though.
I like the points raised from the link Josh You provided below.
If virtue ethics, and deontology, also both contain a quintessential consequentialist element, then maybe they can just be considered convoluted variants of consequentialism with other fundamental principles installed along with it. If so, perhaps effective altruism can be considered another set of (frameworks for) ethics involving consequentialism, but also valuing other things. Building a framework like that might be necessary for humans because we weren’t built to be very effective utilitarians. I believe effective altruism was started by lots of different types of utilitarians, but also non-philosophers who were affected by principles and heuristics effective altruists are designing.
William Macaskill makes a few good points here about why EA does not rely on utilitarianism. It’s true that a lot of EAs are utilitarian, but I’ve seen plenty of discussions on normative ethics among EA circles, so I wouldn’t describe it as a silent unanimity.
I would, as I already have, readily admit that EA is of ecumenical moral interest. Its practitioners, however, are overwhelmingly of a singular stripe. I have certainly never heard it discussed, having followed and somewhat intermingled with the community for some time.
I think it’s fair to say that effective altruists don’t discuss “the fact that they’re predominantly utilitarian” very much and that might seem kind of sinister on the surface but I’m not quite sure how they’re supposed to discuss this topic. They could do a mea culpa and apologise for their lack of philosophical diversity but this seems inappropriate. Alternatively, they could analyse utilitarianism in detail, which also seems wrong. What they have done is make a few public statements that in-principle, EA is more inclusive than that, which seems like a good first step. Is there much more that urgently needs to be done?
[I rearranged this to put the last paragraph first, because it gives the most concise and direct attention to my point of concern]
Let me put the question of tactics, with simplification, this way: insofar as you admit that optimising units of consequences is a subset of the panoply of moral obligations one faces, two things appear true. (i) externally, an organisation claiming merely to evaluate the best means of increasing valuable units of consequence per donation appears unproblematic; it facilitates your meeting of part of your moral obligations; (ii) an organisation internally operating, across its management, personnel and dissemination, with the sole goal of maximising valuable units of consequence per available resource, excludes the full range of the human values you recognise. Note that something similar holds for the internal composition of the movement. That is to say, while the movement might outwardly facilitate value pluralism, internally in organisation and composition, it abides by an almost singular logic. That can be extremely alienating for someone who doesn’t share that world-view, like myself.
I don’t encounter it as sinister in the slightest. I feel respondents are running away with the possible allusions or intended implications of my post. The EA community is seething with a very particular and on the whole homogeneous identity, a caricature of which might be drawn thus: a rigorous concern with instrumental rationality, with conforming available techniques and resources with given ends; an associated, marked favouring of analytically tractable meads/ends; and an unsophisticated intuitionistic or simply assumed utilitarianism, augmented in a complementary naturalistic world-view.
There is a whole lot to value there, exemplified well enough in the movement’s results. I do find two things alienating, however: the rationalization of the whole human experience, such that one is merely a teleological vessel to the satisfaction of the obvious and absolute good of benefits over costs (which for at least the few ‘professional’ members of the movement I have encountered, sits squarely alongside neoclassical economic orthodoxy); and the failure to ever talk about or admit human values other than the preferred unit of consequence.
I should stress immediately, contrary to the sentiment of your (generous) reply, these are largely experiences of individuals. I occasionally find that it contaminates analysis itself: such as in inter-generational comparisons (i.e. FHI’s straight-faced contemplation of the value of totalitarianism in guarding against xrisk), or tactical questions of how to best disseminate EA (i.e. again, in caricature: ‘say and do whatever most favourably brings about the desired reaction’). But for the most part, it does not make donor-relevant analysis problematic for me.
I want to say two things then: (i) that I find something problematic in an absolutising rationalization without great reflection; with being highly adept in means, without giving pause to properly consider ends. (ii) with the dominance this tendency has internal to the movement. (i) is a question of personal world-view adequacy, (ii) is one of organisational adequacy. Obviously I don’t expect those affirming (i) or its cognates to agree, but I do think (ii) has significance regardless of whether one observes or rejects it. Namely, for the idea, suggested in this thread, that the movement can both present itself as only attempting to satisfy an important subset of possible moral values, while being internally monological. You might readily accept this, but it is consequential for the limits of the movement’s membership at least.
*to repeat thrice for want to avoid misunderstanding and too heavy a flurry of down-votes, I readily admit that the study of maximising favoured consequences is of ecumenical interest, and is sufficient in itself to warrant its organisational study.
I partially agree here. The parts that I find easiest to agree with relate to exclusion of none utilitarians. I think it’s important that people who are not utilitarian can enter effective altruist circles and participate in discussions. I think it also might be good for effective altruists to pull back from their utilitarian frame of analysis and take a more global view of how their proposals (e.g. totalitarianism as a reducer of x-risk) might be perceived from a broader value system, if for no reason other than ensuring their research remainsbof wider societal interest. FHI would argue that they already do a lot of this, for example, in his thesis, Nick Beckstead argued that he the importance of the far future goes trough on a variety of moral theories,not just classical utilitarianism. But they have some room to improve.
I find it harder to sympathize with the view that effective altruists are collecting a a certain moral perspective unreflectively. I think most have read some ethics abd metaethics and some have read more than the average philosophy major. So the ‘naive’ and simple view can be held by a sophisticated reader.
My last suggestion is that given that the focus of effective altruism is how to do good, its only natural that its earliest adopters are consequentialist. If one thinks that different value systems converge in a lot of developing world or existential risk-related problems, then it might be appropriate to focus on the ‘how’ questions rather than trying harder to pin down a more precise notion of good. As the movement grows, one hopes that the values of its constituency will broaden.
If your non-utilitarianism makes you “want to maximally remedy unnecessary human indigence”, and my utilitarianism* makes me want the same, then what is the issue? It seems that at an operational level, we both want the same thing.
It just seems obvious to me that, all other things equal, helping two people is better than helping one. If various moral theories favoured by academics don’t reach that conclusion, then so much worse for them; if they do reach that conclusion, then all the better. And in the latter case, the precise formulations of the theories matter very little to me.
*I’m not purely utilitarian, but I am when it comes to donating.
That sentence you quoted doesn’t exhaust my normativity, but marks the extent of it which motivates my interest in EA. The word ‘maximally’ is very unclear here; I mean maximally internal to my giving, not throughout every minutia of my consciousness and actions.
The issue I wanted to raise was several-fold: that very many effective altruists take as obvious and unproblematic that utilitarianism does exhaust human value, which is reinforced by the fact that almost no one speaks to this point; that it seriously effects the evaluation of outcomes (i.e. the xrisk community, including if not especially Nick Bostrom, speak with a straight-face about totalitarianism as a condition of controlling nanotechnology and artificial intelligence); and the tactics for satisfying those outcomes.
In regard to the last point, in response to a user suggesting that we should reshape our identity, presentation and justification when speaking to conservatives, in order to effectively bring them to altruism, I posted:
“I find the this kind of rationalization—subordinating ones ethics to what can effectively motivate people to altruism—both profoundly conservative and, to some extent, undignified and inhuman, i.e. the utility slave coming full circle to enslave their own dictate of utility maximisation.”
That kind of thinking, however, is extremely common.
In response to your second paragraph:
“It just seems obvious to me that, all other things equal, helping two people is better than helping one.”
This simply begs the question: “helping” and “people” are heavily indeterminate concepts, the imputation of content to which is heavily consequential for the action-guidance that follows.
“If various moral theories favoured by academics don’t reach that conclusion, then so much worse for them; if they do reach that conclusion, then all the better. And in the latter case, the precise formulations of the theories matter very little to me.”
I find this perhaps culpable of wishful thinking; insofar as it would be nice if the natural structure of the world inhered an objective morality dovetailing with my historically specific intuitions and attitudes, that doesn’t itself vindicate it as so. More often that not, the imposition of the latter on the former occurs. Something seeming obvious to oneself isn’t premise for its truth.
If you follow the history of utilitarianism, it is a history of increasing dilution, from the moral naturalism of Bentham’s conception of a unified human good psychologically motivating all human action, to Mill’s pluralising of that good, to Sidgwick’s wholesale rejection of naturalism and value commensurability, and argument that the only register of independent human valuation is mere intuition, to Moore’s final reductio of the tradition in Principia Ethica (‘morality consists in a non-natural good, whatever I feel it to be, but by the way, aesthetics and interpersonal enjoyment are far and away superior’). Suffice it to say that nearly all utilitarians are intuitionists today, which I honestly can’t take seriously as an independent reason for action, and is a standard by which utilitarianism sowed its own death—any and all forms of utilitarianism entail serious counter-intuition. Hence the climb of Rawls and liberal egalitarianism to predominance in the academy; it simply better satisfies the historical values and ideology of the here and now.
My philosophical background is that of the physics stereotype that utterly loathes most academic philosophy, so I’m not sure if this discussion will be all that fruitful. Still I’ll give this a go.
At some pretty deep level, I just don’t care. I treat statements like “It is better if people get vaccinated” or “It is better if people in malaria-prone areas sleep under bednets” as almost axiomatic, and that’s my start-off point for working out where to donate. If there are lots of philosophers out there who disagree, well that’s disappointing to me, but it’s not really so bad, because there are plenty of non-philosophers out there.
The utilitarian bits of my morality do certainly come out of intuition, whether it’s of the “It is better if people get vaccinated” form or by considering amusingly complicated trolley problems as in Peter Unger’s Living High and Letting Die. And when you carry through the logic to a counter-intuitive conclusion like “You should donate a large chunk of your money to effective charity” then I bite that bullet and donate; and when you carry through the logic to conclude that you should cut up an innocent person for their organs, I say “Nope”. I don’t know anyone who strictly adheres to a pure form of any moral system; I don’t know of any moral system that doesn’t throw up some wildly counter-intuitive conclusions; I am completely OK with using intuition as an input to judging moral dilemmas; I don’t consider any of this a problem.
Yeah, the presence of futurist AI stuff in the EA community (and also its increasing prominence) is a surprise to me. I think it should be a sort of strange cousin, a group of people with a similar propensity to bite bullets as the rest of the EA community, but with some different axioms that lead them far away from the rest of us.
If you want to say that this is a consequence of utilitarian-type thinking, then I agree. But I’m not going to throw out cost-effectiveness calculations and basic axioms like “helping two better is better than helping one” just because there are people considering world dictators controlling a nano-robot future or whatever.
Please reply with a description of Effective Altruism that you think optimizes for, in priority, conciseness, likelihood of compelling the reader to learn more, and comprehensiveness.
After we get a sufficient number, I’ll repost them all at the same time as a poll.
This Forum May Offer Better, Newer Formats For Interviews
An initial interview with one effective altruist lets the rest of us know what that one person is up to. However, for individuals working on particularly deep and interesting projects, I want to know more than just what they’re doing. I want to know why, and how. For example: I might want to know more about what Brian Tomasik is doing with the Foundational Research Institute, or what Owen Cotton-Barrat is doing with the Global Priorities Project. That might require a second interview, or one that interacts with third parties.
However, going back and forth on the forum so much may be use too much of the valuable time such folks have in limited supply. Here’s a proposal for more effective interviews:
A call could be made requesting questions to be asked of a certain individual’s work lots of effective altruists want to learn about.
As effective altruists pose questions for the interviewee, the rest of us could select which questions we want answered most. This could be indicated through polling, or upvotes, or something.
The interviewee could take a look at the questions for which there is the most demand. They could choose a certain number to answer, or answer whichever questions they like.
I believe this could be a great way to make conversations public. This can make for more efficient public conversations, facilitating dozens of people’s ideas, rather than only those of a few.
I like this. It’s similar to how AMA’s work on Reddit, right? EAs should do AMAs :)
Oh, right, that’s a usefully existing format I totally forgot about. I suppose it would make sense for us to do AMAs here, but they could also be done on an effective altruist subreddit with lots of followers as well.
Over at LessWrong, user “mushroom” recently proposed a debiasing heuristic for dealing with unpopular ideas. In sum, his claim is that we should be extra charitable to such ideas because they are disproportionately more likely to be promoted by its most extreme, disagreeable or crazy adherents. In a comment, I wrote:
Do you find “mushroom”’s plausible? And do you agree that it has implications for our movement? If so, are there specific steps we can take to address this worry?
I find mushroom’s heuristic plausible, and I believe it may have such implications for effective altruism, but I’m not confident of that.
I’m concerned that effective altruism won’t become centralized enough to stop its ideas from being misrepresented by vocal individuals (with a large audience). I’m also concerned that whoever bears such power may liable to abusing it. Quieting voices by accusing their owners of being ‘radical’, or ‘crazy’ is a stigmatizing and accusatory label. Even if abuse is only perceived rather than real, this could lead to individuals, or organizations, drawing battle lines over who is qualified to represent effective altruism. Accusations of abuses of power can also be stigmatizing, and can be skewed easily.
Here are my suggestions for avoiding your original worry, Pablo, and preventing (accusations of) abuse of power:
If we indeed believe that it is how an idea is being represented, rather than the idea, or the representing party, that is inadequate, we can make this explicit, so that critical feedback isn’t perceived as a (personal) attack.
If somebody tried establishing a center for effective altruism PR, I expect it would be the CEA, or its new project, Effective Altruism Outreach. In any case, whichever organization stakes that position needs to have lots of integrity. Not only is espousing transparency, impartiality, and integrity the right thing to do, cooperating as such can foster trust within the community that that organization is reliable. So, accusations that such a group is overstepping its unduly censoring someone will be less debilitating, and can be gauged on their merits.
Whatever part of the movement takes the responsibility to manage how ideas are represented should acknowledge their own mistakes, and biases, have some sort of oversight, and welcome feedback from the whole movement on why their actions take place. Ideally, I’d like to see such an organization being as transparent as Givewell. However, I understand if such bureaucracy would be crippling to a lean organization with sparse resources to spend.
Prioritizing outreach activities is a good idea by itself, but I believe another discussion will need to cover how valuable effective altruists believe that is relative to our current endeavors. I respect how William MacAskill, and the Centre for Effective Altruism, have handled media requests and outreach in the last year, and I hope however they’re doing it keeps working as they expand their efforts.
If an individual (re)presents effective altruism well, but is celebrated, or better known, for associating with something much more controversial, I’m not confident in how that should be handled. To use an extreme example, I don’t want effective altruism to be most associated with baby-eating terrorists. However, if someone is well-known as being of a sexual minority, or a transhumanist, or an atheist, I don’t feel comfortable with us telling that person to express that identity less. I might be comfortable with someone very politely asking them to express themselves a bit differently, with less of an edge of fueling outrage. However, that doesn’t seem necessary if in reality that identity is benign and non-threatening, as I believe will most likely be the case.
Check this out: https://www.eaforchristians.org/
What are the implications of Robin Hanson’s idea of being a charity angel for effective altruism(more details on charity angels here)? For the purposes of answering this question, don’t limit yourself to thinking about being a charity angel to only intellectuals, as Robin Hanson primed for discussion in his original post. Please think broadly about what being a charity angel could be for whatever effective altruistic endeavor you might have in mind, and whether it would be worthwhile.
I believe the most relevant previous thought is one concern raised by Holden Karnofsky on the original post.
Effective altruism doesn’t have a great enough stake in any charity market for it to be able to make a signal that wouldn’t be drowned out by the noise the rest of any philanthropy charity makes. That is, effective altruism doesn’t seem to have enough capital of any sort to incentivize effectiveness in charity on a massive scale, that wouldn’t be drowned out by all the other grants, and prizes, out there. Nobody is going to care about the ‘effective altruism prize’ when attention is being drawn to all the other ones.
However, existing effective altruist organizations don’t necessarily believe that the biggest impact must be made by making the biggest media splash in a non-profit sector. Effective altruist organizations such as 80,000 Hours, the Global Priorities Project, the MIRI, and Leverage Research are trying to produce new research that could be applied, and leveraged, for greater magnitude later. If an organization needs a proposed solution to a problem they’re tackling, but cannot figure it out for themselves, and are unable to acquire the talent who could do it for them, perhaps they could offer a prize incentivizing someone to produce responses.
I know that 80,000 Hours, and the CFAR, each have hundreds of members, and the membership of each of those organizations is poised to grow in the future. Those might be good channels for advertising a charity prize, then, especially because they’ll be directly broadcasted to audiences that are already part of effective altruism.
Any other thoughts on this?
In this thread, you try to argue as well as you can against the cause you currently consider the highest expected value cause to be working on. Then you calibrate your emotions given the new evidence you just generated. This is not just a fun exercise. It has been shown that if you want to get the moral intuitions of a person to change, the best way to do so is to cause the person to scrutinize in detail the policy they are in favor of, not show evidence for why other policies are a good idea or why the person is wrong. To get your mind to change, the best way is to give it a zooming lens into itself. So what is your cause?
This is an iterated version of Nick Bostrom’s technique of writing a hypothetical apostasy
The cause I currently think is most important is what I call “getting the order right”. It assumes that for all technological interventions that might drastically reshape the future, there are conditional dependencies on when they are discovered or invented, such that under different contexts and timelines, each would be significantly more or less dangerous, in X-risk terms. So, here is why this may not be the best cause:
To begin with, it seems plausible that the Tricky Expectation View discussed in pg-85 of Beckstead’s thesis holds despite his arguments. This would drastically reduce the overall importance of existential risk reduction. One way in which TEV would hold, or an argument for views in that family, comes from considering the set of all possible human minds, and noticing that many considerations, both probabilistic and moral, stop being intuitive when deciding whether to pluck out one of these infinitesimally small entity from non-existence to existence is actually a good deal. No matter what we do, most minds will never exist.
Depending on how we carve the conceptual distinction that determines a mind, we could get even lower orders of probability of existence for any given mind. Furthermore, if being of a different type (in the philosophical ‘type’ ‘token’ distinction) than something that has already existed is not a relevant distinction, the argument gets even easier: for each possible token of mind, that token will most likely never live with overwhelming chance.
If there are infinitesimally small differences between minds then there are at least Aleph1 non-existent minds, and Aleph2 non-existent mind tokens.
These infinities seem to point to some sort of asymmetric view, in which there is some form of affiliation with existence that is indeed correlated with being valuable. It may not be as straighforward as “only living minds matter”, or even *The Tricky Expectation View” but something in that vicinity. Some sort of discount rate that is fully justified, even in the face of astronomical waste, moral uncertainty etc. This would be one angle of attack.
Another angle is assuming that X-risk indeed trumps all other problems but that it can be reduced more efficiently by doing things other than figuring out the most desirable order. It may be that there are yet unknown anthropogenic X-risks, in which case focus on locating ways in which humans could soon destroy themselves would be more valuable than solving the known ones. An argument for that may take this form:
A) There are true relevant unknown facts about the Dung Beetle
B) Our bayesian shift on how many unknown unknowns are left in a domain should roughly correlate with amount of research that has already been done in a topic.
C) Substantially more research has been done on Dung Beetles than existential risks.
Conclusion: There are true unknown relevant facts about X-risk
‘Relevant’ here would range over [X-risks] which would mean either a substantial revision of conditional probabilites on different X-risks or else just a substantial revision on the whole network once an unkown risk is accounted for.
So getting the order right would be less relevant than spending resources on finding unknown unkowns.
Anti-me: Finally, if our probability mass is highly concentrated in the hypothesis in which we are in a simulation (say 25%) confidence, then the amount of research so far dedicated to avoiding X-risk for simulations is even lower than the amount put into getting the order right. So one’s counterfactual irrepleceability would be higher in studying and understanding how to survive as a simulant, and how to cause your simulation not to be destroyed.
Anti-me 2: An opponent may say that if we are in a simulation, then our perishing would not be an existential risk, since at least one layer of civilization exists above us. Our being destroyed would not be a big deal in the grand scheme of things, so the order in which our technological maturity progresses is irrelevant.
Diego: The natural response is that this would introduce one more multiplicative factor on the X-risk of value loss. We conditionalize the likelihood of our values being lost given we are in a simulation. This is the new value of X-risk prevention. So my counterargument to that would be that for sufficiently small levels of X-risk prevention being important, other considerations, besides what Bostrom calls MaxiPOK, would start to enter the field of crucial considerations. Not only we’d desire to increase the chances of an Ok future with no catastrophe, but we’d like to steer the future into an awesome place, within our simulation. Not unlike what technologically progressive monotheist utilitarian would do, once she conditionalizes on God taking care of X-risk.
But MaxiGreat also seems to rely fundamentally on the order in which technological maturity is achieved. If we get Emulations too soon, malthusianism may create an Ok, but not awesome future for us. If we become transhuman in some controlled way and intelligence explosions are impossible, we may end up in the awesome future dreamt by David Pearce for instance.
(It’s getting harder to argue against me in this simulation of being in a simulation. Maybe order indeed should be the crucial consideration for the subset of probability mass in which we are simulated, so I’ll stop here).
Spreading EA to non-First World nations to take advantage of people’s preference for helping their own country. Lots of both rich and poor in BRICS these days.
Spreading EA to institutions and governments. I know CEA advised the UK government but I haven’t heard much about other governments or corporate giving (although I realize that only about 5% of donations come from business, with most of the rest being from individuals). Although I realize a critical mass of individuals probably needs to be reach before institutions start to change.
Spreading altruism by counteracting it’s opposing forces, mostly the high priority people put on dominance and conspicuous consumption. For example, if people become less materialistic than they can give more. Moreover, if a culture judges others less for “living simply” than it would allow people to give more without facing social consequences. There actually is a “minimalist” movement occurring these days. Some popular websites are devoted to it. Spreading minimalism would help EA.
Bringing more religious people onboard. If people see others at their church giving effectively than they will consider it as well. Many, many, religious people would only consider giving to charities of their own faith, and will never change. Recognizing this, why not strive to make Christian, Muslim, etc, charities more effective or start new effective religious charities?
On (1), I’m not convinced about spreading EA to developing nations is something effective altruists are currently equipped to do—the idea is currently most popular in the most elite universities, and its popularity diminishes significantly at mid-range universities. Among random wealthy individuals, it has some popularity but not a huge amount. It seems unlikely that developing nations are the best location for this kind of idea to gain a critical mass of support. However, I think there is a way to fulfil people’s preference for helping their own country. People who emigrate from developing to developed nations often send funds back home to relatively poorer family and friends. The overhead for such transfers can be reduced by software. One example of this, Wave, was founded by effective altruist Lincoln Quirk.
Yeah, I was thinking it would be down the line, as well.
By ‘spreading effective altruism’ do you mean ‘setting up charities doing effective work’ into developing countries? Because if so, it seems to me that spreading effective altruism as an idea throughout such countries by getting donors to support their own country might counteract spreading cosmopolitanism.
The Global Priorities Project is based out of the University of Oxford, a politically prestigious position affording the CEA the ability to get access to policymakers faster than other organizations might get access to their own governments.
I believe the very wealth and prosperity in nations which allows those nations to be more altruistic may be also the same forces which generate consumerist preferences in those nations. So, on a society-wide scale, spreading minimalist values might be on uphill battle. Still, on a more local level, among the people each of us knows personally, and who we’re in touch with as a movement, we can provide a solid front showing individuals being more minimalist like each of us individually, normalizing minimalism in our own communities.
Rossa Keefe-O’Donovan is a former researcher for Giving What We Can, and is currently studying a Ph.D. in development economics. I met him at the 2014 Effective Altruism Summit. We discussed that at some future point effective altruism may be able to affect changes in wide-scale organizations such as the Who Health Organization for a leveraged impact. The same could be done for large religious charities such as Red Cross, Red Crescent, or World Vision. One problem I foresee is that some religious charities may be more devout, zealous, or dogmatic about the work they’re doing. Thus, even if this is the case for a large religious charity which holds lots of potential for leveraged impact, it may be impracticable to convince them to change their tactics.
To affect religious people, it might take religious effective altruists. They could start organizations bridging effective altruism with their own religious communities, hold dialogues between effective altruists and their communities, and they could spread awareness. This may convince, for example, a Christian, or Muslim, to start a Christian, or Muslim, effective altruist organization, or to donate to an already existing effective charity.
**By ‘spreading effective altruism’ do you mean ‘setting up charities doing effective work’ into developing countries? Because if so, it seems to me that spreading effective altruism as an idea throughout such countries by getting donors to support their own country might counteract spreading cosmopolitanism.
Most Second World nations probably wouldn’t have the most effective interventions at helping humans, you’re right. But look at India and China, both countries have hundreds of millions in extreme poverty as well as millions of people with money to spend. I would think that having an EA organization in each of those countries that evaluated domestic charities, gave talks at universities, and sought out and promoted people earning to give to the media, would have a huge impact. Development expert Mal Warwick estimates there are 5 million organizations in the world helping the poor, mostly in the poor countries themselves, so the odds of India and China each having extremely effective charities would be very high in my estimation. And that’s not to mention that people in those countries can also donate to INGOs with operations in their own country. (I know both these countries already have charity evaluators but I haven’t been able to find out whether they are GiveWell or Charity Navigator types. The Indian one has an English website but it is currently down.)
**I believe the very wealth and prosperity in nations which allows those nations to be more altruistic may be also the same forces which generate consumerist preferences in those nations. So, on a society-wide scale, spreading minimalist values might be on uphill battle.
I don’t think minimalism would be received as weirder than being a serious EtGing, so it doesn’t make sense to me to write it off so quickly. There are already minimalist blogs with hundreds of thousands of unique monthly visitors – maybe if CEA gets in touch with them they will like EA and promote it on their blogs, in their books, and so on. The people reading minimalist books and websites would be more open to EA than the general public, I would presume. Also, an EA could make a minimalist website that focuses on minimalism with the odd mention of EA/EtG so as to get visitors that find the simple life interesting but don’t like to be preached to about donating more.
**Thus, even if this is the case for a large religious charity which holds lots of potential for leveraged impact, it may be impracticable to convince them to change their tactics.
Institutions in general are very slow to change, especially large ones, but I think that the non-profit sector can only ignore evidence-based interventions and effectiveness evaluation for so long. It’s like with the environmental movement. In the 90′s, environmentalism wasn’t as big, but in the 2000′s the public’s expectations have changed and now most companies have to at least claim they are sustainable just to stay relevant.
**To affect religious people, it might take religious effective altruists.
I think the only way EA will grow among a religion is if people see others in their religion doing it. A trickle will grow into a stream. I don’t think EA has any true weaknesses, I really believe that (as a philosophy, not as a movement), so it seems like just a matter of time before religious people start earning to give, donating more based on evidence, etc.
Tip: in the future, for commenting, if you want to show a paragraph, or unbroken portion of text, quoted, preface it with the “>” symbol without any spaces between it, and the first word.
I misunderstood you on what you meant by promoting some sort of effective altruism in developing countries. I understand now. I agree spreading effective altruism throughout China, and India, would make lots of sense.
Effective altruist Kristian Ronn and his friend have launched an effective altruist organization launching a project aimed at helping anyone figure out how to decrease their negative impact on the world. It’s called Normative], and it’s in a contest to be funded. I’m unsure if it’s non-profit, or for-profit. Click here to vote for it.
By all means we should still try. I think you’re right.
We already agree that it’s religious effective altruists who will likely cause effective altruism to grow greatly among different religions. I’m glad you’re so optimistic. I sincerely believe there isn’t much wrong with effective altruism either. It might be the first antifragile social movement I’ve ever been part of.
What are the best books related to altruism that you have seen? Which books mostly influenced your thinking as an EA?
Moral Tribes—Joshua Greene
Better Angels of Our Nature—Pinker
Mathematical Models of Social Evolution—McElreath and Boyd
Non-Zero—Wright
Intentional Stance—Dennett
Darwin Dangerous Idea—Dennett
Good and Real—Drescher
Mortals and Others—Russell
Proposed Roads to Freedom—Russell
Autobiography—Bertrand Russell
Superintelligence—Bostrom
Collapse—Diamond
Bonobo Handshake—Vanessa Woods
La Grammatologie—Derrida (this one was influential for how terribly innefective and non- altruist it is, which made me have an alarm for useless philosophy)
Is Personal Identity What Matters? - Derek Parfit (made me realize that there is as much reason to care about you, reading this, as there is to care about retired me, and it’s just cheaper to help others, far.)
One of the concepts that is currently gaining more traction among EA’s is that of Crucial Considerations.
Which considerations do you think will be more crucial for us to get right in the next ten years in order to produce a massively better world?
When will the results of the EA survey be released? They survey was underweigh in early may, and it’s now late September. I realize there were many problems with the survey (Gregory Lewis pointed out some pretty convincing ones), but a lot of EAs spent a lot of time filling it out, so we should at least get the raw data (of those who agreed to let their data be public) and summary stats.
I would like to see a visual and possibly interactive map of all organizations and projects related to Effective Altruism, and their relationships including hierarchy, funding, room for funding, members, potential scale/scope of impact with some general metric, etc (suggest other useful attributes, and links to similar maps).
Would this project be worth the investment?
EDIT: By map I mean something like a mind map, not geographical.
Of relevance, Rob Wiblin of CEA has previously listed cause prioritisation organisations, so you could start by talking to him about that.
There is already a list of effective altruists at the effective altruism hub. This can be supplemented with the old 80,000 Hours members list. Any attempt to create an additional master list would have a lot of work to do to convince others that it was not just going to be one additional separate incomplete list.
Thanks. This isn’t meant to list all individual EAs, but be an meta-representation of the playing field; see my replies to Evan.
Some parts of this project seem worth the investment, but not all of them, and not all the worthy parts at the same time. Essentially, we could build a map with basic facts at first, and add more to it, making it richer, as the data we’d be looking for becomes more accessible. Here’s my rationale for what’s out:
Effective altruist organizations tend to have small teams of staff, with only one or two executives. A link from the map to the staff page of an organization's website would suffice for transparency, since there are so few levels of hierarchy in the first place.
Some effective altruist organizations have a membership base numbering in the hundreds, such as 80,000 Hours, The Life You Can Save, and the Center for Applied Rationality. So, listing all the members would be impractical. Also, I believe it would be too difficult to negotiate getting all the data one might want without betraying the privacy of the users.
Effective altruism is not nearly centralized enough for an official organization to coordinate this. The work would likely fall on .impact, which is already swamped with projects. The good news is that one or more eager people are all it takes to get the ball rolling by working on projects they propose to .impact. The major investment would be time and personal effort. It might be more difficult to get large sums of money for making this map, but there are some effective altruists who provide minor funds to individual effective altruist projects.
To start off with, I believe the map could be broken down into the four categories of organizations I delineated below, and the map could conceptually show relationships between them. For each organization, we would have a link to their mission statement, or about page, a link to their staff page, a blurb about their major accomplishments, and a blurb about their current work.
Not all effective altruist organizations have optimally organized, or publicly available information about:
funding
room for more funding
potential scale/scope of impact with some general metric, etc.
I would like to see effective altruist organizations publish this sort of material, but coordinating them all to do so would be a separate task from getting that information up on the map.
It’s good you asked this question, because it forced me to think about what’s feasible, and what’s not.
Regarding hierarchy: this was more of a meta-hierarchy of which projects might encompass the scope of others, etc., rather than official associations between the orgs.
I didn’t intend to link every person who identifies as an EA; rather just display which major players work on what projects.
I didn’t envision this as a huge project; I think even just plugging things into a mind map would be nice.
.impact is a volunteer force of effective altruists who work on projects like this. At some point they were working on a map of all the effective altruist meetups. The map you're proposing would be different. I want to clarify some things:
"Organizations" might be too broad a category to belong all on one chart; it could refer to any of several types of groups:
Charities supported by effective altruists but not central to effective altruism per se, such as the Against Malaria Foundation, GiveDirectly, and Farm Sanctuary.
Getting them involved in such a project might be more difficult, because these organizations are committed to their individual goals outside of effective altruism. I imagine for some we could get the data if we agreed not to abuse it, ensured the data's security, and effective altruists did all the work of visualizing it.
Effective altruist organizations, which tend to do advocacy or research, such as the Centre for Effective Altruism, GiveWell, and the Center for Applied Rationality.
Projects, which would include informal groups such as .impact, or formal collaborations between two organizations such as the Global Priorities Project. There are frequent temporary collaborations, e.g. between the Center for Applied Rationality and Leverage Research, that are informal, so I don't believe they would make it onto this map.
For-profit enterprises started by effective altruists with the intent to donate a large portion of the profits, or owners’ salaries, to effective charities. There are definitely a few, though I don’t know much about them, so try asking about them in a separate comment thread.
Right.
My core intention is to help visualize the “playing field” of everything EA-related, to aid with:
Deciding the best meta-charities to fund
Visualizing relationships, and how certain organizations could act as force-multipliers for others
Strategizing coordination between orgs, and perhaps assessing whether the right people are in the right places
In conversation with Rolandas, we wondered whether there are enough EAs learning programming to justify creating a dedicated Facebook or Google group. If you would be interested in participating in such a group, please leave a comment or contact me privately.
Peter Hurford and Ozzie Gooen coordinate .impact, so I believe they might be the best people to ask for an estimate of how many effective altruists are learning programming interactively with others. You might also try asking a staff member from 80,000 Hours who could estimate this.
Can anyone think of a way effective altruists, as a group or as individuals, can playtest (their own) different approaches to spreading effective altruism, whether among the people they know personally or to the public at large? Also, how could any of us go about assessing and comparing the impact of such a thing? Is there an experimental design we could set up for this?
Absolutely. This is how all marketing campaigns work, and I am going to be starting on this shortly; it would be great to have you involved. In terms of the approach, as Peter Singer said, effective altruism is a movement for the head and the heart. In outreach you need to use the head to capture the heart. I am working on the outreach strategy for Sustainable Human, which has a following of 1.3 million on Facebook, all built by two guys over the last two years. Their top post is here: https://www.facebook.com/photo.php?fbid=10151541387192909&set=a.258016217908.138579.117609792908&type=1. Once you have a movement of that size, you can normalise generous giving by creating a strong supporter network of local meetups; most people, once they are comfortable that GiveWell is honest, will probably be content to just give to its top charities.
The point of departure has to be that most people don't think like effective altruists (you conduct market research to figure out how they do think, and base your strategy on that research), but very many people want a happier, fairer world and will be willing to pay generously to work towards achieving it, especially if they are part of a community that endorses and gives status to generous giving.
Good idea. Generals use wargames for the same purposes, and wargames are essentially a kind of simulation. Perhaps some analogous simulations could be constructed.
Another idea, of course, is to launch a mini version of your idea, if that's possible. For this to be a good strategy, though, the mini version has to be sufficiently similar to the main one.
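To make the experimental-design question concrete: one minimal approach would be to try two outreach methods on comparable audiences, count a binary outcome for each (say, pledge sign-ups), and compare conversion rates with a standard two-proportion z-test. The sketch below illustrates this; all the numbers are hypothetical.

```typescript
// Sketch: compare two outreach approaches by conversion rate.
// All numbers below are made up; this only illustrates the comparison.

function twoProportionZ(
  successA: number, nA: number,
  successB: number, nB: number,
): number {
  const pA = successA / nA;
  const pB = successB / nB;
  const pooled = (successA + successB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pA - pB) / se; // |z| > 1.96 ~ significant at the 5% level
}

// e.g. approach A: 18 of 200 people took the pledge;
//      approach B: 35 of 210 people did.
console.log(twoProportionZ(18, 200, 35, 210).toFixed(2)); // ≈ -2.31
```

The hard part in practice is not the arithmetic but making the audiences comparable and agreeing on what outcome to count.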
I’d like to propose a web application targeted specifically at donors that captures recurring payments for an EA meta-charity and helps them manage the donation (a rough sketch follows the list below):
Easily capture monthly, automatic donations based on some simple donor budgeting with a very streamlined, elegant onboarding process
Funnel these payments to something like the GiveWell fund
Let the donor easily view sum total donations over time, view some simple budgeting, tweak their commitment, manage their payment method, and do tax accounting/reporting.
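To give a sense of scope, here is a minimal sketch of the core data model such an app might track, assuming a card-on-file subscription flow. The type names are hypothetical, and the payment processor is abstracted behind a stub, since none has been chosen.

```typescript
// Sketch of the core objects such an app might track.
// All names are hypothetical; payment processing is abstracted away.

interface Donor {
  id: string;
  email: string;
  monthlyBudgetUsd: number; // from the simple budgeting step
}

interface RecurringDonation {
  donorId: string;
  amountUsd: number; // should not exceed monthlyBudgetUsd
  fundId: string;    // e.g. a GiveWell-style fund
  active: boolean;
}

interface Receipt {
  donorId: string;
  amountUsd: number;
  chargedAt: Date; // used for running totals and tax reporting
}

// Monthly job: charge each active donation and record a receipt,
// so the donor can later view sum totals and export for taxes.
function runMonthlyCharges(
  donations: RecurringDonation[],
  charge: (donorId: string, amountUsd: number) => boolean, // processor stub
): Receipt[] {
  const receipts: Receipt[] = [];
  for (const d of donations) {
    if (d.active && charge(d.donorId, d.amountUsd)) {
      receipts.push({
        donorId: d.donorId,
        amountUsd: d.amountUsd,
        chargedAt: new Date(),
      });
    }
  }
  return receipts;
}
```

Summing a donor's receipts then gives the running total and the tax report essentially for free, which covers most of the "manage the donation" feature set.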
I think this movement should have a website dedicated solely to onboarding new donors to commit to recurring donations to an EA fund with a user experience that could be comparable to something like Slack. I like the idea of removing the need for the irrational and sporadic decision-making process involved in traditional online giving, and instead deferring to experts and reducing the decision to a single, enlightened commitment. I think the more neutral branding of something like GiveWell should be used.
I am certainly not qualified to create a fund or pick and choose what to give to, so would want to use something like GiveWell (I am also aware of Giving What We Can). My only concern is that GiveWell is relatively narrow in scope so far. I understand this is for good reasons, but opening up to a wider range of causes could lead to massively greater donation volume. The question is whether this increased volume could have a much greater altruistic impact over time, if only because it would warm people up to the idea of committed altruism at a much larger scale. But I am definitely willing to suspend this concern in favor of getting the site implemented.
Abstracting the donation process could be very convenient. I think it is a barrier to ask people to click through and allot funds to multiple orgs via multiple sites and donation processes, and have to gather and organize receipts themselves. I simply think that streamlining all this into a single UX could be a big selling point.
I am myself a web developer, and I think I could build this out fairly quickly, depending on how payments get processed. I want it to be a purely altruistic venture and would need no compensation. I've already built a fairly large fundraising infrastructure startup and have a fair amount of direct experience with these types of apps. (I don't want to list that company here, as I don't want to distract the discussion, and I want this project to be purely altruistic, totally disconnected from any startup ventures.)
If there is another project already with these explicit goals, let me know. Otherwise, I’d love to hear any feature requests or concerns around the idea from fellow donors. After some validation, I will go ahead and build an ‘MVP’ of the website sans payment processing and design, and then will try to get in touch with GiveWell or another metacharity for actually processing the funds and working the branding and copy.