I didn’t mean to imply that it’s hopeless to increase charitable giving in China; rather the opposite: giving is so low it can only go up! Besides that, I agree with all your points.
The Chinese government already provides foreign aid in Africa to further its interests in the region. I was thinking about how we could possibly get it to expand that aid. The government seems almost impossible to influence directly, but perhaps EAs could influence African governments to solicit more foreign aid from China? One possible negative consequence, however, is that receiving more aid from China may make African countries more susceptible to accepting bad trade deals, etc.
I don’t know how to engage with China, but I do strongly feel that it holds huge potential for both altruism and GCRs, which shouldn’t be ignored. I like CEA’s approach of seeking expertise from China generalists. There are a number of existing Western think tanks focused on China that could be useful to the movement, but I think that a “China czar” for EA is a necessity.
I agree with you. “Effective altruists” are not interested in helping others, only in furthering their elite white atheist demographic or showing that they are intellectually and morally superior as individuals. They will steal my ideas and recommendations because they know they are robust, while shunning me and using the downvoting system to hide my comments because I’m outside their demographic.
People use the concept of EA, especially x-risks, as a front for world destruction, their true goal. Literally. Who would suspect that the very same people who are supposedly trying to save the world are themselves the ones looking to destroy it using weapons of mass destruction? They are the most dangerous group in the world.
You’re right, I changed “subsidiary” to “affiliates.”
What is it about public safety that you find so objectionable? I take it you are for enacting the morals of FRI and destroying all life on earth?
Am I the only one who thinks that it should be taboo among EAs to promote how great death is and how important it is to take action on the realization that death is great (in other words, encouraging mass murder)?
I am, aren’t I? I thought so.
CEA pretends to promote action against global catastrophic risks to give itself a good rep while performing their real mission: encouraging cultists to wipe out all life on earth through affiliates like Foundational Research Institute. You are a fraud and the public should boycott you and your pseudo-movement! The “effective altruism” movement is unquestionably the most dangerous and fraudulent group in the world.
I agree that financial incentives/disincentives result in failures (i.e. social problems) of all kinds. One of the biggest reasons, as I’m sure you mention at some point in your book, is corruption: e.g. the beef/dairy industry paying off environmental NGOs and governments to stay quiet about its environmental impact.
But don’t you think that non-financial rewards/punishments also play a large role in impeding social progress, in particular social rewards/punishments? E.g. people don’t dress warmly enough in the winter because others will tease them for being uncool, people bully others because they are then respected more, etc.
It could be a useful framing. “Optimize” to some people may imply making something already good great, such as making the countries with the highest HDI even better, or helping emerging economies become high income, rather than helping the countries with the most suffering catch up to the happier ones. It could be viewed as helping a happy person become super happy rather than helping a sad person become happy. I know this narrow form of altruism isn’t your intention; I’m just saying that “optimize” does have this connotation. I personally prefer “maximally benefit/improve the world.” It’s almost the same as your expression but without the make-good-even-better connotation.
I think EAs have always thought about the impact of collective action, but it’s just really hard, even impossible, to estimate how your personal efforts will further collective action and to compare that to more predictable forms of altruism.
Of course, I totally forgot about the “global catastrophic risk” term! I really like it, and it doesn’t only suggest extinction risks. Even its acronym sounds pretty cool. I also really like your “technological risk” suggestion, Rob. Referring to GCRs as the “long-term future” is a pretty obvious branding tactic by those who prioritize GCRs. It is vague, misleading, and dishonest.
For “far future”/“long-term future,” you’re referring to existential risks, right? If so, I would think calling them existential risks or x-risks would be the clearest and most honest term to use. Any systemic change affects the long term, such as factory farm reforms, policy change, changes in societal attitudes, medical advances, environmental protection, etc. I therefore don’t feel it’s that honest to refer to x-risks as the “long-term future.”
I’m sure promoting killer robots will be popular among “effective altruists,”/ISIS, as it is a way to kill as many people as possible while making it look like an accident. “EAs” aren’t fooling anyone about their true intentions.
By regular morals, I mean basic morals such as treating others how you would like to be treated, i.e. rules such that you would be a bad person if you failed to abide by them. While I don’t consider EA supererogatory, neither do I think that not practicing EA makes someone a bad person; thus, I wouldn’t put it in the category of basic morals. (Actually, that is the standard I hold others to; for myself, I would consider it a moral failure if I didn’t practice EA!) I think it actually is important to differentiate between basic and, let’s say, more “advanced” morals, because if people think that you consider them immoral, they will hate you. For instance, promoting EA as a basic moral that makes one a “bad person” if she doesn’t practice it will just result in backlash from people discovering EA. No one wants to be judged.
The point I was trying to make is that EAs should be aware of moral licensing, which means giving oneself an excuse to be less ethical in one department because you see yourself as being extra-moral in another. If there is a tradeoff between exercising basic morals and doing some high-impact EA activity, I would go with the EA activity (assuming you are not actually creating harm, of course). For instance, I don’t give blood because the last time I did I was lightheaded for months. Besides decreasing my quality of life, it would also hurt my ability to do EA. I wouldn’t say giving blood is an act of basic morality, but it is still an altruistic action that few people can confidently say they are too important to consider doing. Do you not agree that if doing something good doesn’t prevent you from doing something more high impact, then it would be morally preferable to do it? For instance, treating people with kindness: people shouldn’t stop being kind to others because it won’t result in some high global impact.
I think it may be useful to differentiate between EA and regular morals. I would put donating blood in the latter category. For instance, treating your family well isn’t high impact on the margin, but people should still do it because of basic morals, see what I mean? I don’t think that practicing EA somehow excuses someone from practicing good general morals. I think EA should be in addition to general morals, not replace it.
Perhaps I got it wrong, but I thought that the premise of your position (that EA outreach should proportionally represent the favourite causes of people who identify as EAs) is that EAs (however “effective altruist” is defined) are morally and intellectually superior to the public. I know for a fact that this is the prevailing attitude EAs have. I would really like to know why it is not enough to educate the public on EA-related issues. Why should the public care what the favourite cause is of an upper-class 25-year-old who donates $500 a year to the same charity he supported before he learned about effective altruism, discusses computer science concepts with his friends, and denies the reality that people in poor countries are themselves best positioned to solve their problems? How is that person special?
It’s hard for me to imagine a more prejudiced group of people than EAs. You literally hate everyone different from you, ie. people who love God or have a different background or social class. Above all, EAs are extremely racist, denying that people in low income countries themselves have the power to solve their problems and perpetuating the colonial myth that improving the world is the sole realm of privileged white people. Most people in the movement have little empathy for others and are just using it to validate their feelings of superiority and further the dominance of their social class/race. (I am referring to EAs’ attitudes. I don’t mean to suggest that helping others is itself condescending/bad in any way.)
It is the public that should be teaching morals to “EAs”, not the other way around. God bless.
My point was that EAs probably should exclusively promote full-blown EA, because that has a good chance of leading to more uptake of both full-blown and weak EA. Ball’s issue with the effect of people choosing to go part-way after hearing the veg message is that it often leads to more animals being killed, due to people replacing beef and pork with chicken. That’s a major impetus for his direct “cut out chicken before pork and beef” message. It doesn’t undermine veganism, because chicken-reducers are more likely to continue on towards that lifestyle, probably even more so than someone who went vegetarian right away. Vegetarians have a very high drop-out rate, but many believe that those who transitioned gradually stick with it longer.
I think that promoting effectively giving 10% of one’s time and/or income (for the gainfully employed) is a good balance between promoting a high-impact lifestyle and being rejected due to high demandingness. I don’t think it would be productive to lower the bar on that (i.e. by saying cause neutrality is optional).
One thing to keep in mind is that people often (or even usually) choose the middle ground by themselves. Matt Ball often mentions how this happens in animal rights, with people deciding to reduce meat after learning about the merits of vegetarianism, and notes that Nobel laureate Herb Simon is known for this observation that people opt for sub-optimal, “good enough” decisions.
Thus, I think that in promoting pure EA, most people will practice weak EA (i.e. not cause-neutral) of their own accord, so perhaps the best way to proliferate weak EA is by promoting strong EA.
Certainly, no one should be expected to promote things they don’t believe. Which is why if you’re like many in the community, using EA to promote your pre-existing atheist agenda, you should not do outreach, nor call your meetup an “effective altruism” group.
It is your EA community that considers the public stupid, Michael. I completely disagree! Perhaps if your group respected the public more, they might listen to you.
My second point was that the public, being smart, recognizes that the EA community has no moral authority and therefore doesn’t care what their favourite causes are. EAs should thus use logic, not authority, to influence the public.
I totally understand your concern that the EA movement is misrepresenting itself by not promoting issues proportional to their representation among people in the group. However, I think that the primary consideration in promoting EA should be what will hook people. Very few people in the world care about AI as a social issue, but extreme poverty and injustice are very popular causes that can attract people. I don’t actually think it should matter for outreach what the most popular causes are among community members. Outreach should be based on what is likely to attract the masses to practice EA (without watering it down by promoting low impact causes, of course). Also, I believe it’s possible to be too inclusive of moral theories. Dangerous theories that incite terrorism like Islamic or negative utilitarian extremism should be condemned.
Also, I’m not sure to what extent people in the community even represent people who practice EA. Those are two very different things. You can practice EA, for example by donating a chunk of your income to Oxfam every year, without having anything to do with others who identify with EA, and you can be a regular at EA meetups, often discussing related topics (i.e. a member of the EA community), without donating or doing anything high impact. Perhaps the most popular issues acted on by those who practice EA are different from those discussed by those who like to talk about EA. Being part of the EA community doesn’t give one any moral authority in itself.
I don’t see how TYLCS is selling out at all. They have the same maximizing impact message as other EA groups, just with a more engaging feel that also appeals to emotions (the only driver of action in almost all people).
Matt Ball is more learned and impact-focused than anyone in the animal rights field. One Step for Animals and the Reducetarian Foundation were formed to save as many animals as possible, complementing, not replacing, vegan advocacy. Far from selling out, One Step and Reducetarian are exceptions to the many in animal rights who have traded their compassion for animals for feelings of superiority.
I really respect the moderators of this forum for allowing me to advocate for public safety (ie. criticize NUE) and removing comments that could endanger public safety (ie. advocating suicide)!
Those radicalization factors you mentioned increase the likelihood of terrorism but are not necessary for it. Saying that people don’t commit terror from reading philosophical papers, and thus those papers are innocent and shouldn’t be criticized, is a pretty weak argument. Of course such papers can influence people. The radicalization process starts with philosophy, so to say that this first step doesn’t matter because the subsequent steps aren’t yet publicly apparent shows that you are knowingly trying to allow this form of radicalization to flourish. Although, NUEs do in fact meet the other criteria you mentioned. For instance, I doubt that they have confidence in legitimately influencing policy (i.e. convincing the government to burn down all the forests).
FRI and its parent EA Foundation state that they are not philosophy organizations and exist solely to incite action. I agree that terrorism has not in the past been motivated purely by destruction. That is something that atheist extremists who call themselves effective altruists are pioneering.
I am not a troll. I am concerned about public safety. My city almost burned to ashes last year due to a forest fire, and I don’t want others to have to go through that. Anybody read about all the people in Portugal dying from a forest fire recently? That’s the kind of thing that NUEs are promoting and I’m trying to prevent. If you’re wondering why I don’t elaborate my position on “EAs” promoting terrorism/genocide, it is for two reasons. One, it is self-evident if you read Tomasik and FRI materials (not all of it, but some articles). And two, I can easily cause a negative effect by connecting the dots for those susceptible to the message or giving them destructive ideas they may not have thought of.
Have you considered combining the “GiveWell for impact investing” idea with the Effective Altruism Funds idea and creating an EA impact investing business within your charity? You could hire staff to find the best impact investing opportunities and create a few funds for different risk tolerances. Theoretically, it could pay for itself (or make serious money for CEA if successful enough) with a modest management fee. I’m not sure if charities are allowed to grant to businesses, but I know they can operate their own businesses as long as the business is related to their mission.
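To make the "pay for itself" claim concrete, here is a back-of-envelope sketch of the fee arithmetic. All figures (staff costs, fee rate) are purely illustrative assumptions on my part, not actual CEA or fund numbers:

```python
# Hypothetical break-even arithmetic for a fee-funded impact-investing arm.
# Assumptions (illustrative only): a flat annual management fee charged on
# assets under management (AUM), and fixed yearly operating costs for staff.

def annual_fee_revenue(aum: float, fee_rate: float) -> float:
    """Yearly revenue from a flat management fee on assets under management."""
    return aum * fee_rate

def break_even_aum(annual_costs: float, fee_rate: float) -> float:
    """AUM needed for fee revenue to cover the operation's yearly costs."""
    return annual_costs / fee_rate

# Assumed: $300k/year for a small research staff, and a 0.5% management fee.
costs = 300_000
fee_rate = 0.005

print(break_even_aum(costs, fee_rate))        # AUM needed to break even
print(annual_fee_revenue(100_000_000, fee_rate))  # revenue at $100M AUM
```

Under these assumed numbers the operation breaks even at $60M under management, and anything beyond that (e.g. $500k/year at $100M AUM) becomes surplus for the charity. The real threshold depends entirely on the actual costs and whatever fee rate the market would bear.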