What advice do you have for teaching EA courses in an academic context (esp. philosophy)? Besides the Ethics projects, which parts of your classes on the topic do you think are most successful or most popular?
mhendric
Most liberals and libertarians identify with non-consequentialist ethics. Consequentialism is (sometimes? often?) seen as an antagonist to, or a threat to, liberalism or libertarianism. Sometimes I worry that Effective Altruism's strong connection to consequentialist ethical positions serves as a hindrance in popularizing it among modern liberals and libertarians.
Do you agree with this assessment? Do you think this can change? In what ways would you like to see consequentialists engage with liberal or libertarian ideas? In what ways can we make liberals or libertarians engage more with consequentialist ideas?
“I agree with Lynette Bye that most of the working hours literature is poor—I’m even more skeptical than she is about agenda-driven research on Gilded Age factory workers—and that gaining an impression from anecdotes of top performers is better.”
I am worried about relying on anecdotes of top performers, since this has an obvious selection effect: it neglects the (probably sizeable) group of people who tried stimulant-driven work binges and simply burned out.
This is addressed later, though only hand-wavingly:
“A third reason is that burnout risk might be overrated if most of your impact comes from the small chance of you being a very high performer, perhaps because being 99th percentile is 100+ times better than being 90th percentile. This makes studying the habits of top performers even more useful because the survivorship bias is less important.”
First, it seems unattractive to me for EA to become a large group of amphetamine-fueled workaholics with high burnout rates—not even because of optics, but because of the immense suffering of those who will burn out.
Second, this neglects the question of how many of the high-impact performers would have been high-impact absent amphetamines or excessive working hours.
Third, it strikes me as implausible that “99th percentile is 100+ times better than being 90th percentile” for the target groups of “operations, entrepreneurship, or community-building”. I did a tad of community-building myself, and would be very surprised if, for a community-builder, adding 20 hours of work a week even approximated the value of the first 40 hours spent on community-building, and honestly shocked if it exceeded it by a factor of 100.
Lastly and most importantly, it is entirely unclear to me how the “small chance of being a very high performer” relates to the “chance of burnout”. It seems entirely plausible to me that the chance of becoming Erdős-like because I take stimulants and work a ton is thousands of times smaller than the chance that I’ll burn out because I take stimulants and work a ton.
I also generally think that health-related advice that goes against widely-held priors should at least attempt to quantify risks and benefits using actual numbers, rather than hand-waving.
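To make the point concrete, here is a minimal sketch of the kind of expected-value comparison such advice should come with. All probabilities and multipliers below are made-up placeholders, not estimates:

```python
# Toy expected-value comparison of two work styles.
# Every number here is a hypothetical placeholder, chosen only to
# illustrate how the comparison hinges on quantities the advice omits.

def expected_impact(p_top, top_multiplier, p_burnout, burnout_multiplier,
                    baseline=1.0):
    """Expected impact relative to a baseline performer.

    p_top: chance of becoming a very high performer
    top_multiplier: impact of a very high performer vs. baseline
    p_burnout: chance of burning out
    burnout_multiplier: impact after burnout vs. baseline
    Remaining probability mass stays at baseline impact.
    """
    p_normal = 1.0 - p_top - p_burnout
    return (p_top * top_multiplier
            + p_burnout * burnout_multiplier
            + p_normal * baseline)

# Sustainable 40-hour weeks: no shot at "Erdős-like" output, low burnout risk.
sustainable = expected_impact(p_top=0.0, top_multiplier=100,
                              p_burnout=0.02, burnout_multiplier=0.1)

# Stimulant-fueled work binges: tiny chance of 100x output, high burnout risk.
binge = expected_impact(p_top=0.001, top_multiplier=100,
                        p_burnout=0.30, burnout_multiplier=0.1)

# With these placeholders, the binge strategy loses despite the 100x
# multiplier, because the burnout probability dwarfs p_top.
print(sustainable, binge)
```

With these (entirely invented) numbers the work-binge strategy comes out worse, and with others it comes out better; that is exactly why the conclusion cannot be reached without the numbers.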
This post is quite informative, but at points seems written in a needlessly harsh tone.
E.g. “If Givewell was serious about welcoming outsiders’ input into what they could do better, they’d work with experts to improve their hiring process. But they’re not a serious organisation, so I suspect they’ll ignore this.”
I read this part as venting some anger after being rejected (in a frustrating way!), which is understandable. But it makes it harder for me to place the post more broadly, as I worry that parts may be similarly exaggerated or that the focus on negative parts may omit other parts that would be needed for a representative picture of the application process.
Still, I found this informative and upvoted. I wanted to mention it as it may explain the voting pattern.
Thanks Jan! Could you elaborate on the first point specifically? Just from a cursory look at the linked doc, the first three suggestions seem to have few drawbacks to me, and seem to constitute good practice for a charitable movement.
Set up whistleblower protection schemes for members of EA organisations
Transparent listing of funding sources on each website of each institution
Detailed and comprehensive conflict of interest reporting in grant giving
Very impressive! Well done!
Interesting post and curriculum. I look forward to hearing about the outcomes of the first run as you evaluate them and get results. My own estimate of the likelihood of this achieving 85% better outcomes than the current method is significantly lower, but I think there’s a chance this will be an improvement.
That being said, some points of disagreement.
I think the framing of “creating” rather than “finding” motivated altruistic individuals does not match my own expectation of dealing with university students. When I was an undergraduate myself, I definitely conceived of myself as altruistic, and was actively looking for a cause/group to become active in, both for the social and ethical aspects of activism. Now that I teach first-year students, I think there is a large group of similarly motivated altruistic individuals who seek a group to engage with, and I think there is a lot of value in identifying them and pointing them towards EA as a way to act on their altruistic motivation. Many students I meet then stay in the first group/broad direction they focused their altruism on, making it even more valuable to present them with EA as an option early on.
I am worried that this group in particular would be somewhat turned off by the flair of the syllabus you present. Young people who are already altruistically motivated may not be looking to “Understand Themselves”, “Find Meaning”, or “Try to live a happy life”. Rather, they may be focused on their altruistic motivation, angry and upset not because their own lives are going poorly, but at the injustice they became aware of as they grew up (think: climate change, poverty, racism, discrimination, etc.). A syllabus that focuses on oneself, rather than on others, may not be a good fit for already altruistic young adults. I myself would likely not have enrolled in this program at an early age, having been a pretty arrogant young man who thought he had “figured out” the problems and solutions facing the world, as I was mainly scouting for groups that helped me address the problems I found most pressing.
One reason for skepticism about your predictions is that my prior for “intervention induces altruistic motivation” is relatively low after reading about a series of philosophers experimenting with using prompts and/or courses to induce altruistic behaviour, which turned out to be pretty tough! That being said, if this program were to achieve this goal, that would be very impressive and meaningful, so I applaud the effort going into executing and evaluating the program.
I’m confused as to whether the character of the project is (1) An epistemic project to make economics research more accessible and transparent or (2) A political project to promote specific areas of economic research that we believe are not accurately represented in current consensus, possibly in the hope of accelerating economic system change.
This announcement is giving me (1) vibes, whereas the newsletter is giving me (2) vibes.
Personally, I share Harrison’s concerns. I think if the project is (2), these concerns are much more pressing than if the project is (1), as I expect a washout effect as more topics get added to correct for what may be biases of the founders. But based on the website, I am relatively confident that the project is (2) - the website specifies wanting to accelerate a “paradigm shift”, and prominently displays a quote about the problematic nature of western capitalism.
To give just two examples illustrating my concern with the newsletter:
The graphs that stipulate the badness of the “neoliberal turn” omit the massive economic growth we saw in previous decades, which eradicated a significant fraction of extreme global poverty. They do so by focusing exclusively on the US. But many people believe the main benefit of the “neoliberal turn” accrued not to people in the US, but to the global poor! A neutral approach to the project would at least highlight the possibility that the neoliberal turn is also seen as having benefited a large number of people outside of the US.
UBI is posited as an alternative concept to the “neoliberal system”. This juxtaposition strikes me as odd—in the American context, neoliberals such as Milton Friedman (plausibly an architect of the “neoliberal turn”) publicly advocated for UBI. A neutral approach to the project would at least highlight that some neoliberals have advocated for a UBI, even though the policy hasn’t made it off the ground.
I don’t want to be overly critical—I am glad this project exists, and am happy to see more accessible and transparent economic data. But I want to highlight that there may be a significantly higher value if the project takes a neutral approach to economic schools and systems instead of following a line of thought or narrative that the founders (maybe correctly) take to be the right one.
Edited to reflect a closer look at the website.
Great post, and glad to see you find this approach fruitful.
Another example that highlights the distinction you emphasize may be the EA conference formats and the EA Funconference formats, which are popular in the German community. EA conferences are mostly used to listen to talks and network in 1-on-1s, which makes them very valuable. They resemble, to me, academic conferences. EA Funconferences are participant-driven events that set little in terms of agenda and resemble a summer camp more than any formal meetup. I found the German EA Funconferences highly valuable for the reasons you describe: participants were comfortable, actively contributed their own content regardless of their level of seniority, and bonded quite a bit.
Despite no formal focus on careers, networking, or current research, I found these events to be more helpful in terms of networking than EA-conferences, which always feel a bit forced and overly formal to me. That preference may be idiosyncratic, but my impression was that most participants loved the Funconferences I visited. I’d love if EA had more Funconferences to augment the more formal conferences.
I am unsure if other countries organize Funconferences, but if not, I’d highly encourage it. Carolin Basilowski has been involved with organizing them in Germany, and I imagine she’d be happy to share her experiences.
I deeply enjoy your blog. I often grow frustrated with critiques of Effective Altruism for what I perceive as lacking rigor, charitability, and offered alternatives. This is very different from your blog. I think your blog hits a great balance in that I feel like you genuinely engage with the ideas from a well-intended perspective, yet do not hesitate to be critical and cutting when you feel like an issue is not well justified in EA discourse.
I particularly enjoyed the AI risk series and Exaggerating the risks series. I take this to be the areas where, if EA erred, it would be most impactful to spot it early and react, given the amount of funding and talent going into risk mitigation. I would love to read more content on regression to the inscrutable, which I found very insightful. I would also love to read more of your engagement with AI papers and articles.
I’d be interested in whether you or others have favorite critiques of EA that aim for a similar kind of engagement.
Thank you for the recommendations. To be honest, the parts of The Good It Promises that I read struck me as very low quality and significantly worse than the average EA critique. The authors did not seem to me to engage in good-faith critique, and I found a fair number of their claims and proposed alternatives outlandish and unconvincing. I also found many of the arguments to rely on buzzwords rather than actual arguments, which made the book feel a bit like a vicious Twitter thread. I read only about half of the book; maybe I focused on the wrong parts.
I will check the GPI working paper series for alternative critiques. Thank you for recommending them.
Two AI papers I’d be particularly interested to see you engage with are
“Concrete Problems in AI Safety” and
“The alignment problem from a deep learning perspective”.
On another note, I recently heard an interesting good-faith critique of EA called “But is it altruism?” by Peruzzi & Calderon. It is not published yet, but when it comes out, I could send it to you—it may be an interesting critique to dissect on the blog.
Again, thanks for your work on this blog. It’s really appreciated, and it is impressive you are able to spend so much time thoughtfully reflecting on EA on this blog while being a full-time academic.
But it’ll be intensified if the community mainly consists of people who like the same causes, because the filter for membership is cause-centered rather than member-centered.
I feel like this post introduces a helpful contrast.
I am personally partial to the member-first approach. A cause-first approach seems to place a lot of trust in the epistemics of the leaders and decision-makers who identify the correct cause. I take this to be an unhealthy strategy generally—I believe a vibrant community of smart, empirically-minded individuals can be trusted to make their own calls, and I think this may often challenge the opinion of leadership or the community at large in a healthy way. Even if many individual calls end up leading to suboptimal individual behaviour, I’d expect the epistemic benefits of a diversity of opinions and thought to outweigh this downside in the long run, even for the centrally boosted causes, which benefit from having their positions challenged and questioned by people who do not share their views, and from a significantly reduced likelihood of groupthink.
On a more abstract level, I think EA is pretty unique as a community because of its open epistemics, where a variety of views can be pitched and will receive a fair hearing, often leading to positive interventions and initiatives. I worry that a cause-first approach will endanger this and turn EA into “just another” cause-specific organization, even if the selection of the cause is well-motivated at the initial point of choice.
I’m not really seeing a dire need for this proposal. 10% effective donations has brand recognition and is a nice round number, as you point out. It is used by other groups, such as religious groups, making it easy to re-funnel donations from e.g. religious communities to effective charities. This leaves 90% of your income at your disposal, part of which you may spend on fuzzy causes. It does not seem necessary to me to change the 10% to allow for fuzzy donations, nor do I think there is a good motivation for making donations to fuzzy causes morally required.
Example 1: Someone wants to support a cause dear to their heart that is ineffective, but also recognizes the need for effective charity. Previously, they donated 10% to effective charities and 5% to fuzzy charities. On the new proposal, they donate 8% to effective charities, and 5% to fuzzy charities. This seems to be worse than the initial situation.
Example 2: Someone does not see a specific reason to privilege fuzzy charities. They donate 10% to effective charities. On the new proposal, they donate 8% to effective charities, and 2% to some other charity. This seems to be worse than the initial situation.
Example 3: Someone sometimes gives to inefficient causes for personal reasons. They read your proposal above, feeling happy to see their actions justified from an impartial standpoint for the reasons indicated above. A newspaper asks them why they give to charities they themselves consider inefficient. They say they make public donations to fuzzy causes to improve the reputation of EA/score “reputation points”/send them this post. The newspaper publishes an op-ed on how EA is greenwashing its charity. This seems to be worse than the initial situation.
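The arithmetic behind Examples 1 and 2 can be made explicit with a minimal sketch (percentages are the ones given above; the function name is my own label):

```python
# Donation splits before and after the proposed 8%/2% pledge,
# for the donor types in Examples 1 and 2.

def giving(effective_pct, fuzzy_pct):
    """Return (effective %, total %) of income donated."""
    return effective_pct, effective_pct + fuzzy_pct

# Example 1: previously 10% effective + 5% fuzzy, now 8% + 5%.
old1 = giving(10, 5)  # effective 10, total 15
new1 = giving(8, 5)   # effective 8, total 13

# Example 2: previously 10% effective + 0% fuzzy, now 8% + 2%.
old2 = giving(10, 0)  # effective 10, total 10
new2 = giving(8, 2)   # effective 8, total 10

# In both cases, effective giving falls by 2 percentage points of income.
print(old1[0] - new1[0], old2[0] - new2[0])
```

The point of the sketch: whether total giving stays flat (Example 2) or falls (Example 1), the effective share drops from 10% to 8% in both cases.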
In my personal life, I do not at all feel hindered from donating to fuzzy causes by the fact that I pledged 10% of my income to effective charities. If a friend starts a fundraiser, or I see a homeless person, or some speculative but cool idea comes up, I gladly shoot them some of my income. This feels good. There is no need to adjust the 10% amount in order to enable me to get my fuzzies from these alternative giving opportunities. At the same time, there are reasons to believe the proposal hurts brand recognition and can lead to worse situations, as indicated in the examples.
I appreciate the notification and will take a look!
I enjoyed reading your thoughts on whether the 10% pledge is central to EA’s public perception.
I do not agree with how you relate your positive proposal to the critiques of EA. Two points stuck out to me: the “earning to give” point and the “is 10% the correct amount” point. In both cases, I see no reason to believe that “a 2%/8% or 2%/10% fuzzies/utilons standard for an earning to give pledge would be a concrete way to show we’ve taken onboard some of these critiques.”
Earning to give is weird. You improve the world by becoming a (checks notes) banker or lawyer? People who criticize earning to give do not criticize the notion of donating one’s money; they typically criticize banking/lawyering as professions where one can do good (e.g. because they believe these jobs are net-negative), or see the pledge as greenwashing one’s otherwise rich life. I do not see how a banker donating 2% to their favorite opera would change any of these critiques. The critic does not want you to donate to the opera—they want you to stop saying that being a banker may have more positive ethical payoffs than being a social worker.
EA argues for a duty of beneficence and asks members to donate 10%. 10% is an arbitrary Schelling point. Why not 11%? Why not 12% (you are here)? But consider: why not 13%? (...) Why not 99%? These worries are a classic critique of duties of beneficence, at least since Singer published Famine, Affluence, and Morality. I am confident that such critiques will not be resolved by setting the donation percentage 2% higher. The critic does not want you to donate 12%—they want you to explain why X% is morally required, but X+1% is not.
I agree with the point that a newcomer to EA may wrongly get the impression of not being allowed to donate to non-effective charities. This would be bad. But I think there are significantly easier ways to signal to them that they can do so than to reform the Giving What We Can Pledge (talking to them/leading by example/putting it in a FAQ).
I also still think your positive proposal would likely be harmful, partly for the same reasons I laid out in a previous post. First, why make fuzzy donations mandatory? Someone with very utilitarian convictions may be put off by this, or someone who would otherwise donate 10% effectively and donate fuzzies separately may reduce their effective donations while keeping fuzzies constant (this applies to the original 8%/2% proposal and does not affect the 10%/2% proposal). When I encountered EA, a pitch of “Donate X% to the most effective ways of improving lives, then spend an additional 2% on whatever you feel like” would have created more rather than less confusion in me. Most people, I reckon, do not need approval to spend the other 90% on things they want to spend it on, including charity that is not effective.
Much more importantly, I think this has a big potential for being a PR disaster, rather than a PR boon. I don’t know how I would explain why my organization has a norm of donating to charities we don’t consider to be effective. I think the reasons you provide are by and large “to improve our reputation”. I am quite confident that EA explicitly foregoing its efficiency principles to mandate a 2% fuzzies tax to improve its reputation would not land well in the press, or with critics. Much of this sounds to me like an attempt at 4d-chessing the public perception of EA. Frankly, even if I were an EA-sympathetic journalist, I would find the idea quite insulting—it’s pretty transparent.
I also agree with Isaac that the initial downvotes and overall vote tally strike me as disagreement with your proposal, rather than a rejection of your discussion.
I remain unconvinced by the suggestion that the benefits of the proposal would outweigh its costs.
Some thoughts, in no particular order
1. “So the 10% figure is not exactly arbitrary. It is chosen for a specific set of practical reasons—having brand recognition, being a nice round number, and being used by religious groups.”
In the comment you refer to, I was arguing that you misunderstood criticisms that target the % of donation EA requires. Specifically, any % is arbitrary on normative grounds: if you have a duty of beneficence towards the global poor, it is unclear why it would discharge at 10%, 12%, 18%, 90% and so on. There are indeed practical reasons to choose 10% over others, but they do not solve the normative problem. Normatively, EA chooses an arbitrary number. Scott Alexander has a decent discussion of this point here: https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/
2. “If the 10% community standard is all that’s preventing a subset of current pledgers from redirecting 20% of their annual giving to the opera instead of GiveWell, are we really pleased that the 10% community standard is having that effect?”
I do not currently believe the number of donors you would sway by offering them the opportunity to donate X% to GiveWell and Y% to other charities is large. Notably, this is already possible under the current pledge; what your proposal adds is some sort of official badge of approval for the Y%.
3. “And our critics most often criticize our donation standards as preventing them from donating to secular causes, such as alma maters, political movements, the arts, and so on. Showing them that there is a way to include these interests in their giving, while still saving lives in a way that can be demonstrated with cost-benefit analysis in the manner of Effective Altruism seems to be a promising strategy to me.”
Somewhat more meta: I think you fundamentally misunderstand the nature of most criticism levied at EA. I think most individuals are not criticizing that they won’t be allowed to donate to anything else. Rather, they criticize that EA privileges specific charities/careers/volunteering over others. For example, they disagree that a banker doing earning to give does more good than a social worker (an example that was central in your last post). Or they disagree with EA’s claim that malaria charities are more effective than e.g. a local political fundraiser, because the latter provides benefits that are hard to quantify, yet huge. I take it to not be typical that someone thinks “EA is correct in its appraisal, yet I want its approval to donate 2% to something that is not effective”. Your proposal is orthogonal to the concerns of critics of EA: you offer them approval to donate to a cause that is explicitly second-rate in the pledge’s ranking of donations. You also don’t offer a reason for EA to believe these donations will do the most good, beyond EA being able to trick more people into taking a pledge, or the public looking more favorably upon us. This strikes me as a transparent move that will likely backfire significantly.
I’m happy to continue this conversation, but I think a more direct conversation may be more productive than continuing via Forum-Post & Comments. If you want to schedule a zoom call, please reach out!
It is hard to advise on this without knowing your current situation. Both becoming a professor and becoming an influencer are career paths that are not easy to tread, and recommending one over the other will hinge on where you are in life currently—what is your age, educational background, subject interest (important re: professor), marketable hobbies or content niches (important re: influencer) etc.
I cannot speak to the career of influencers, but if you were to opt for taking a shot at becoming a professor, the #1 priority should be to excel academically and take active steps towards getting into as highly-ranked a program as you can for your grad studies.
Hey there,
does anyone have the link to the economists’ guesses MacAskill refers to? I have no copy of Doing Good Better around, so I can’t check myself.
Also, does anyone know if demand-independent subsidies are factored in? I would expect the expected value to be lower when subsidies allow producers to produce below the “production/world market price”, as they could easily export whatever is not locally consumed (as some EU countries do).
Thanks for the post. This issue regularly arises in our local EA group (mainly due to me desperately grasping at straws to justify my carnivorous ways), and it is surprisingly hard to get good information on the topic. So far I knew only the “Does Vegetarianism make a difference” post, which is well-written but does seem a bit light on the economics side, with no peer-reviewed articles or analyses being cited as far as I remember.