Now that a new version of the handbook is out, could you update the ‘More on Effective Altruism’ link? It is quite prominent in the ‘Getting Started’ navigation panel on the right-hand side of the EA Forum.
- DM, 10 May 2018 13:22 UTC, 1 point, in reply to: CalebW’s comment on: Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift
Thanks for your comment! I agree with everything you have said and like the framing you suggest.
I believe most people who appear to have value “drifted” will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously.
This is what I tried to address though you have expressed it more clearly than I could! As some others have pointed out as well, it might make sense to differentiate between ‘value drift’ (i.e. change of internal motivation) and ‘lifestyle drift’ (i.e. change of external factors that make implementation of values more difficult). I acknowledge that, as Denise’s comment points out, the term ‘value drift’ is not ideal in the way that Joey and I used it and that:
As the EA community we should treat people sharing goals and values of EA but finding it hard to act towards implementing them very differently to people simply not sharing our goals and values anymore. Those groups require different responses. (Denise_Melchin comment).
However, it seems reasonable to me to be concerned about, and attempt to avoid, both value drift and lifestyle drift; in many cases it will be hard to draw a line between the two, as changes in lifestyle likely precipitate changes in values and vice versa.
Thanks for your comment, Karolina!
That also stresses the importance of the untapped potential of local groups outside the main EA hubs.
Yep, I see engaging people & keeping up their motivation in one location as a major contribution of EA groups to the movement!
maybe we have something like altruistic adaptation, which changes after a significant life event (changing city, marriage, etc.) and then comes back to baseline.
This is an interesting suggestion, though I think it unlikely. It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to ‘changing the world for the better’. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): “If you’re not a socialist at the age of 20 you have no heart. If you’re not a conservative at the age of 40, you have no head”.
More could be done about value drift on the structural level; e.g. it might also be explained by the main bottlenecks in the community itself, like the Mid-Tier Trap
This is a valuable and under-discussed point that I endorse!
- DM, 10 May 2018 19:56 UTC, 3 points, in reply to: pmelchor’s comment on: Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift
Great points, thanks for raising them!
It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag.
It would be very encouraging if this were a common phenomenon and many people who ‘drop out’ eventually found their way back to EA ideals. It provides a counterexample to something I have commented earlier:
It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to ‘changing the world for the better’. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): “If you’re not a socialist at the age of 20 you have no heart. If you’re not a conservative at the age of 40, you have no head”.
Regarding your related point:
Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions (...) and find ways of accommodating it within a “lifetime contribution strategy”
I strongly agree with this, which was my motivation to write the post in the first place! I don’t think constant involvement/commitment to (effective) altruism is necessary to maximise your lifetime impact. That said, it seems that many people run a considerable risk of never ‘finding their way back’ to this commitment after spending years or decades in non-altruistic environments, starting a family, settling down etc. This is why I’d generally think people with EA values in their twenties should consider ways to at least stay loosely involved and updated over the mid- to long-term, to reduce the chance of this happening. So it is a great example to hear that you actually managed to do just that! In any case, more research is needed on this. I also want to caution against survivorship bias, which could become an issue if we mostly talk to the people who did what is possibly exceptional (e.g. took up a strong altruistic commitment in their forties, or stayed around EA for a long time).
This is a fantastic project! I encourage other EA university chapters to share the Effective Thesis website on their social media pages and internal groups 1-2x per year. When you share it on Facebook, make sure to mention the Effective Thesis Facebook page on your post.
- ...
I’d guess it is common for people to underweight the expected value (EV) of attending EA Globals, because they focus on the predictable and easy-to-measure benefits of doing so. However, the EV of attending these conferences (according to my intuitive model) is dominated by ‘Black Swan’-like benefits, i.e. low-probability, hard-to-predict, disproportionately high-impact benefits. For this reason, even if most EA Global attendees got little value out of the conference, a few individuals would likely reap benefits large enough to justify the whole event for everyone else.
These underappreciated benefits of attending EA Globals likely include: 1) starting a causal chain that will (eventually) result in a job or internship, 2) finding co-founders for highly valuable projects, 3) making new connections (or deepening existing ones) that will (eventually) provide you with substantial support (e.g. financial, advisory, emotional) or vice versa, 4) changing your mind about an empirical or philosophical crucial consideration that radically alters your priorities (e.g. by changing which cause area to focus on, or which interventions to prioritise).
To account for these potential Black Swan-like benefits when thinking about the opportunity cost of attending events such as EA Global, I deliberately attempt to follow the heuristic of asking myself: “Is this event more likely to give rise to Black Swan-like benefits compared to the best alternative use of my time?”. I prioritise events that have ‘Black Swan’-generating circumstances (e.g. meeting new people and organisations working on important topics, having opportunities to reflect on major life choices and philosophical beliefs, meeting smart and well-informed people who have major disagreements with my views).
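The arithmetic behind this intuition can be sketched in a few lines. The numbers below are purely hypothetical (not estimates from the comments above): they just illustrate how a low-probability, high-value outcome can dominate the expected value of near-certain, modest benefits.

```python
# Illustrative only: hypothetical (probability, value) pairs, not real estimates.
predictable = [(0.9, 2.0)]      # e.g. a near-certain, modest benefit (talks, notes)
black_swans = [(0.01, 500.0)]   # e.g. a 1% chance of a career-changing connection

def ev(outcomes):
    """Expected value: sum of probability-weighted values."""
    return sum(p * v for p, v in outcomes)

print(ev(predictable))  # 1.8
print(ev(black_swans))  # 5.0 -- the rare outcome dominates despite its low probability
```

On these made-up numbers, the tail outcome contributes roughly three times as much expected value as the near-certain one, which is the sense in which easy-to-measure benefits can understate the case for attending.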
Framing Effective Altruism as Overcoming Indifference
Daniel Gambacorta has discussed value drift in two episodes of his Global Optimum Podcast (one & two) and recommends the following, which I found really helpful:
“Choose effective altruist endeavors that also grant you selfish benefits. There are a number of standard human motivators. Status, friends, mates, money, fame. When these things are on the line work actually gets done. Without these things it’s a lot harder. If your effective altruism gets you none of the things that you selfishly want, that’s going to make things harder on you. If your plan is to go off into a cave, do something brilliant and never get credit for it, your plan’s fatal flaw is you won’t actually do it. If you can’t get things you selfishly want through effective altruism, you are liable to drift towards values that better enable you to get what you selfishly want. We humans are extremely good at fulfilling selfish goals while being self-deceived about it. With this in mind, you might pick some EA endeavor which is impactful but also gets you some standard things that humans want, because you are a human and you probably want the standard things other humans want. Even if the endeavor that grants you selfish benefits is less impactful in the abstract, this could be outweighed by the chance that you actually do it, and also how much more productive you will be when you work on something that is incentivized. If you do something that grants you significant selfish benefits, you just have to watch out for optimizing for those benefits instead of effective altruism, which would of course defeat the purpose.”
I’m surprised by how much low-hanging fruit is still left in editing Wikipedia to make more people aware of (and give them a more sophisticated understanding of) important ideas that are relevant to EA. I’ve been adding and improving Wikipedia content on the side for two years now, with a clear focus on articles related to altruism.
In my experience, editing Wikipedia is i) easy, ii) fun, iii) full of content gaps left to fill, and iv) a way to expose the content you write to a much larger audience (sometimes several orders of magnitude larger) than writing for a private blog or the EA Forum would. Against this background, I’m surprised that more knowledgeable EAs do not contribute to Wikipedia (feel free to reach out to me if you would potentially like to do just that).
A word of caution: quality control on Wikipedia is fairly strong, and editors generally dislike edits that come across as ideologically motivated marketing rather than useful information. For this reason, I aspire to genuinely improve the quality of every article I edit, though my choice of which articles to edit is informed by my altruistic values.
A useful resource on this topic is Brian Tomasik’s “The Value of Wikipedia Contributions in Social Sciences”.
[I’m collaborating with Will on creating the content for utilitarianism.net, but this comment is written in my private capacity]
Thanks for writing this up! I really appreciated how you describe the problem of the competitive hiring landscape within the EA community, and especially that you connected this to a potentially increased risk of value drift for community members who grow frustrated after not being hired by their preferred employers within the community. I agree that this presents a major challenge for the EA community as a whole and would like to see more proposed solutions.
Having said all that, I also have two quibbles with your proposed solutions:
First, the EAs in academia who are in the best positions to be able to ‘steer their fields’ in the future are probably the ones who need this type of advice the least, because they would seem to be in the best position to be hired within the EA community. Of course, if they are in such a special position within their academic field, it might be more impactful for them to stay in academia (depending on their field) regardless of whether they could get a job at an EA org.
Second, I have found it difficult to understand from your two points about local EA groups what you wish they would change about their strategy. You advise them to work on “creating a nice and welcoming environment, where members want to come back to in regular intervals for years”. However, this seems like standard local group advice that most (all?) local groups aspire to implement anyway. (Note that this advice does not really apply to EA university groups, which by their very nature mostly attract students on a fairly short-term basis of roughly 1-3 years.)
I would be interested in your specific recommendations for how local groups could achieve this goal of long-term member engagement. Thanks!
Application Process for the 2019 Charity Entrepreneurship Incubation Program
I like the general thrust of your argument and would like to point out that within moral philosophy there is already an (in my view) satisfactory way to incorporate judgements associated with deontology and virtue ethics within a utilitarian framework: by going from ‘single-level utilitarianism’ to ‘multi-level utilitarianism’.
I’m currently writing a text on this topic and will copy an excerpt here:
“Utilitarians believe that their moral theory is the appropriate standard of moral rightness, in that it specifies what makes an act (or rule, policy, etc) right or wrong. However, as Henry Sidgwick noted, “it is not necessary that the end which gives the criterion of rightness should always be the end at which we consciously aim”.
Most, if not all, utilitarians discourage the use of utilitarianism as a decision procedure to guide all their everyday actions. Using utilitarianism as a decision procedure means always calculating the expected consequences of our day-to-day actions in an attempt to deliberately try to promote overall wellbeing. For example, we might pick what breakfast cereal to buy at the grocery store by trying to determine which one best contributes to overall wellbeing. To try and do so would be to follow single-level utilitarianism, which treats the utilitarian theory as both a standard of moral rightness and a decision procedure. But using such a decision procedure for all our decisions is a bad and fruitless idea, which explains why almost no one ever defended it. Jeremy Bentham rejected it, writing that “it is not to be expected that this process [of calculating expected consequences] should be strictly pursued previously to every moral judgment.” Deliberately calculating the expected consequences of our actions is error-prone and takes a lot of time. Thus, we have reason to think that following single-level utilitarianism would itself not lead to the best consequences, which is why the theory is often criticized as “self-defeating”.
For these reasons, many advocates of utilitarianism have instead argued for multi-level utilitarianism, which is defined as follows:
Multi-level utilitarianism is the view that, in most situations, individuals should follow tried-and-tested heuristics rather than trying to calculate which action will produce the most wellbeing.
Multi-level utilitarianism implies that we should, under most circumstances, follow a set of simple moral heuristics—do not lie, steal, kill etc.—knowing that this will lead to the best outcomes overall. To this end, we should use the commonsense moral norms and laws of our society as rules of thumb to guide our actions. Following these norms and laws will save time and usually lead to good outcomes, in part because they are based on society’s experience of what promotes individual wellbeing. The fact that honesty, integrity, keeping promises and sticking to the law have generally good consequences explains why in practice utilitarians value such things highly and use them to guide their everyday actions.”
Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism
Thank you for your comment!
There is a part of me which dislikes you presenting utilitarianism which includes animals as the standard form of utilitarianism. (...) I’d prefer you to disambiguate between versions of utilitarianism which aggregate over humans, and those who aggregate over all sentient/conscious beings, and maybe point out how this developed over time (i.e., Peter Singer had to come and make the argument forcefully, because before it was not obvious)?
My impression is that the major utilitarian academics were rather united in extending equal moral consideration to non-human animals (in line with technicalities’ comment). I’m not aware of any influential attempts to promote a version of utilitarianism that explicitly does not include the wellbeing of non-human animals (though, for example, a preference utilitarian may give different weight to some non-human animals than a hedonistic utilitarian would). In the future, I hope we’ll be able to add more content to the website on the link between utilitarianism and anti-speciesism, with the intention of bridging the inferential distance to which you rightly point.
Similarly, maybe you would also want to disambiguate a little bit more between effective altruism and utilitarianism, and explicitly mention it when you’re linking it to effective altruism websites, or use effective altruism examples?
In the section on effective altruism on the website, we already explicitly disambiguate between EA and utilitarianism. I don’t currently see the need to e.g. add a disclaimer when we link to GiveWell’s website on Utilitarianism.net, but we do include disclaimers when we link to one of the organisations co-founded by Will (e.g. “Note that Professor William MacAskill, coauthor of this website, is a cofounder of 80,000 Hours.”)
Also, what’s up with attributing the veil of ignorance to Harsanyi but not mentioning Rawls?
We hope to produce a longer article on how the Veil of Ignorance argument relates to utilitarianism at some point. We currently include a footnote on the website, saying that “This [Veil of Ignorance] argument was originally proposed by Harsanyi, though nowadays it is more often associated with John Rawls, who arrived at a different conclusion.” For what it’s worth, Harsanyi’s version of the argument seems more plausible than Rawls’. Will commented on this matter in his first appearance on the 80,000 Hours Podcast, saying that “I do think he [Rawls] was mistaken. I think that Rawls’s Veil of Ignorance argument is the biggest own goal in the history of moral philosophy. I also think it’s a bit of a travesty that people think that Rawls came up with this argument. In fact, he acknowledged that he took it from Harsanyi and changed it a little bit.”
The section on Multi-level Utilitarianism Versus Single-level Utilitarianism seems exceedingly strange. In particular, you can totally use utilitarianism as a decision procedure (and if you don’t, what’s the point?).
Historically, one of the major criticisms of utilitarianism was that it supposedly required us to calculate the expected consequences of our actions all the time, which would indeed be impractical. However, this is not true, since it conflates using utilitarianism as a decision procedure and as a criterion of rightness. The section on multi-level utilitarianism aims to clarify this point. Of course, multi-level utilitarianism does still permit attempting to calculate the expected consequences of one’s actions in certain situations, but it makes clear that doing so all the time is not necessary.
For more information on this topic, I recommend Amanda Askell’s EA Forum post “Act utilitarianism: criterion of rightness vs. decision procedure”.
Brief meta comment: I would generally recommend being very cautious about (and mostly avoiding) language like “converting” others to EA, as in your sentence “Younger people might be easier to convert (...)”. This type of language seems fairly easy to avoid, while using it may make many people feel uncomfortable and even pose reputational risks for the community.
Hi Nil, thanks for linking to utilitarianism.net. Unfortunately, the website is temporarily unavailable under the .net domain due to a technical problem. You can, however, still access the full website via this link: https://utilitarianism.squarespace.com/
I just did this and can attest to it working and being as easy as described in the post. Thanks a lot for the recommendation!
In light of the recently published 2nd edition of the EA Handbook, could this page be updated as well? The ‘more on effective altruism’ link in the navigation menu is quite prominent and it would be great to lead visitors to the most up-to-date content.