General anti-aging research is arguably more effective than trying to cure any single disease, because once your body begins to decline with age, you become more susceptible to basically all diseases and injuries. Successful anti-aging treatments would thus act as a general, massive health boost to everyone past a certain age.
Kaj_Sotala
Excellent! The new comment highlighting makes this forum much more readable.
One thing that I'd like to see here, and have wished for on LW for a long time, would be an option to sort threads by the most recent comment, so that commenting in a thread would "bump" it to the top, like on ordinary forums. People haven't been very enthusiastic about this proposal on LW for whatever reason, but the lack of that feature contributes to what I feel is the largest problem of the site: valuable and semi-active threads quickly get buried below more recent ones, so that e.g. new Open Threads need to be continually reposted rather than the old ones organically rising to the top when they have new activity. This also disincentivizes people from commenting in older threads, since their comments won't be seen by as many readers.
The standard objection is that this feature isn't needed because a lot of people follow the "all comments" section of the site, which also shows comments on old threads, and it's true that this sometimes leads to new discussion in an old thread. But I still feel that the number of people who follow "all comments" is much smaller than the number of people who read the site by more "normal" means, and that the psychological disincentive persists even if some people do read "all comments".
Agreed—and there are plenty of ways for people to contribute to EA besides donating. Writing articles, helping organize EA events, and offering support and encouragement to people who are working on more direct things are just the first three things that come to mind.
Any large group working on something needs both people working directly on things, and people who are in support roles and take care of the day-to-day needs of the organization. The notion that all EAs should be working directly on something (I’m counting earning-to-give as “working directly on something”, here) seems clearly wrong.
A lot of people already want to make more money, and they feel a conflict between trying their best to become successful and using the resources and leverage they already have to help others.
True, but a lot of people are also struggling just to find a job that is both enjoyable and pays enough to cover the bills. Emphasizing making more money could cause them to feel a conflict between finding a job that doesn't feel soul-crushing and feeling guilty about being unable to donate much. (Full disclosure: I feel a bit of this, since the career path that I'm currently considering the most isn't one that I'd expect to make a lot of money in.)
I don’t want to tell anyone that they should care about helping as many people as possible. I want to tell them that they have a fantastic, exciting opportunity to help lots of people and have a big impact on the world, if they want to.
Someone who is struggling to find a meaningful job might also be someone who’s struggling to find some purpose for their life in general. (This has been true for me.) That might make them exceptionally receptive to a cause that does offer such a purpose.
Yesterday, two others and I ran the first larger, more broadly marketed introductory EA event in Helsinki, Finland. We advertised it mainly on Facebook and on the mailing lists of a few student groups. Around 30 people showed up in total, many if not most of them new to EA.
The event consisted of two parts. The first was an introductory lecture, where I compared PlayPumps with Deworm the World (borrowing the story from Will MacAskill's upcoming book) to help drive home the concept of EA and illustrate what the slogan "effective altruism combines the head and the heart" really means. I then said a few words about different EA organizations, about how EA is not just about charitable organizations but also includes things like 80,000 Hours evaluating how you can make the biggest direct impact with your life, and about why I think EA is a really exciting idea.
The talk seemed to be well received, and I got positive feedback on it later on. Then we ran a Giving Game, with my co-organizers having contributed 200 euros and The Life You Can Save sponsoring us at 5 euros per participant. I had picked the Schistosomiasis Control Initiative, GiveDirectly, and the Fistula Foundation as the three organizations to compare, on the basis that TLYCS had ready materials for them and that they were similar enough, yet different enough, to make comparing them meaningful and interesting.
People seemed to find this an interesting question and quickly started talking about it. We used a format that had proven successful in our Less Wrong meetups: we first told people to form groups of three or four, and after some time asked them to form new groups of a similar size to get new perspectives. In this case, that meant 15 minutes of discussion in the first groups, reshuffling, and then another 15 minutes of discussion. After that, people made their decisions and picked their favorite charity by filling out an e-mail form on one of the two laptops brought by the organizers; they could optionally also give us their e-mail address if they wanted to stay in touch. People who were short on time could leave earlier and make their decision on the way out (we had placed one of the laptops by the door).
Another organizer has the exact figures, but as I recall, the results of the Giving Game had SCI as by far the most popular pick (around 80% support), with GiveDirectly the second (around 15%) and Fistula Foundation coming in third.
Altogether I’m quite happy with the event, and looking forward to organizing more local EA activities.
Yeah, I thought about that one too, but figured that “just write about anything EA you’ve done, big or small” would be a useful lower bar than “the most awesome altruistic thing you’ve done lately”.
I’m thinking about what kinds of material newcomers to EA should be exposed to. What are some of the basic conceptual tools that are useful for thinking about EA, and evaluating the effectiveness of different interventions/career paths/charities?
I’m thinking about stuff like:
Basic economic concepts: expected utility, opportunity cost, fungibility, various marginal concepts (e.g. marginal cost, marginal usefulness), diminishing returns.
Scientific concepts: control groups, randomized controlled trials.
Well-being-related concepts: quality-adjusted life years.
Heuristics and biases: scope neglect, motivated cognition, confirmation bias, affect heuristic.
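To make a couple of the economic concepts above concrete, here's a toy sketch of how expected value and diminishing returns enter charity comparisons. All the numbers and the square-root impact curve are invented purely for illustration, not taken from any real charity's data:

```python
# Toy illustration of expected value and diminishing returns
# for comparing hypothetical charity interventions.
# All figures below are made up for the sake of the example.

def expected_lives_saved(success_prob, lives_if_success):
    """Expected value: probability of success times payoff if successful."""
    return success_prob * lives_if_success

# A risky, high-payoff intervention vs. a safe, modest one.
risky = expected_lives_saved(0.10, 1000)  # low chance, big payoff
safe = expected_lives_saved(0.95, 50)     # near-certain, small payoff

def impact(dollars):
    """A hypothetical impact curve with diminishing returns:
    each additional dollar buys less impact than the one before."""
    return dollars ** 0.5

# Marginal impact of the first $1000 vs. the second $1000.
first_thousand = impact(1000) - impact(0)
second_thousand = impact(2000) - impact(1000)

print(f"risky EV: {risky:.1f}, safe EV: {safe:.1f}")
print(f"first $1000 buys more impact than the second: "
      f"{first_thousand > second_thousand}")
```

The point of the sketch is just that an intervention with a low success probability can still have the higher expected value, and that under diminishing returns, an extra donation to an underfunded intervention can matter more than the same donation to a well-funded one.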
What else?
Thanks, this is good advice.
System 1 and System 2 in applied rationality: people often have low motivation if their intuitions conflict with their analysis. If you’re pretty sure something is correct, it’s good to support it with emotional drivers like friendship or chocolate.
Signalling: if you're trying to model why people do good, a lot of it can be explained by assuming that they are trying to make themselves look good. It seems like it might be a major driver of charitable behaviour.
One additional piece of advice that I might mention relating to these two points: it's fine to act out of selfish motives. If you realize that you're actually working on some altruistic project because you want to gain status, get social approval, or make a good impression on people of your preferred sex(es): great! If those motives cause you to work harder on worthwhile projects, then there's no point in beating yourself up for being human and caring about yourself as well. Just be honest with yourself about your motives, whatever they are.
Fantastic!
(Replied more privately.)
Back when I was involved with party politics, I heard someone mention that pensioners with basically unlimited free time were a really major asset for the campaigns of the older and more established parties.
I agree with the general gist of the post, but I would point out that different groups consider different things weird, and have differing opinions about which kinds of weirdness are a bad thing.
To use your "a guy wearing a dress in public" example: I do this occasionally, and judging from the reactions I've seen so far, it seems to earn me points among the liberal, socially progressive crowd. My general opinions and values are such that this is the group that would already be the most likely to listen to me, while the people who are turned off by such a thing would be disinclined to listen to me anyway.
I would thus suggest not trying to limit your weirdness in general, but rather choosing a target audience and limiting only the kinds of weirdness that this group would consider freakish or negative, while being less concerned about the kinds of weirdness that your target audience considers positive. Weirdness that your target audience considers positive may even help your case.
I think one of the major problems with this proposal is that nobody actually does it.
I spent a few months doing this, so that if I spent X euros on animal products, I would donate X euros to animal welfare charities at the end of the month.
I plan to resume doing so once my monetary situation looks better (also making a bigger one-off donation to “pay off” the time during which I didn’t maintain that practice).
I don’t know of anyone who has actually tested this at all.
I downgraded from full vegetarianism (and an attempt at full veganism) due to the amount of willpower and occasional well-being it was costing me, especially when battling with depression at the same time.
I once did a birthday fundraiser that allowed people to choose between three targets: MIRI, GiveDirectly, and Mercy for Animals. I mostly wanted MIRI to get the money, but was concerned about the weirdness angle. So I said that people were free to indicate which of the three they wanted to donate to, and that any donations which didn’t explicitly name a target would go to MIRI.
The final donation breakdown was:

MIRI: $420.34
GiveDirectly: $52.33
Mercy for Animals: $27.33

A bunch of the donors were relatively "mundane" friends of mine, rather than committed MIRI sympathizers and supporters. Given that, I'm inclined to interpret these results as suggesting that most people, if they were inclined to give to my fundraiser at all, didn't care about the weirdness of the default recipient enough to even bother specifying an alternate recipient.
I don't see a mention of when the EAGx events will actually take place. If one signs up and gets approved, how long a commitment is this?
Many people choose to avoid engaging with the movement due to the general unspoken feeling of "you're not doing enough unless you meet our high expectations"; in fact, one commenter said exactly this in response to this post.
Datapoint: I too have felt unsure whether I'm doing enough to justifiably call myself an EA. (I have worked for and donated to MIRI, run a birthday fundraiser for EA causes, organized an introductory EA event where I was the main speaker, and organized a few EA meetups. But my regular donations are pretty tiny, and I'm not sure how much impact the stuff that I've done so far will have in the end, so I still have occasional emotional doubts about claiming the label.)
I suspect as a group grows, formation of some kind of hierarchy is basically inevitable. Jockeying for status is a very deep human behavior. I expect groups that explicitly disclaim hierarchy to have a de facto hierarchy of some sort or another.
Relevant essay: The Tyranny of Structurelessness
Contrary to what we would like to believe, there is no such thing as a structureless group. Any group of people of whatever nature that comes together for any length of time for any purpose will inevitably structure itself in some fashion [...]
For everyone to have the opportunity to be involved in a given group and to participate in its activities the structure must be explicit, not implicit. The rules of decision-making must be open and available to everyone, and this can happen only if they are formalized. This is not to say that formalization of a structure of a group will destroy the informal structure. It usually doesn’t. But it does hinder the informal structure from having predominant control and make available some means of attacking it if the people involved are not at least responsible to the needs of the group at large. “Structurelessness” is organizationally impossible. We cannot decide whether to have a structured or structureless group, only whether or not to have a formally structured one. [...]
Elites are nothing more, and nothing less, than groups of friends who also happen to participate in the same political activities. They would probably maintain their friendship whether or not they were involved in political activities; they would probably be involved in political activities whether or not they maintained their friendships. It is the coincidence of these two phenomena which creates elites in any group and makes them so difficult to break.
These friendship groups function as networks of communication outside any regular channels for such communication that may have been set up by a group. If no channels are set up, they function as the only networks of communication. Because people are friends, because they usually share the same values and orientations, because they talk to each other socially and consult with each other when common decisions have to be made, the people involved in these networks have more power in the group than those who don’t. And it is a rare group that does not establish some informal networks of communication through the friends that are made in it. [...]
Once the informal patterns are formed they act to maintain themselves, and one of the most successful tactics of maintenance is to continuously recruit new people who “fit in.” One joins such an elite much the same way one pledges a sorority. If perceived as a potential addition, one is “rushed” by the members of the informal structure and eventually either dropped or initiated. If the sorority is not politically aware enough to actively engage in this process itself it can be started by the outsider pretty much the same way one joins any private club. Find a sponsor, i.e., pick some member of the elite who appears to be well respected within it, and actively cultivate that person’s friendship. Eventually, she will most likely bring you into the inner circle.
All of these procedures take time. So if one works full time or has a similar major commitment, it is usually impossible to join simply because there are not enough hours left to go to all the meetings and cultivate the personal relationship necessary to have a voice in the decision-making. That is why formal structures of decision making are a boon to the overworked person. Having an established process for decision-making ensures that everyone can participate in it to some extent.
Fantastic post! Thank you very much for writing it.
Personally I’d add the Foundational Research Institute, which has released a few AI safety-related papers in the last year:
Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention
How Feasible is the Rapid Development of Artificial Superintelligence?
As well as a bunch of draft blog posts that will eventually be incorporated into a strategy paper trying to chart various possibilities for AI risk, somewhat similar to GCRI’s “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis” which you mentioned in your post.
Oh, sure. I figured it’d be obvious enough from the links that it wouldn’t need to be mentioned explicitly, but yeah, I work for FRI.
I'm Kaj. I'm doing my Computer Science MSc at the University of Helsinki. Right now I'm focusing on my thesis, which is on the topic of educational games and involves me trying to develop a game that teaches Bayesian reasoning. I'm also helping get EA Finland on its feet.
I've previously dabbled in a bunch of things that were broadly EA in spirit. I was one of the co-founders of the Finnish Pirate Party, at a time when it looked like the party could improve the state of the country's legislation with relatively little effort, and I later spent a year working for the Machine Intelligence Research Institute. I've also posted a lot on LessWrong.