I see some resonance between the behavioral science “Mistakes” that you think EAs might be making and the differences I find between my own approach to EA work and what seems to be documented in the EA literature.
Specifically, I was recently reading Peter Singer’s works more thoroughly (The Life You Can Save and The Most Good You Can Do), and while I appreciated the arguments being made, I did not feel that they reflected or properly respected the real beliefs and motivations of the friends and family who donate to support my EA activities.
In this sense, from my own perspective, I also see a set of behavioral science mistakes that the EA movement seems to be making.
Over my 30 years of doing Africa-focused, quantitatively oriented development projects, I have developed a different “Theory of Change” for how my personal EA activities can have impact.
This personal EA-like Theory of Change has three key elements: (1) Attracting non-EA donors for non-EA reasons to support EA Global Health and Welfare (GHW) causes, (2) Focusing on innovation to increase EA GHW impact leverage to 100:1, and (3) Cooperating with other EAs on the assumption that each member of the EA community has a different set of beliefs and an individual agenda, with cooperation serving both aligned and community interests.
I would appreciate it if you might comment on whether some of my divergences from general EA practice might address some of the “behavioral science” issues that you have identified.
The first element of my Theory of Change is that for my EA causes to be successful, my projects have to be able to attract mostly non-EA donors. I recognize that my EA-type views are held by a relatively tiny minority in our larger society. Therefore, I do not personally try to change people’s moral philosophy, which seems to be Peter Singer’s approach. When I do make arguments for people to modify their moral philosophy, they usually find this either threatening or offensive.
Element #1: While the vast majority of donors do not donate based on “maximum quantitative cost-effectiveness,” they do respond to respectful arguments that a particular cause or charity you are working on is more important and impactful than other causes and charities. When “maximum quantitative cost-effectiveness” is the reason someone they know and respect has dedicated their life effort and money to a cause, many people will be willing to join and support that person’s commitment. So while only a few people may be motivated by EA philosophical arguments, many more can support the movement if people they like, know, and respect show a strong commitment to it.
This convinces people to support EA causes because they see that EAs are honest, dedicated, and committed people they can trust. You do not have to convince people of EA philosophy to have them donate to EA causes and efforts. Most people who donate to EA causes may well have strong philosophical disagreements with the EA movement.
The second element of my Theory of Change is that EA projects need to have very large amounts of impact leverage, so it is important to constantly improve the impact leverage of EA projects. Statistics on charitable donations indicate that most people donate only a few percent of their income to charity, and may donate less than 1% of income to international charitable causes.
Element #2: If people are going to donate less than 1% of income to international charitable causes, then in order to address the consequences of international economic inequality, EA Global Health and Welfare charities should strive for 100:1 impact leverage. That is, $1 of charitable donation should produce $100 of benefit for people in need. In that way, it may be possible to create an egalitarian world over the long term even though people may be willing to give only 1% of their income on average to international charitable causes.
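As a purely illustrative back-of-the-envelope calculation (the $50,000 income figure below is hypothetical, chosen only to make the arithmetic concrete), 1% giving at 100:1 leverage delivers benefit roughly equal to the donor’s entire annual income:

\[
\underbrace{0.01 \times \$50{,}000}_{\text{donation}\,=\,\$500} \times \underbrace{100}_{\text{leverage}} = \$50{,}000 \ \text{of benefit}
\]

At 20:1 leverage the same $500 produces only about $10,000 of benefit, which is why moving from 20:1 toward 100:1 matters so much for the long-term egalitarian goal described above.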
In my own modest efforts, I think I have reached about 20:1 impact leverage. I hope I can demonstrate something closer to 50:1 in a year or two.
The third element of my EA Theory of Change is that I assume every EA has a different personal agenda, set by their personal history and circumstances. It is my role to try to modify that agenda only if someone is open to change.
Element #3: Everyone in the EA movement has a different personal agenda and different needs and goals. Therefore my goal in interacting with other EAs is to help them realize their full potential as EA community participants on their own terms. Since I have my own personal views and agenda, I will generally help advance the agendas of others when doing so is low cost to my work or when it also contributes to my personal EA agenda (i.e., encouraging EA GHW projects to have 100:1 impact leverage). But if I can keep my EA agenda general enough, there should be plenty of alignment between my agenda and the agendas and interests of other EAs, and I can be part of a substantial circle of mutually supportive associates.
This Theory of Change or Theory of Impact assumes fairly minimal behavior change. It assumes that most people support EA causes for their own reasons, and that people will not change their charitable donation behavior very much. It puts most of the onus of change on a fairly small EA community that achieves the technical accomplishment of attaining 100:1 impact leverage.
Does this approach avoid the mistakes that you mention, while requiring only minimal behavior change?
Just curious. I hope this response to your presentation of EA behavior science “Mistakes” is useful to you.
Thanks so much for this comment, Robert—I appreciate the engagement.
It’s interesting to hear what mistakes you see, and what you’ve experienced as working better.
It sounds like you’re really considering who your audience is – something that I think is crucial. For example, you don’t assume that people (especially those not involved in EA) will be persuaded by more philosophical arguments. These arguments can work for some, but definitely not everyone. I also agree that having a positive reputation (e.g., being seen as credible and honest) can attract people. Plus, it sounds like you’re cultivating some supportive and cooperative relationships with others, which is fantastic.
I think I have a slightly different take on the role of behaviour in your theory of change – I still see it as being quite central. To me, the impact we have always comes back to behaviour. You may not be using the more philosophical arguments to encourage donations, but it sounds like you’re still trying to get people to support the movement (which can involve some level of behaviour) by setting a positive example—a different technique. I also think that getting the charities themselves to be more impactful (Element #2 of your framework) involves some important behaviour change elements: e.g., the charities need to be aware that they could increase their impact, be motivated to do it, have the resources to do it, and so on. Definitely open to hearing pushback on any of that!
It sounds like you’ve thought through your approach a lot, Robert—thanks again for sharing.
Thanks, Emily!