One way I see this book being valuable is that readers might change their altruistic behavior in ways that counterfactually improve the world, but I think a larger component of its value would come from people better understanding what EA is about, and what EAs are doing and why. As above:
Existing EA writing is also generally aimed at an elite audience. I see why some people have decided to take that approach, but I also think it’s really important to have a presentation of these ideas grounded in common sense. If we ignore the general public, we leave EA’s popular conception to be defined by people who don’t understand our ideas very well.
How people who don’t decide to get into EA view EA’s approach to the world matters, and I think we’ve been neglecting this. I’m concerned about a growing dynamic where we’re increasingly misunderstood, where people who don’t actually substantively disagree with us reflexively counter EA efforts, and where people who would find EA ideas helpful don’t engage with them because their osmosis-driven impression of EA is mistaken.
I would state that last paragraph even more strongly: my hypothesis is that the views about EA held by people who will never decide to get into EA will ultimately have a larger effect on how EA impacts the world (both in magnitude and in direction) than the views of people who are already a part of EA communities today.
General-public perception of a group pre-filters the types of people who engage with curiosity toward that group’s ideas, and I think that could be a strong enough force to make my prediction true on its own. I also think a group’s reputation with the general public can affect the kinds of opportunities it can access for acting on its ideas, which might particularly shape the actual activities of a community that largely focuses on each person’s highest-impact opportunities.
I wonder if you could reduce the opportunity cost by farming out some of the background labor to (for lack of a better term) a research assistant. Seems like that might be a useful investment (depending on funding) to maximize your productivity and minimize time away from your object-level job.
Yes. The Life You Can Save and Doing Good Better are pretty old. I think it’s natural to write new content to clarify what EA is about.