Max Dalton and Jonas: How to avoid accidentally having a negative impact with your project

Below is a transcript of this EA Global: London 2018 talk, which we have lightly edited for clarity.

The Talk

Max: We are effective altruists, and we’re trying to help the world. We pick projects because the effects that we can see – that is, the first-order effects – are positive. Maybe there are some negative things in there that we know about, but the stuff that we’re seeing is net positive. That’s why we’re doing the projects.

This session is about thinking through some of the less obvious or second-order effects of your projects, and thinking about how to minimize the downsides.

One thing we’ve noticed is that it’s quite easy for any discussion of this issue to be rounded up or rounded down to either “Doing projects is good” or “Doing projects is bad, don’t do projects.” We think the actual answer is somewhere in the middle: It’s nuanced and there are lots of things you should be considering.

Again, we’re going to focus on the negative effects, because they’re more hidden sometimes, but there are also all of these positive effects. We’re not saying “Don’t go and do things.”

Jonas: I would also like to add that these points apply to existing organizations just as much as new organizations. So they could be useful for everyone. As you said, don’t take this as general advice against trying new things, but instead, try to reason your way carefully through ways to have positive and negative impacts. Otherwise, this talk would have had a potentially negative impact, which we don’t want!

Max: One way to approach this is to look at particular types of non-obvious effects. Instead, though, I wanted to start with a slightly more theoretical way of approaching this problem, which is to think of general theoretical reasons that we might expect the second-order effects to be negative rather than positive, or positive rather than negative. A couple of days ago, my answer to this question was “I can’t really see any general effects,” but I’ve actually been talking with a bunch of people at the conference, and now I think that there are some principled reasons why we might expect, in general, that second-order effects are more negative than positive.

One reason is the Unilateralist’s Curse. The idea here is: imagine there’s some project which any individual can carry out independently. They don’t need a large group of people to support them, and it’s unclear whether it’s good to do this project or not. Maybe this project is releasing the blueprint to some dangerous device on the internet. Anyone can plug the USB stick into the computer, and maybe five people on a research team have the necessary information. Four of them decide it’s not a good idea, but one person decides it is a good idea, and because they can do this, they end up doing it. This means there’s a bias towards that sort of project happening more than it should. I think that’s a practical reason why you might think that second-order effects are usually negative: there’s a chance for you to have missed them and still take an action, even if most people have seen them and chosen not to act.

Jonas: The solution to this could be to coordinate with the other researchers who have also thought about this, and then, if you learn that your four colleagues think it’s a bad idea to do this, to adjust your impact estimate downwards and decide against doing it. Coordination is the solution to this problem.

Max: One way of solving this, which Nick Bostrom discusses in one of his papers, is to just take a majority vote of the scientists; this works better than a unanimous decision, because that’s too high a bar and then you end up not doing enough things.
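To make the curse and the majority-vote fix concrete, here is a minimal simulation sketch (our illustration, not from the talk; the agent count, noise level, and project value are all assumed for the example). Each of five researchers sees a noisy estimate of a project’s true value, and we compare how often the project goes ahead when any one researcher can act alone versus when a majority vote is required.

```python
import random

def simulate(true_value, n_agents, noise_sd, trials=100_000):
    """Compare how often a project happens under unilateral action
    vs. a majority vote, when each agent sees a noisy estimate
    of the project's true value."""
    unilateral = majority = 0
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise_sd) for _ in range(n_agents)]
        # Unilateral: the project happens if *any* agent judges it positive.
        if max(estimates) > 0:
            unilateral += 1
        # Majority vote: it happens only if most agents judge it positive.
        if sum(e > 0 for e in estimates) > n_agents / 2:
            majority += 1
    return unilateral / trials, majority / trials

# An illustrative project that is actually slightly harmful (true value -1).
uni, maj = simulate(true_value=-1.0, n_agents=5, noise_sd=2.0)
print(f"unilateral: acted in {uni:.1%} of trials")
print(f"majority:   acted in {maj:.1%} of trials")
```

With these made-up numbers, the slightly harmful project goes ahead in roughly 85% of trials when anyone can act alone, but in under 20% of trials under a majority vote – which is exactly the bias the curse describes.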

Another general reason why you might expect indirect effects to be negative rather than positive is that effective altruism and some of its related fields seem to be doing unusually well at the moment. If something is going unusually well, and then you do something random in relation to it, you get regression to the mean: on average, doing something random is more likely to pull down an unusually good thing than to push it up. Most social movements aren’t as effective or impactful as EA seems to be becoming, so doing random things could make us more like the average movement, which might pull us down from the quite good place I think we are in.
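As a back-of-the-envelope illustration of this regression effect (again ours, with made-up numbers): model the movement’s quality as the average quality of its projects, and look at what happens in expectation when you add one project drawn at random from the overall population of projects.

```python
def expected_quality_after_random_project(current_avg, n_existing, pop_mean):
    """Expected movement quality, modeled as the average quality of its
    projects, after adding one project drawn at random from the overall
    population of projects (whose mean quality is pop_mean)."""
    return (current_avg * n_existing + pop_mean) / (n_existing + 1)

# Illustrative numbers: ten existing projects averaging 8 on a scale
# where the typical movement's projects average 5.
print(expected_quality_after_random_project(8.0, 10, 5.0))  # ~7.73
```

An above-average portfolio is pulled toward the population mean by a random addition – here from 8 down to about 7.7.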

Jonas, do you want to talk about some of the positive effects?

Jonas: Yeah. Even though some second-order effects are negative, there are also a lot of positive second-order effects you might have with your project. An important one is skill-building: a lot of us who have started projects in the past have benefited enormously from the experience that came with creating something. I think this one is fairly obvious, but it’s still really important to consider.

Another effect is gaining information about which strategies work well. For example, if the EA movement has many local groups trying different approaches in different places, we gain a lot of information about the most (and least) successful ways to build a community.

Max: Exactly. An additional point to emphasize: information is really valuable because it’s a thing that you can easily share, and that lots of people can use, which can create unexpectedly large effects. It’s not just you who benefits; it’s a whole bunch of people, even the whole community.

Another form of positive indirect effect is that you can boost the reputation of whatever you’re working on. For example, if you found an effective global poverty charity, you don’t just help people; you also lead people to generally feel more positive about giving to effective charities in the future.

A related but slightly different point is what we call “positive lock-in.” The idea: for some things, it matters who does them first, because they then influence a lot of the debate on that thing. So for instance, if you’re the first person who talks about the safe development of artificial intelligence, your personal reputation may become linked to the idea of AI safety, and also the way in which other people talk about it. In other words, being one of the first people to talk about an idea can have a big effect. If you imagine there’s someone else who would talk about it if you didn’t do so, and if that other person would do a worse job than you, you end up having a big counterfactual impact, in that you’re potentially changing the trajectory of an entire field.

Jonas: Let’s move back to some more negative effects. The most obvious type of negative effect is a first-order effect: you don’t reach your primary goal, and perhaps your project even has the opposite effect. An example that has been used a lot in the EA community is Scared Straight, a program that tried to reduce crime. When evaluated, the results showed that it actually increased crime. Unintended consequences are often exacerbated by projects being irreversible: if you publish a mistaken research finding, you can retract it or publish an updated version, but some things are much harder to fix.

Max: One thing that’s maybe been talked about quite a lot is the effect of low-fidelity communication. When you try to spread one version of an idea, people might misunderstand you and take away some other idea.

This can happen in a few different ways. You could simply make a shortsighted mistake: for example, if you want to maximize the number of people who take the Giving What We Can Pledge, and write really spammy marketing copy, people might misunderstand what Giving What We Can is really about. But even if you try to communicate more carefully, accidents can still happen; it’s easy to say something correct but have that message become distorted as it passes from person to person. Eventually, your idea might be turned into a much simpler message which isn’t what you wanted people to hear, and might actually be harmful.

I think something like this happened with some of the career advice that EA orgs initially gave, where there was a somewhat simple message which then got simplified even further into something like “Everyone should go and earn to give.” And I think it’s now being simplified in the other direction: “No one should earn to give.” In general, thinking about how your message is likely to be simplified beyond what you’ve said, and trying to work out what simple message people will take away, can be quite useful.

Another issue is attention hazards. I’m not sure how many people in the community know about an organization called Intentional Insights. They wanted to get involved with the EA community, so they did a lot of spammy marketing, and overall, they seemed to be having a negative impact on the community. One of the bigger negative impacts they had before they left EA was a second-order effect: their mistakes took a lot of senior staff time from big organizations – time from people whose opportunity costs were really high. Instead of working on their jobs, those people were thinking about how to persuade Intentional Insights not to do harmful things.

Jonas: Since the community seems to be management-constrained, doing things that take up a lot of senior management time could be particularly bad.

Max: Another issue is reputation hazards. This is where you represent ideas in a way that leads people to form really negative opinions, harming the reputation of those ideas (and not just your own reputation as the person communicating them). This is the opposite of the reputation building we talked about earlier; talking about ideas can go well or badly, and you need to think carefully to make sure it goes well.

Finally, there are information hazards. Sometimes, releasing information can cause bad things to happen, and it can be hard to know what the effects of releasing information will be.

Jonas: One point on reputation hazards: I think this is particularly important for small projects, because even a small project could lead to a lot of PR that is perceived as representative of the whole EA community. That’s a really big lever that even a smaller project could push on.

Now that we’ve talked about those direct costs, our following points will be about opportunity costs.

The first cost is simply taking up resources. If you fundraise or recruit talent from the EA community, it may be the case that these resources could have been used more effectively somewhere else. There are also more subtle costs: for example, growing a field imposes coordination costs. So before you start a project, it makes sense to think about what would happen otherwise, and which other projects might be competing for the same resources.

Of course, the idea shouldn’t be “Don’t ever fundraise from EA funding sources, because the money might be used in some other way.” Instead, you just want to think about these tradeoffs carefully. You might think about whether your project has a chance of being more effective than another new project in the same field, for example.

Max: A bit of a counterpoint there: when you’re asking people to fund your project, or to come and work for you, they are also thinking about whether it’s a good idea, which means you have a bit more of a “sanity check” in that situation. This may reduce how much you need to worry. I do feel good about the norm that projects should just try to fundraise and hire people.

Jonas: If you’re really persuasive or charismatic, and good at convincing people to support you even when they shouldn’t, you might still worry a bit about this, but I generally agree with Max.

It’s also important to say that some projects have the opportunity to generate new resources for the community. They may draw in a new group of people who then end up contributing to the community in other ways, or create better ways to use existing talent, or help people develop new experience and skills, and so on.

The next point I’ll make is about lock-in effects and first-move effects. If you start a new national group in a country, it becomes much less likely that someone else will come along and do the same thing later. So you should really think about whether, if you don’t do this, the next person to try might do a better job.

The third point I would like to make is about what we’ve termed ‘drift’. The idea is that doing a project might subtly change the EA community in certain ways – both the culture and the focus of what people work on. One example: if several people in EA did a lot of economic modeling to figure out strategies or interventions around EA questions, modeling might become more fashionable in EA and draw in more economists.

Max: One way of making that point: imagine that what people in the EA community work on in 2019 is based partly on work done in 2018. If you run a project in 2018 that is just on the edge of what seems effective or sensible, someone might anchor off your project in 2019 and do something less effective or sensible. If this process keeps going, year after year, you could end up with a community where people work on really ineffective projects. This is what it means for a project to shift the trajectory of the movement.

Jonas: Of course, it could also be possible that you have discovered something really important, such that it’s useful to shift the trajectory of the movement towards that thing. Again, many of these negative effects can also be positive, depending on the project.

Max: It’s also true that as the community grows, we’ll have more ability to specialize and to explore new areas, while also perhaps hitting diminishing returns on older ideas. New projects will be important to help EA “spread out”, which could be a good thing.

Jonas: Wrapping up: these were the main ways in which we think your project could have a positive or negative impact. One more thing worth mentioning is that all of these effects depend on the cause area you’re working in. If we imagine that you’re working on a new project in global development, it may be harder to have a negative impact there, because that’s a fairly established field and it’s unlikely that you’ll harm the field as a whole. In areas with a lot of scientific literature showing what works and what doesn’t, it’s not as easy to accidentally produce these types of negative outcomes. In newer and more uncertain areas, such as AI policy, it’s much easier to do more harm than good.

We’d like to end with a few action points.

First of all: If, having heard this talk, you think that you’re unusually likely to coordinate well with the community, without accidentally overlooking these negative impacts, chances are that you should actually do your project, and not be discouraged by this.

Max: The slightly glib version of this is that if you’re in this room, then that’s a good sign.

Jonas: And then, there are a bunch of things you can do to avoid these negative effects. Just learning a lot about your field, and trying to train important skills before you start a project, is really helpful.

A general piece of advice: develop as much relevant skill and knowledge as you can, so that you can apply it to make your project better and to avoid these negative side effects.

Max: Related to the skill point, you want to find a project that’s a good fit for the skills you’ve already developed. You generally don’t want to think: “This area is the most important one, so even if I don’t have great skills for it, I still want to start a project.” It’s better to do something that you’re a good fit for.

Also, asking for feedback from the community is a good way to work out whether it’s a good idea to do a project, and what things you might be missing. Lots of the negative effects we’ve talked about are things you can navigate around. Rather than treating the decision as a binary “do the project or don’t do the project”, you want to think about how you could do the project well, and getting feedback is a good way to do that.

One aspect of this is really listening to the feedback, which is hard. I know from experience that it’s difficult when people tell you that your project is bad, but that doesn’t mean you shouldn’t update on that. It’s fine to take a week or two to process negative feedback before moving on. You also want to make it easy for people to give feedback – not demanding a lot of their time, maybe creating a quick one-page summary of your idea, and so on.

Jonas: And finally, you should actually implement the feedback (if doing so seems wise). I’ve made this mistake myself, and seen others do it: asking for a lot of feedback and then moving ahead with your original plan anyway. Instead, take feedback really seriously, and be open to updating your plans.