Do we have examples of this? I mean, there are obviously wrong examples like socialist countries, but I’m more interested in examples of the types of EA projects we would expect to see causing harm. I tend to think the risk of this type of harm is given too much weight.
I don’t think the risk of this type of harm is given too much weight now. In my model, considerations like this were at some point in the past rounded off to an over-simplified meme like “do not start projects, they fail and it is dangerous”. This is wrong and led to some counterfactual value being lost.
This was to some extent a reaction to the previous mood, which was more like “bring in new people; seed groups; start projects; grow everything”. That mood was also problematic.
In my view we are looking at something like pendulum swings: we were recently at the extreme of few projects being started, but the momentum is in the direction of more projects, and the second derivative is high. So I expect many projects will actually get started. In such a situation the important thing is to start good projects and avoid anti-unicorns.
IMO the risk was maybe given too much weight before, but it is given too little weight now by many people. Just look at many of the recent discussions, where a security mindset seems rare and many want to move forward fast.
Discussing specific examples seems very tricky. I can probably come up with a list of maybe 10 projects or actions that come with large downsides/risks, but I expect that listing them would not be that useful and could cause controversy.
A few hypothetical examples:
- influencing a major international regulatory organisation in a way that leads to some sort of “AI safety certification” before we have the basic research, creating a false sense of security/fake sense of understanding
- creating a highly distorted version of effective altruism in a major country, e.g. through bad public outreach
- coordinating the effective altruism community in a way that leads to increased tension and possibly splits in the community
- producing and releasing infohazardous research
- influencing important players in AI or AI safety in a harmful, leveraged way, e.g. via bad strategic advice
A few examples are mentioned in the resources linked above. The most well-known and widely agreed-upon one is Intentional Insights, but I think there are quite a few more.
I generally prefer not to make negative public statements about well-intentioned EA projects. I think this is probably the reason why the examples might not be salient to everyone.
I wasn’t asking for examples from EA, just the type of projects we’d expect from EAs.
Do you think Intentional Insights did a lot of damage? I’d say it was recognized by the community and handled pretty well, while doing almost no damage.
As I also say in my above-linked talk, if we think that EA is constrained by vetting and by senior staff time, things like InIn have a very significant opportunity cost because they tend to take up a lot of time from senior EAs. To get a sense of this, just have a look at how long and thorough Jeff Kaufman’s post is, and how many people gave input/feedback—I’d guess that’s several weeks of work by senior staff that could otherwise go towards resolving important bottlenecks in EA. On top of that, I’d guess there was a lot of internal discussion in several EA orgs about how to handle this case. So I’d say this is a good example of how a single person can have a lot of negative impact that affects a lot of people.
The above-linked 80k article and EAG talk mention a lot of potential examples. I’m not sure what else you were hoping for? I also gave a concise (but not complete) overview in this Facebook comment.
Just wanted to say I appreciate the nuance you’re aiming at here. (Getting that nuance right is real hard)