A few examples are mentioned in the resources linked above. The most well-known and commonly accepted one is Intentional Insights, but I think there are quite a few more.
I generally prefer not to make negative public statements about well-intentioned EA projects, which is probably why the examples might not be salient to everyone.
I wasn’t asking for examples from EA, just the type of projects we’d expect from EAs.
Do you think Intentional Insights did a lot of damage? I’d say it was recognized by the community and handled pretty well while doing almost no damage.
As I also say in my above-linked talk, if we think that EA is constrained by vetting and by senior staff time, things like InIn have a very significant opportunity cost because they tend to take up a lot of time from senior EAs. To get a sense of this, just have a look at how long and thorough Jeff Kaufman’s post is, and how many people gave input/feedback—I’d guess that’s several weeks of work by senior staff that could otherwise go towards resolving important bottlenecks in EA. On top of that, I’d guess there was a lot of internal discussion in several EA orgs about how to handle this case. So I’d say this is a good example of how a single person can have a lot of negative impact that affects a lot of people.
The above-linked 80k article and EAG talk mention a lot of potential examples. I’m not sure what else you were hoping for? I also gave a concise (but not complete) overview in this Facebook comment.