Discussing specific examples seems very tricky. I could probably come up with a list of maybe 10 projects or actions that carry large downside risks, but I expect listing them would not be that useful and could cause controversy.
A few hypothetical examples:
influencing a major international regulatory organisation in a way that leads to some sort of “AI safety certification” before we have the basic research to back it up, creating a false sense of security or a fake sense of understanding
creating a highly distorted version of effective altruism in a major country, e.g. through bad public outreach
coordinating the effective altruism community in a way that increases tension and possibly splits the community
producing and releasing infohazardous research
influencing important players in AI or AI safety in a harmful, leveraged way, e.g. by giving bad strategic advice