Hey there!
Can you describe your meta process for deciding what analyses to work on and how to communicate them? Analyses about the future development of transformative AI can be extremely beneficial (including via publishing them and getting many people more informed). But getting many people more hyped about scaling up ML models, for example, can also be counterproductive. Notably, The Economist article that you linked to shows your work under the title “The blessings of scale”. (I’m not making here a claim that that particular article is net-negative; just that the meta process above is very important.)
OBJECT LEVEL REPLY:
Our current publication policy is:
- Any Epoch staff member can object when we announce an intention to publish a paper or blogpost.
- We then have a discussion about it. If we conclude that there is a harm and that the harm outweighs the benefits, we refrain from publishing.
- If no consensus is reached, we discuss the issue with some of our trusted partners and seek advice.
- Some of our work that is not published is instead disseminated privately on a case-by-case basis.
We think this policy has a good mix of being flexible and giving space for Epoch staff to raise concerns.
Out of curiosity, when you “announce intention to publish a paper or blogpost,” how often has a staff member objected in the past, and how often has that led to major changes or not publishing?
I recall three in-depth conversations about particular Epoch products. None of them led to a substantive change to the publication or its content.
OTOH I can think of at least three instances where we decided to not pursue projects or we edited some information out of an article guided by considerations like “we may not want to call attention about this topic”.
In general I think we are good at preempting when something might be controversial or could be presented in a less conspicuous framing, and acting on it.
Cool, that's what I expected; I was just surprised that your comment above focused on intervening after something had already been written, and on the intervention being "don't publish" rather than "edit".
Why’d you strong-downvote?
That's a good point; I expect most of these discussions to lead to edits rather than publications.
I downvoted because 1) I want to discourage more conversation on the topic, and 2) I think it's bad policy to ask organizations whether they have any projects they decided to keep secret (because if it's true, they might have to lie about it).
In hindsight I think I was overthinking this, and I have retracted my downvotes on this thread of comments.
META LEVEL REPLY:
Thinking about the ways publications can be harmful is something that I wish were practiced more widely in the world, especially in the field of AI.
That being said, I believe that in EA, and in particular in AI Safety, the pendulum has swung too far—we would benefit from discussing these issues more openly.
In particular, I think that talking about AI scaling is unlikely to goad major companies into investing much more in AI (there are already huge incentives). And I think EAs and people otherwise invested in AI Safety would benefit from having access to the current best guesses of the people who spend the most time thinking about the topic.
This does not exempt Epoch and others working on AI Strategy from the responsibility to be mindful of how their work could result in harm, but I felt it was important to argue for more openness on the margin.