I wasn’t suggesting only hiring people who believe in short timelines. I believe that my original post adequately lays out my position, but if any points are ambiguous, feel free to request clarification.
I don’t know how Epoch AI can both “hire people with a diversity of viewpoints in order to counter bias” and ensure that its former employees won’t try to “cash in on the AI boom in an acceleratory way”. These seem like incompatible goals.
I think Epoch has to either:
Accept that people have different views and will have different ideas about what actions are ethical, e.g., they may view creating an AI startup focused on automating labour as helpful to the world and benign
or
Only hire people who believe in short AGI timelines and high AGI risk and, as a result, bias its forecasts towards those conclusions

Is there a third option?
Presumably there are at least some people who have long timelines, but also believe in high risk and don’t want to speed things up. Or people who are unsure about timelines, but think risk is high whenever it happens. Or people (like me) who think X-risk is low* and timelines very unclear, but even a very low X-risk is very bad. (By very low, I mean something like at least 1 in 1000, not 1 in 10^17. I agree it is probably bad to use expected value reasoning with probabilities as low as that.)
I think you are pointing at a real tension, though. But maybe try to see it a bit from the point of view of people who think X-risk is real enough, and raised enough by acceleration, that acceleration is bad. It’s hardly going to escape their notice that projects at least somewhat framed as reducing X-risk often end up pushing capabilities forward. They don’t have to be raging dogmatists to worry about this happening again, and it’s reasonable for them to balance this risk against the risk of echo chambers when hiring people or funding projects.
*I’m less sure that merely catastrophic biorisk from human misuse is low, sadly.
Why don’t we ask ChatGPT? (In case you’re wondering, I’ve read every word of this answer and I fully endorse it, though I think there are better analogies than the journalism example ChatGPT used).
Hopefully, this clarifies a possible third option (one that my original answer was pointing at).
I think there is a third option, though it’s messy and imperfect. The third option is to:
Maintain epistemic pluralism at the level of research methods and internal debate, while being selective about value alignment on key downstream behaviors.
In other words:
You hire researchers with a range of views on timelines, takeoff speeds, and economic impacts, so long as they are capable of good-faith engagement and epistemic humility.
But you also have clear social norms, incentives, and possibly contractual commitments around what counts as harmful conflict of interest — e.g., spinning out an acceleratory startup that would directly undermine the mission of your forecasting work.
This requires drawing a distinction between research belief diversity and behavioral alignment on high-stakes actions. That’s tricky! But it’s not obviously incoherent.
The key mechanism that makes this possible (if it is possible) is something like:
“We don’t need everyone to agree on the odds of doom or the value of AGI automation in theory. But we do need shared clarity on what types of action would constitute a betrayal of the mission or a dangerous misuse of privileged information.”
So you can imagine hiring someone who thinks timelines are long and AGI risk is overblown but who is fully on board with the idea that, given the stakes, forecasting institutions should err on the side of caution in their affiliations and activities.
This is analogous to how, say, journalists might disagree about political philosophy but still share norms about not taking bribes from the subjects they cover.
Caveats and Challenges:
Enforceability is hard. Noncompetes are legally dubious in many jurisdictions, and “cash in on the AI boom” is vague enough that edge cases will be messy. But social signaling and community reputation mechanisms can still do a lot of work here.
Self-selection pressure remains. Even if you say you’re open to diverse views, the perception that Epoch is “aligned with x-risk EAs” might still screen out applicants who don’t buy the core premises. So you risk de facto ideological clustering unless you actively fight against that.
Forecasting bias could still creep in via mission alignment filtering. Even if you welcome researchers with divergent beliefs, if the only people willing to comply with your behavioral norms are those who already lean toward the doomier end of the spectrum, your epistemic diversity might still collapse in practice.
Summary:
The third option is:
Hire for epistemic virtue, not belief conformity, while maintaining strict behavioral norms around acceleratory conflict of interest.
It’s not a magic solution — it requires constant maintenance, good hiring processes, and clarity about the boundaries between “intellectual disagreement” and “mission betrayal.” But I think it’s at least plausible as a way to square the circle.
So, you want to try to lock AI forecasters into onerous and probably illegal contracts that forbid them from founding an AI startup after leaving the forecasting organization? Who would sign such a contract? This is even worse than only hiring people who are intellectually pre-committed to certain AI forecasts, because it goes beyond a verbal affirmation of their beliefs to actually attempting to legally force them to comply with the (putative) ethical implications of certain AI forecasts.
If the suggestion is simply promoting “social norms” against starting AI startups, well, that social norm already exists to some extent in this community, as evidenced by the response on the EA Forum. But if the norm is too weak, it won’t prevent the undesired outcome (the creation of an AI startup), and if the norm is too strong, I don’t see how it doesn’t end up selecting forecasters for intellectual conformity, because non-conformists would not want to go along with such a norm (just like they wouldn’t want to sign a contract telling them what they can and can’t do after they leave the forecasting company).