Hi Håkon, Arden from 80k here.
Great questions.
On org structure:
One question for us is whether we want to create a separate website (“10,000 Hours?”) that we cross-promote from the 80k website, or to change the 80k website a bunch to front the new AI content. That’s something we’re still thinking about, though I am currently weakly leaning toward the latter (more on why below). But we’re not currently thinking about making an entire new organisation.
Why not?
For one thing, it’d take a lot of time and work, and we feel this shift is urgent.
Primarily, though, 80,000 Hours is a cause-impartial organisation, and we think that means prioritising the issues we think are most pressing (& telling our audience why we think that).
What would be the reason for keeping one 80k site instead of making a second, separate one?
As I wrote to Zach above, I think the site currently doesn’t do a good job of representing the possibility of short AI timelines or the variety of risks AI poses, even though it claims to be telling people key information they need to know to have a high-impact career. I think that is key information, so I want it included very prominently.
As a commenter noted below, it’d take time and work to build up an audience for the new site.
But I’m not sure! As you say, there are reasons to make a separate site as well.
On EA pathways: I think Chana covered this well – it’s possible this will shrink the number of people getting into EA ways of thinking, but it’s not obvious. AI risk doesn’t feel so abstract anymore.
On reputation: this is a worry. We do plan to express uncertainty about whether AGI will indeed progress as quickly as we worry it will, and to be clear that if people pursue a route to impact that depends on fast AI timelines, they’re making a bet that might not pay off. However, we think it’s important, both for us & for our audience, to act under uncertainty, using rules of thumb but also thinking about expected impact.
In other words – yes, our reputation might suffer from this if AI progresses slowly. If that happens, it will probably be worse for our impact, but better for the world, and I think I’ll still feel good about having expressed our (uncertain) views on this matter when we held them.