I’m a little unclear on what we can actually do there that will help at this early stage
I’d suggest that this is a failure of imagination (sorry, I’m really not trying to criticise you, but I can’t find another phrase that captures my meaning!)
Suppose we take it for granted that we aren’t going to be able to make any real research progress until we’re much closer to AGI. It still seems like there are several useful things we could be doing:
• We could be helping potential researchers to understand why AI safety might be an issue, so that when the time comes they aren’t like “That’s stupid, why would you care about that!”. Note that views tend to change generationally, so you need to start here early.
• We could be supporting the careers of policy people (such as by providing scholarships), so that they are more likely to be in positions of influence when the time comes.
• We could iterate on the AGI safety fundamentals course so that it is the best possible introduction to the issue at any particular time, even if we later need to update it.
• We could be organising conferences, fellowships and events so that we have experienced organisers available when we need them.
• We could run research groups so that our leaders have experience with the day-to-day of these organisations and already have a pre-vetted team in place for when they are needed.
We could try some kind of drills or practice instead, but I suspect that the best way to learn how to run a research group is to actually run a research group.
(I want to further suggest that if someone had offered you $1 million and asked you to figure out ways of making progress at this stage, you would have had no trouble finding things that people could do.)