“EAs aren’t giving enough weight to longer AI timelines”
(The timelines until transformative AI are very uncertain. We should, of course, hedge against it coming early when we are least prepared, but currently that is less of a hedge and more of a full-on bet. I think we are unduly neglecting many opportunities that would pay off only on longer timelines.)
I think that this question will be better if it is framed not in terms of the EA community, for two reasons:
1. The reasoning about the object-level question, involving timelines and different intervention strategies, is very interesting in itself, and there’s no need to add the extra layer of understanding what the community is doing and how, practically, it could and should adjust.
2. It would signal-boost a norm of focusing less on intra-movement prioritization and more on personal or marginal additional prioritization, and on object-level questions.
For example, I like Dylan’s reformulation attempt because it is about object-level differences. Another option would be to ask about the next $100K invested in AI safety.
Whether it is true or not depends on the community, and the point I’m making is primarily about EAs (and EA-adjacent people). It might also be true for the AI safety and governance communities. I don’t think it is true in general, though: most citizens and most politicians are not giving too little weight to long timelines. So I’m not sure the point can be made if this reference is removed.
Also, I’m particularly focusing on the set of people who are trying to act rationally and altruistically in response to these dangers, and are doing so in a somewhat coordinated manner. For example, a key aspect is that the portfolio is currently skewed towards the near term.
Re the first point, I agree that the context should be related to a person with an EA philosophy.
Re the second point, I think that discussions about the EA portfolio are often interpreted as zero-sum or tribal, and may cause more division in the movement.
I agree that most of the effects of such a debate are likely about shifting around our portfolio of efforts. However, there are other possible effects (recruiting/onboarding/promoting/aiding existing efforts, or increasing the amount of total resources by getting more readers involved). Also, a shift in the portfolio can happen as a result of object-level discussion, and it is not clear to me which way is better.
I guess my main point is that I’d like people in the community to think less about what the community should think. Err... oops.
Perhaps “Long timelines suggest significantly different approaches than short timelines” is more direct and under-discussed?
I think median EA AI timelines are actually OK; it’s more that certain orgs and individuals (like AI 2027) have tended toward extremity in one way or another.
The point I’m trying to make is that we should have a probability distribution over timelines with a chance of short, medium or long — then we need to act given this uncertainty, with a portfolio of work based around the different lengths. So even if our median is correct, I think we’re failing to do enough work aimed at the 50% of cases that are longer than the median.
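To make that concrete, here is a minimal sketch with made-up numbers (the probabilities and payoffs below are illustrative assumptions, not anyone’s actual estimates) of how a portfolio matched to a timelines distribution can beat one that bets heavily on short timelines:

```python
# Toy illustration with invented numbers: expected payoff of two effort
# portfolios under an assumed probability distribution over AI timelines.

# Assumed probabilities of when transformative AI arrives.
timeline_probs = {"short": 0.3, "medium": 0.4, "long": 0.3}

# Assumed payoff of a unit of effort aimed at each timeline, conditional on
# which timeline actually obtains (work aimed at short timelines is worth
# little if AI arrives late, and vice versa).
payoff = {
    "short":  {"short": 1.0, "medium": 0.3, "long": 0.1},
    "medium": {"short": 0.2, "medium": 1.0, "long": 0.4},
    "long":   {"short": 0.0, "medium": 0.3, "long": 1.0},
}

def expected_value(allocation):
    """Expected payoff of an effort allocation over {short, medium, long} work."""
    return sum(
        p_world * sum(allocation[work] * payoff[work][world] for work in allocation)
        for world, p_world in timeline_probs.items()
    )

# A portfolio that bets almost everything on short timelines...
skewed = {"short": 0.8, "medium": 0.15, "long": 0.05}
# ...versus one roughly matched to the timeline distribution.
matched = {"short": 0.3, "medium": 0.4, "long": 0.3}

print(f"skewed portfolio:  {expected_value(skewed):.2f}")
print(f"matched portfolio: {expected_value(matched):.2f}")
```

Under these assumed numbers the matched portfolio has the higher expected payoff; the real disagreement, of course, is over what the probabilities and the payoff table actually look like.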
I think that is both correct and interesting as a proposition.
But the topic as phrased seems more likely to get mired in more timelines debate than to reach this proposition, which is a step removed from:
1. What timelines and probability distributions are correct
2. Are EAs correctly calibrated
And only then do we get to
3. EAs are “failing to do enough work aimed at longer-than-median cases”.
- Arguably my topic, “Long timelines suggest significantly different approaches than short timelines”, sits between 2 and 3.
I think the opposite: EAs aren’t giving enough weight to present AI harms.