The responsible thing to do is to go look at the balance of what experts in a field are saying, and in this case, they're fairly split.
This is not a crux for me. I think if you were paying attention, it was not hard to be convinced that AI extinction risk was a big deal in 2005–2015, when the expert consensus was something like “who cares, ASI is a long way off.” Most people in my college EA group were concerned about AI risk well before ML experts were concerned about it. If today’s ML experts were still dismissive of AI risk, that wouldn’t make me more optimistic.
Oh, I agree that if one feels equipped to go actually look at the arguments, one doesn’t need any argument-from-consensus. This is just, like, “if you are going to defer, defer reasonably.” Thanks for your comment; I feel similarly/endorse.
Made a small edit to reflect this.