Can you give some examples here? What are some uncomfortable questions about AI safety (that a journalist might ask)?
Sure! So far there are arguments showing how general AI, or even superintelligence, could be created, but the timelines vary immensely from researcher to researcher, and there is no data-based evidence that would justify pouring all this money into it. EA is supposed to be evidence-based, and yet all we seem to have are arguments, not evidence. I understand this is the nature of these things, but it's striking how rigorous EA tries to be when measuring GiveWell's impact versus the impact created by pouring money into AI safety for longtermist causes. It feels like impact evaluation for AI safety is non-existent at worst and not good at best (see the useful post by RP about impact-based evidence regarding x-risks).