Interesting perspective, although I’m not sure how much we actually disagree. “Complicated and open”, to me, reads as “difficult”.
Is there a rephrasing of the initial statement you would endorse that makes this clearer? I’d suggest “If you apply a security mindset (Murphy’s Law) to the problem of AI alignment, it should quickly become apparent that we do not currently possess the means to ensure that any given AI is safe.”
Yes, I would endorse that phrasing (maybe s/”safe”/”100% safe”/). Overall I think I need to rewrite and extend the post to spell things out in more detail. Also change the title to something less provocative[1], because I get the feeling that people are knee-jerk downvoting without even reading it, judging by some of the comments (e.g. I’m having to repeat things I already refer to in the OP).
[1] Perhaps “Why the most likely outcome of AGI is doom”?