Thanks for the post. I'm not sure if it's relevant, but Open Philanthropy gave the WEF a $50k grant related to AI risks last year, and there's an AI-related position at the WEF on the 80,000 Hours job board.
My answers: yes, it's helpful, even if their assessment isn't "up to EA standards"; that's not a problem, since we can't rely only on information from within the EA community.
I think EAs should want to influence the report, because it seems influential and broadly aligned with EA priorities.