While I like the video, stories like this often sensationalize the issue ("by 2027, AI will kill us all") without offering good actionable steps. It feels similar to climate messaging from environmentalists who say, "We're all going to die by 2030 because of climate change! Protest in the streets!"
I know stories are very effective at communicating the urgency of AGI, and the end of the video does point viewers to resources such as 80k. Nonetheless, it left me with a sense of dread ("oh gosh, there's nothing I can do"), and that effect is likely compounded by YouTube's younger audience (many college students, for example, won't even graduate until after 2027).
Therefore, I suggest that later videos offer concrete actionable steps, or point to areas someone could work in, for anyone who wants to reduce risks from AI. Not only would that relieve the doomerism, it would give people genuinely relevant advice for working on AI.