Not sure who needs to hear this, but Hank Green has published two very good videos about AI safety this week: an interview with Nate Soares and a SciShow explainer on AI safety and superintelligence.
Incidentally, he appears to have also come up with the ITN framework from first principles (h/t @Mjreard).
Hopefully this is auspicious for things to come?
Hank Green should attend an EAG next year.
Only if someone’s inviting him perhaps? @akash 🔸
so true
My understanding is that they already raise and donate millions of dollars per year to effective projects in global health (especially tuberculosis)
For what it’s worth, their subreddit seems a bit ambivalent about explicit “effective altruism” connections (see here or here)
Btw, I would be surprised if the ITN framework was independently developed from first principles:
He says exactly the same 3 things in the same order
They have known about effective altruism for at least 11 years (see the top comment here)
There have been many effective altruism themed videos in their “Project for Awesome” campaign over several years
They have collaborated several times with 80,000 Hours and Giving What We Can
There are many other reasonable things you can come up with (e.g. urgency)