Here’s a thought you might have: “AI timelines are so short that they significantly reduce the net present value[1] of {high school, college} students.” I think that view is tempting based on e.g. (1, 2). However, I claim there’s a “play to your outs” effect here. Not in an “AGI is hard” way, but in a “we slow down the progress of AI capability development” way.
In those worlds, we get more time. And with that extra time, it sure would seem great to have a substantial fraction of the young workforce caring about x-risk / thinking with EA principles. Given the success we’ve had historically in convincing young people of the value of these ideas, it still seems pretty promising for some portion of our community to keep putting work into doing so.
Many young people (zoomers) care a lot about climate change, as far as I know. I think adding the hopefully-small thought of “how about going over all the things that might destroy the world and prioritizing them (instead of sticking with the first one you found)” might go a long way.
[1] By this I mean the lifetime impact of the student, discounted over time.