In my opinion there is a probability of >10% that you are right, which would mean AGI will be developed soon and some of the hard problems mentioned above have to be solved. Do you have any reading suggestions for people who want to find out whether they are able to make progress on these questions? There is a lot of material on the MIRI website. Something like “You should read this first.”, “This is important intermediate stuff.”, and “This is cutting-edge research.” would be nice.
I’d mainly point to relatively introductory / high-level resources like the Alignment research field guide and Risks from learned optimization, if you haven’t read them. I’m more confident in the relevance of the methodology and problem statements than in the existing attempts to make inroads on the problem.
There’s a lot of good high-level content on Arbital (https://arbital.com/explore/ai_alignment/), but it’s not very organized and a decent amount of it is in draft form.