I feel like I have a much better sense of what the current approaches to alignment are, what people are working on, and how underdeveloped the field is. In general, it's been a while since I've spent time studying anything, so it felt fun just to dedicate time to learning. It also felt empowering to take a field that I've heard a lot about at a high level and make it clearer in my mind.
I think doing the Week 0 readings is an easy win for anyone who wants to demystify some of what is going on in ML systems, which I think should be interesting to anyone, even if you're not interested in alignment.
I became much more motivated to work on making AI go well over the period of the course, I think mainly because it made the problem more concrete, but also likely just because I spent more time thinking about it. That said, it's hard to disentangle this increased motivation from recent events and other factors.