HAIST is probably the best AI safety group in the country; they have office space quite near campus and several full-time organizers.
Yup, I’d say that from the perspective of someone who wants a good AI safety (/EA/X-risk) student community, Harvard is the best place to be right now (I say this as an organizer, so grain of salt). Not many professional researchers in the area though which is sad :(
As for the actual college side of Harvard, here’s my experience (as a sophomore planning to do alignment):
- Harvard doesn’t seem to have up-to-date CS classes for ML. If you want to learn modern ML, you’re on your own (or with your friends! Many HAIST people have self-studied ML together or taken MLAB).
- Grade inflation is huge. You can get most of a degree by doing around 15-20 hours of schoolwork a week if you half-ass it with everything you’ve got.
- You can get credit for alignment upskilling through independent study, making your non-alignment workload even smaller. I’m planning to do this at some point and might have thoughts later.
- There are some great linear algebra and probability classes at Harvard, both of which are very useful for AI safety.
- Prereqs seem super flexible most of the time. I’ve applied to at least 2 or 3 classes without having the formal prereqs in place, and a few sentences describing my experience were enough to get me in every time.
- There are some required classes (such as a few GENED courses) that will probably not be very useful for alignment, but you can make all of them either fun or basically zero-effort. One of them, Evolving Morality: From Primordial Soup to Superintelligent Machines, is partly about AI safety, and it’s great! Strongly recommend taking it at some point.
If community-building potential is part of your decision process, then I would consider not going to Harvard, since there are already a bunch of people there doing great things. MIT, Stanford, and other top universities seem much more neglected in that regard, so if you could see yourself doing community building, I’d keep that in mind.
Minor nitpick, but I don’t think any of the organizers were running it full-time. I know of three who were close to that level, but the full-time ops people do ops for multiple orgs, and the full-time alignment people spend some of their time doing alignment research, not just running HAIST.
But you are right that HAIST has lots of organizers and tons of programs, and I’d go so far as to say it’s probably the best place in the world to be a first-year college student interested in learning about alignment right now. The only downside is that there aren’t a lot of professional alignment researchers, but that problem exists nearly everywhere. Perhaps Berkeley (specifically CHAI) is better in that regard.