Elite colleges hoard seats, fix prices, and capture government aid. Even worse, they increasingly produce shoddy research that can’t be replicated, and the research that can be replicated often isn’t particularly groundbreaking.
Breaking up elite colleges to fix their anti-competitive conduct and the basic-research bottleneck is a very high-leverage intervention for long-term progress. I'll be writing more about these ideas going forward, but I wanted to introduce myself and my Substack with this forum post.
I agree with others here that it’s not clear whether undifferentiated scientific progress is good or bad at the current margin.
However, assuming scientific progress is good, I’m also not convinced that breaking up elite colleges will increase scientific progress. Some counterpoints:
- Having the smartest people in the same room might increase net scientific progress.
- Giving them resources is probably good.
- (Less certain) There might be increasing returns to scale: having 100 super-smart people in one place may be better than 20 places with 5 super-smart people each, since ideas can cross-pollinate beyond your small workgroup, it's easier to find personality matches, and so on.
- (Much less certain) Colleges select on proxies for future power other than smarts, but this also isn't clearly bad for scientific progress.
  - It's probably easier to do science when you come from a more privileged background and have relatively fewer worries in life.
I suppose all your points would be satisfied as long as the breaking up of colleges happens in what seems to me a pretty reasonable way, e.g. by not forcing the new colleges to stay small and non-elite? I understood the main benefit to be removing the current, possibly suboptimal college administrations and replacing them with better management that avoids the current problems.
I am familiar with this line of thinking, and I am pretty sympathetic to it. (I don't think that literally breaking up universities, antitrust-style, would lead to more research happening, but it might perhaps lead to research on more useful topics, or something like that. It might also help reduce the cost of living for ordinary folks by limiting/taxing the amounts people spend on education-related signaling, which would be great.) I see "encouraging more competition in education", which includes both taxing incumbent top schools like Harvard and encouraging the formation of many new types of schools, as something that could be helpful to humanity from a progress-studies perspective of encouraging general economic growth and human thriving.
For better or worse, Effective Altruism often prefers to prioritize the most effective cause areas extremely heavily, which can leave a lot of progress-studies-ish causes without a good place in EA even when their effects are pretty huge. Things like YIMBY, metascience, prediction markets, anti-aging research, charter cities, and increased high-skill immigration might be huge boons for humanity, but these general interventions can sometimes feel like they've been orphaned by the EA movement, like "middle-term" cause areas lost between longtermism (which dominates on effectiveness) and neartermism (which prefers things to be empirically provable and relatively non-political).
I say all this to explain that usually I am fighting on behalf of the middle-termist causes, arguing that prediction markets are a great general intervention for civilization, whereas many EAs would prefer to just use some prediction techniques for understanding AI timelines and not bother trying to scale up markets and improve society's epistemics overall.
But in this situation, the tables have turned! Now I find myself in the opposite role. I agree with you that encouraging competition in higher education would be good, and I hope it happens, but I am like "Meh, is this really such a big problem that it should become an important EA cause area?" Instead of this general intervention, why not do something more focused, like deliberately exploiting the broken higher-education signaling game by purchasing influence at an elite university and then using that platform to focus more energy on core cause areas like AI safety: https://forum.effectivealtruism.org/posts/CkEsn3gjaiWJfwHHr/what-brand-should-ea-buy-if-we-had-to-buy-one?commentId=GKp8cwXSpXp6Jfb8H
It's not obvious that undifferentiated scientific progress is net bad either. Scientific progress increases our wealth and allows us to spend a larger fraction of it on safety than we otherwise would. I'd much prefer to live in a world where we can afford both nukes and safety measures than in a world where we could only afford the nukes.
Scientific progress has been the root of so much progress that I think we should have a strong prior that more of it is good!
See discussion here.