I am familiar with this line of thinking, and I am pretty sympathetic to it. (I don’t think that literally breaking up universities, antitrust style, would lead to more research happening, but it might perhaps lead to research on more useful topics, or something like that. It might also help reduce cost of living for ordinary folks by limiting/taxing the amounts people spend on education-related signaling, which would be great.) I see “encouraging more competition in education”, which includes both taxing incumbent top schools like Harvard and also encouraging the formation of many new types of schools, as something that could be helpful to humanity from a progress-studies perspective of encouraging general economic growth and human thriving.
For better or worse, Effective Altruism often prefers to focus extremely heavily on the most effective cause areas, which can leave a lot of progress-studies-ish causes without a good place in EA even when their effects are pretty huge. Things like YIMBY, metascience, prediction markets, anti-aging research, charter cities, increased high-skill immigration, etc, might be huge boons for humanity, but these general interventions can sometimes feel like they’ve been orphaned by the EA movement, like “middle-term” cause areas lost between longtermism (which dominates on effectiveness) and neartermism (which prefers things to be empirically provable and relatively non-political).
I say all this to explain that usually I am fighting on behalf of the middle-termist causes, arguing that prediction markets are a great general intervention for civilization, where many EAs would prefer to just use some prediction techniques for understanding AI timelines, and not bother trying to scale up markets and improve society’s epistemics overall.
But in this situation, the tables have turned!! Now I find myself in the opposite role—I agree with you that encouraging competition in higher education would be good and I hope it happens, but I am like “Meh, is this really such a big problem that it should become an important EA cause area?” Instead of this general intervention, why not do something more focused, like deliberately exploiting the broken higher-education signaling game by purchasing influence at an elite university and then using that platform to focus more energy on core cause areas like AI safety: https://forum.effectivealtruism.org/posts/CkEsn3gjaiWJfwHHr/what-brand-should-ea-buy-if-we-had-to-buy-one?commentId=GKp8cwXSpXp6Jfb8H