This seems very likely.
Your model looks at loss of existing skills—I wonder if you’ve considered children and young people who never have the opportunity to experience the friction and learn the skills in the first place?
Wholeheartedly agree, and I think the principles apply more widely across a range of cause areas. When the right people aren’t involved in designing solutions, external people coming in with “clever” fixes can create PlayPump-type problems, as well as the more general suboptimal “making the best of what’s already out there” issues you highlight.
I’m pretty new to EA, so maybe I’ve missed it, but I’ve not seen any discussion of User Centred Design in an EA context. UCD feels like an approach which helps to make sure that solutions are solving the right problems. Does EA need to embrace UCD more?
I want to be clear—I don’t think these people haven’t achieved anything or done good, but e.g. 80,000 Hours’ impact is indirect rather than direct. I’m not saying we shouldn’t celebrate these people, but if we only focus on community building and meta-level activity, there’s a risk EA ends up in an abstracted, MLM-like space. My point is that we shouldn’t only celebrate EAs who create public discourse and the infrastructure to support more people becoming (better) EAs.
Yes, we are in total agreement. https://gradual-disempowerment.ai/ is a scary and relevant description of the concentration of wealth and power.
I think it’s about the framing of AI for good. The “AI for good” narrative is mostly asking “what can AI do?”, and as you say, this just leads to sticking plasters—and at worst, to technical people designing solutions to problems they don’t really understand.
I think the question in AI for good instead needs to be “How do we do AI?”. This means looking at how the public are involved in the development of AI, how people can have a stake, and how the public, rather than corporations, can oversee and benefit from it.
https://publicai.network/ are making headway on some of this thinking.
Personally, I don’t think that there’s a tension between niche applications of AI and governance/counter power AI systems. I think the answer is to create the niche applications with the public, and in ways that empower the public. For example, how can the public have greater control over their data and share in the profits from its use in AI?
I’m not sure I agree with the premise of this argument: that the concept of AI for good is faulty, because it can’t solve all the problems.
I don’t think “AI for good” claims to solve all the problems. Absolutely let’s take issue with the idea that AI is going to resolve everything, but that doesn’t mean it can’t help with anything.
But I’m not worried that AI won’t touch the fundamental problems of “social structures, economic pressures, and unequal opportunities”. I’m worried that it already is touching them, and is moving the dial in the wrong direction. Automation moves wealth and power away from individuals and towards companies. The concentration of wealth and power in the hands of an ever smaller number of individuals and companies is exactly what drives economic and social problems and inequality.
Unless AI is governed and managed appropriately, it’s going to be part of the problem, more than part of the solution.
I think this op-ed sets out some of these issues really well: https://nathanlawkc.substack.com/p/its-time-to-build-a-democracy-ai
I note that the suggested role models are all thinkers rather than doers. I worry that, in a world of influencers and celebrities, we celebrate public profile more than concrete impact. Yes, influencers can lead to concrete impact, but if everyone wants to be an influencer or a public intellectual, and sees that as the most impactful thing to do, then who’s actually going to do the hard work of changing laws, of earning to give, of taking the concrete steps that reduce suffering?
All of which to say: show me your role models who have directly improved the world, not just the people who have told others that they should.
Lots of the experience described here, of living in the “fast world”, has significant overlap with manic or hypomanic episodes for those who experience bipolar disorder. A balance of fast and slow might be essential for some people, in order to maintain mental health / a grip on reality.
I’m not at all trying to diagnose the author: it may well be that some folks can experience these things in a perfectly mentally healthy way.
However, I know from my own experience that the ‘fast world’ of mania or hypomania can be just as damaging as, or even more damaging than, despair and depression.
Feeling an overriding sense of urgency about a topic of extreme importance, talking and thinking much faster than usual, being unusually productive and working much longer hours, a sense of self-importance or a need to share one’s unique insights, feeling that one is special or has a particular power over the future of humanity, feeling a sense of alienation from social norms, making decisions that others might consider reckless, a greater need to enjoy life and indulge in hedonism—all of these resonate very strongly with manic or hypomanic episodes.
Now, I’m not saying the author is doing all those things, but they are things which the fast world can encourage. And you know what, it’s really hard to distinguish between grandiose delusions and genuine rational conclusions when you are working on things which do feel like they are immediately critical to the future of humanity.
If others can be stable, sane and genuinely productive living in a world of fast work and fast life, I wish them all the best. But for others, some balance and slowness might be required to maintain mental health. Unsustainable productivity is not morally superior if it leads to burnout, opportunity cost, and increased suffering.
Agreed on Tay-Sachs and other diseases which cause suffering.
That’s not the same as gene-editing and embryo selection for “smarter” kids. That’s making a moral judgement about the value of someone’s life based on their intelligence. By your logic, if we tell people not to drink alcohol when pregnant, should we also prevent those with lower intelligence from passing on their genes?
I think the problem is your argument wasn’t for “happy” children, it was for “smart and healthy” children. And that’s where it sounds a bit eugenicist.
What if being particularly intelligent makes people less happy? The evidence is mixed, but I rather suspect there are many EAs who wouldn’t necessarily see their intelligence as a source of happiness, though neither would they choose to give it up.
And with health, the same challenge applies. Neurodivergence is probably over-represented amongst EAs, but I don’t think many people are saying it shouldn’t exist.
I believe that genetic and phenotype diversity is beneficial to any population. And from a human perspective, I believe differences of experience are culturally and morally valuable—in that they force us to expand our empathy to others who are not like us. Activity that has the effect of limiting that diversity, and entrenching economic inequality, has the potential to have net negative impacts on humanity, even if there are benefits at the individual level.
This is well written and engaging, thank you.
I’m coming in with an anarchist-adjacent perspective.
I have many friends who espouse many of these ideas. They talk about systems level change. They talk about the challenge of imagining anything outside of the current paradigm. They talk about non-zero sum approaches, they talk about moving beyond competition. They talk about economic growth being a poor measure of utility.
They don’t call themselves “metacrisis” people, though, and they don’t see their enemy as “modernity”.
They call themselves anarchists and they see capitalism as the enemy.
That sounds like I’m putting your argument down—I’m not. Anarchism is poorly defined and often misunderstood, and I don’t think it actually has the answers.
But I don’t think the concept of the metacrisis is preparadigmatic—so many of its concepts are already there among the anarchists, anticapitalists, postmodernists, and critical theorists. And at the same time, many of the same concepts and challenges are there in the underpinnings of populism and authoritarianism.
As far as I can see, the metacrisis is an issue of systems of power that do not deliver optimal outcomes. This is by no means a preparadigmatic problem. It’s not a problem we’ve solved, but let’s not pretend it’s a new concept.
“Someone could create some popular media that depicts Taiwan resisting (A Formosan “Red Dawn”)”
This is a really interesting idea. In the UK at least, some of the most impactful public campaigns have come through TV and film drama—I’m thinking of Threads, Adolescence, and Mr Bates, but the list goes on.
Are there any EA organisations that fund creation of popular media in order to deliver impact? If not, why not?
Thanks, I have amended the title for clarity.
I think your final point is part of what I am getting at—that policy doesn’t happen on the basis of good ideas alone, and that there is bureaucracy, and there are operational, political, and budgetary constraints on what can be done. As such, simply suggesting good ideas or making compelling arguments to politicians or civil servants isn’t what works, whether you’re a civil servant or a campaigner. What works is providing neat approaches that operate within the system, that use the political, budgetary, and operational factors in their favour, and that make your peers look good, perhaps even giving them the chance to claim the idea as their own.
There’s huge friction within the system, and you can either see that friction as a barrier, or you can see the friction as the thing that gives you grip, the fixed points that you can use as leverage.
That’s what I mean by policy being the art of the possible.
Whilst people here will be swayed by evidence and logical argument, we risk wasting resources if we act on the assumption that these are sufficient to make an impact in the wider world.
As a government official, I think you vastly overestimate the level of knowledge, responsiveness, and expertise of top government officials, especially when it comes to medium-term new and emerging risks. Personally, I would discount this data point from your analysis entirely.
Yes, I completely agree with this. I wrote a post coming from a similar perspective:
https://forum.effectivealtruism.org/posts/ZHLsvuBhydFMtmYcL/a-flaw-in-the-influence-uk-policymakers-to-make-an-impact
Essentially, I think there’s a point where people’s theory of change gets fuzzy, and they think that the right evidence, information, or technical solution will be sufficient for success because national/international policymakers will adopt the findings/recommendations.
Policy development is a skill that can be learned—it’s not an unknown field, it’s just not a skill that is within most academics’ or tech developers’ experience.
I would also suggest that looking for broad policy solutions that are panaceas is about as realistic as hoping to find a single bit of code that makes AI safe. An international agreement on AI is not a stand-alone thing; it will be built on the foundations of national, sector-level, state or regional, and industry-level experience, guidelines, legislation, and regulations, which build over time. Policy is about making steps in the right direction; there’s never a silver bullet.
Feel free to message me if you would like to discuss hands-on experience of AI policymaking in more detail.