The book “Careless People” starts as a critique of Facebook — a key EA funding source — and unexpectedly lands on AI safety, x-risk, and global institutional failure.
I just finished Sarah Wynn-Williams’ recently published book. I had planned to post earlier — mainly about EA’s funding sources — but after reading the surprising epilogue, I now think both the book and the author might deserve even broader attention within EA and longtermist circles.
1. The harms associated with the origins of our funding
The early chapters examine the psychology and incentives behind extreme tech wealth — especially at Facebook/Meta. That made me reflect on EA’s deep reliance (although it is unclear how much, as OllieBase helpfully pointed out after I first published this Quick Take) on money that ultimately came from:
harms to adolescent mental health,
cooperation with authoritarian regimes,
and the erosion of democracy, even in the US and Europe.
These issues are not new (they weren’t to me), but the book’s specifics and firsthand insights reveal a shocking level of disregard for social responsibility — more than I thought possible from such a valuable and influential company.
To be clear: I don’t think Dustin Moskovitz reflects the culture Wynn-Williams critiques. He left Facebook early and seems unusually serious about ethics. But the systems that generated that wealth — and shaped the broader tech landscape — could still matter.
Especially post-FTX, it feels important to stay aware of where our money comes from. Not out of guilt or purity — but because if you don’t occasionally check your blind spot, you might cause damage.
2. Ongoing risk from the same culture
Meta is now a major player in the frontier AI race — aggressively releasing open-weight models with seemingly limited concern for cybersecurity, governance, or global risk.
Some of the same dynamics described in the book — greed, recklessness, detachment — could well still be at play. And it would not be entirely surprising if that culture is, to some extent, being replicated across other labs and institutions involved in frontier AI.
3. Wynn-Williams is now focused on AI governance (e.g. risk of nuclear war)
In the final chapters, Wynn-Williams pivots toward global catastrophic risks: AI, great power conflict, and nuclear war.
Her framing is sober, high-context, and uncannily aligned with longtermist priorities. She seems to combine rare access (including relationships with heads of state), strategic clarity, and a grounded moral compass — the kind of person who can get in the room and speak truth to power. People recruiting for senior AI policy roles might want to reach out to her if they have not already.
I’m still not sure what the exact takeaway is. I just have a strong hunch this book matters more than I can currently articulate — and that Wynn-Williams herself may be an unusually valuable ally, mentor, or collaborator for those working on x-risk policy or institutional outreach.
If you’ve read it — or end up reading it — I’d be curious what it sparks for you. It works fantastically as an audiobook: a real page-turner with lots of wit and vivid descriptions.
It’s not clear that EA funding relies on Facebook/Meta much anymore. The original tweet is deleted, and this post is three years old, but Holden wrote of Cari and Dustin’s wealth:
I also note that META stock is not as large a part of their portfolio as some seem to assume
You could argue Facebook/Meta is what made Dustin wealthy originally, but probably not correct to say that EA funding “deeply relies” on Meta today.
Bloomberg’s valuation of Moskovitz’s fortune recently dropped by ~60% (from $30B to $11B), as his ownership of Meta was not significant enough to show up in their filings. (But Forbes’ estimate didn’t change much, at $19B.)
Note that Moskovitz’s involvement in Asana probably influences his perspective on AI: https://asana.com/inside-asana/ai-transforming-work
For us, there was only one answer. We quickly assembled a dedicated AI team with a clear mandate: find innovative ways to weave the frontier models into the very fabric of Asana in a safe and reliable way. We collaborated closely with leading AI pioneers like Anthropic and OpenAI to accelerate our efforts and stay at the forefront. We launched Asana AI and brought to market powerful generative AI features like smart status updates, smart summaries, and smart chat. The process of taking these first steps unlocked our learning and led us to countless new ideas. We knew this was just the beginning.
...
...We could see the power and potential of integrating AI into Asana’s core collaborative features. With our Work Graph data model providing a perfect foundation for AI to understand the complex connections between goals, portfolios, projects, and tasks, we realized AI could work alongside human teammates in a natural way...
...
...We invite you to be the first to know about the latest developments in AI-powered collaborative work here...
I read the book a while back and I enjoyed it. It was kind of fun to get some juicy details about bad things inside Facebook. My main takeaway was something along the lines of “a fish rots from the head.” Leaders of an organization set priorities, direction, and culture (to a great extent), and this book served as a sort of case study of leadership with a fairly narrow focus. Poor social skills and poor common sense, entitlement, and the general expectation of getting everything you want all stood out to me. The levels of sycophancy and self-interest were a bit surprising, but not terribly so.
Trying to apply ideas from this book to an EA context involves a bit of contortion, but on the whole I’m not too concerned. The culture and values of EA tend to have a very different focus than Facebook did/does, and the leaders of its organizations often[1] have better common sense than what was displayed in Careless People. I would find it hard to imagine most senior leaders at EA orgs throwing fits over a McDonald’s meal or aggressively pushing the promotion of their personal projects using company resources. If anything, the book left me thinking that EA orgs will probably avoid many of the issues described in Careless People due to their focus on ethics and morality.
This is certainly not always true, and the level of common sense or general knowledge is sometimes noticeably lower than I would like. I’ve seen many people with very little or very narrow life experience ask about or propose things that I found silly or poorly considered. But my impression is that this is an eye-catching minority, and it is much more common for organizations to have leadership teams that are more mature.