Reflecting on the piece as a whole, I think there are some very legitimate concerns being brought up, and I think that Cremer mostly comes across well (as she has consistently imo, regardless of whether you agree or disagree with her specific ideas for reform) - with the exception of a few potshots laced into the piece[1]. I think it would be a shame if she were to completely disavow engaging with the community, so I hope where there is disagreement we can be constructive, and where there is agreement we can actually act rather than just talk about it.
Some specific points from the article:
She does not think that longtermism or utilitarianism was the prime driver behind SBF’s actions, so please update towards her not hating longtermism. Her objection is that it’s difficult to have good epistemic feedback loops for deciding whether our ideas and actions are actually doing the most good (or even just doing good better):
“Futurism gives rationalization air to breathe because it decouples arguments from verification.”[2]
Another underlying theme is wariness about the risks of optimisation, which shouldn’t be too controversial. It reminds me of Galef’s ‘Straw Vulcan’ - relentlessly optimising towards your current idea of The Good doesn’t seem like a plausibly optimal strategy to me. It also seems very consonant with the ‘Moral Uncertainty’ approach.
“a small error between a measure of that which is good to do and that which is actually good to do suddenly makes a big difference fast if you’re encouraged to optimize for the proxy. It’s the difference between recklessly sprinting or cautiously stepping in the wrong direction. Going slow is a feature, not a bug.”
One main thrust of the piece is her concern with the institutional design of the EA space:
“Institutional designs must shepherd safe collective risk-taking and help navigate decision-making under uncertainty...”
In what direction would she like EA to move? In her own words:
“EA should offer itself as the testing ground for real innovation in institutional decision-making.”
We have a whole cause area about that! My prior is that it hasn’t had as much sunlight as other EA cause areas though.
There are some fairly upsetting quotes about people who have contacted her because they don’t feel like they can voice those doubts openly. I wish we could find a way to remove that fear asap.
“It increasingly looks like a weird ideological cartel where, if you don’t agree with the power holders, you’re wasting your time trying to get anything done.”
Summary:
On a second reading, there were a few more potshots than I initially remembered, but I suppose this is a Vox article and not an actual set of reform proposals - something more like that can probably be found in the Democratising Risk article itself.
But I genuinely think that there’s a lot of value here for us to learn from. And I hope that we can operationalise some ways to improve our own community’s institutions, so that the EA community at the end of 2023 looks much healthier than the one right now.
In particular, the shot at Cold Takes being “incomprehensible” didn’t sit right with me—Holden’s blog is a really clear presentation of the idea that misaligned AI can have significant effects on the long-run future, regardless of whether you agree with it or not.
Agree that her description of Holden’s thing is uncharitable, though she might be describing the fact that he self-describes his vision of the future as ‘radically unfamiliar… a future galaxy-wide civilization… seem[ing] too “wild” to take seriously… we live in a wild time, and should be ready for anything… This thesis has a wacky, sci-fi feel.’
(Cremer points to this as an example of an ‘often-incomprehensible fantasy about the future’)
I think this is similar to criticism that Vaden Masrani made of the philosophy underlying longtermism.