Reflecting on the piece as a whole, I think there are some very legitimate concerns being brought up, and I think that Cremer mostly comes across well (as she has consistently imo, regardless of whether you agree or disagree with her specific ideas for reform) - with the exception of a few potshots laced into the piece[1]. I think it would be a shame if she were to completely disavow engaging with the community, so I hope that where there is disagreement we can be constructive, and where there is agreement we can actually act rather than just talk about it.
Some specific points from the article:
She does not think that longtermism or utilitarianism was the prime driver behind SBF's actions, so please update towards her not hating longtermism. Where she is critical of it, it's because it's difficult to have good epistemic feedback loops for deciding whether our ideas and actions actually are doing the most good (or even just doing good better):
"Futurism gives rationalization air to breathe because it decouples arguments from verification."[2]
Another underlying theme is being wary of the risks of optimisation, which shouldn't be too controversial. It reminds me of Galef's "Straw Vulcan" - relentlessly optimising towards your current idea of The Good doesn't seem like a plausibly optimal strategy to me. It also seems very consistent with the "Moral Uncertainty" approach.
"a small error between a measure of that which is good to do and that which is actually good to do suddenly makes a big difference fast if you're encouraged to optimize for the proxy. It's the difference between recklessly sprinting or cautiously stepping in the wrong direction. Going slow is a feature, not a bug."
One main thrust of the piece is her concern with the institutional design of the EA space:
"Institutional designs must shepherd safe collective risk-taking and help navigate decision-making under uncertainty..."
In what direction would she like EA to move? In her own words:
"EA should offer itself as the testing ground for real innovation in institutional decision-making."
We have a whole cause area about that! My prior is that it hasn't had as much sunlight as other EA cause areas though.
There are some fairly upsetting quotes from people who have contacted her because they don't feel like they can voice their doubts openly. I wish we could find a way to remove that fear asap.
"It increasingly looks like a weird ideological cartel where, if you don't agree with the power holders, you're wasting your time trying to get anything done."
Summary:
On a second reading, there were a few more potshots than I initially remembered, but I suppose this is a Vox article and not actually a set of reform proposals - something more like that can probably be found in the Democratising Risk paper itself.
But I genuinely think that there's a lot of value here for us to learn from. And I hope that we can operationalise some ways to improve our own community's institutions so that EA at the end of 2023 looks much healthier than it does right now.
In particular, the shot at Cold Takes being "incomprehensible" didn't sit right with me - Holden's blog is a really clear presentation of the idea that misaligned AI can have significant effects on the long-run future, regardless of whether you agree with it or not.
Agree that her description of Holden's thing is uncharitable, though she might be describing the fact that he self-describes his vision of the future as "radically unfamiliar… a future galaxy-wide civilization… seem[ing] too 'wild' to take seriously… we live in a wild time, and should be ready for anything… This thesis has a wacky, sci-fi feel."
(Cremer points to this as an example of an "often-incomprehensible fantasy about the future")
I think this is similar to criticism that Vaden Masrani made of the philosophy underlying longtermism.