Good to see nanotech and APM finally getting some attention from EA experts after 10 (15?) years of neglect!
We at NanoLab (http://nanolabvr.com/) stand firmly behind an aggressive (but, of course, risk-aware) strategy of implementing APM for the benefit of humanity quickly and safely.
I also want to emphasize that the Bottleneck analysis report is top-notch and probably required reading for anyone interested in the topic (unless you already know all the background and who did what when), since it’s up to date and very thorough.
I just want to flag that, for the reasons expressed in the post, it seems probably a bad idea to try to accelerate the implementation of APM at the moment, as opposed to first doing more research and thinking on whether to do so, and then accelerating later if that appears useful.
And I also think it seems bad to “stand firmly behind” any “aggressive strategy” for accelerating powerful emerging technologies; there are many cases where accelerating such technologies benefits the world, but one should probably always explicitly maintain some uncertainty about that and some openness to changing one’s mind.
I’d be open to debating this further, but basically I just agree with what’s stated in the post, and I’m not sure which specific points you disagree with or would add. (It seems clear that you see the risks as lower and/or the benefits as higher, but I’m not sure why.) Perhaps if I hear what you disagree with or would add, I can see whether that changes my views or whether I then have useful counterpoints tailored to yours.
(Though it’s also plausible I won’t have time for such a debate, and in any case some/many other people know more about this topic than me.)
Just noting that the Bottleneck analysis report is written in the first person, but I can’t see a name attached to it anywhere! Who is the author?
Adam Marblestone
😍
Though I am disappointed by the author’s overall thrust: nanotech may be important, therefore longtermist EAs should not work on it, should not talk about it, and should only study it in secret, getting paid through some EA foundation to just sit and “strategize” about its risks. Improving the lives of billions of people with APM/nanotech is not valuable, saving billions of lives is not valuable, increasing man’s power over matter is not valuable, preventing civilizational collapse due to resource depletion/climate change is not valuable.
I am starting to think that longtermism may indeed be a cognitive cancer that is consuming parts of EA and transhumanism. Let’s hope I am not put on some kill list by well-meaning longtermists for this comment...
I strong-downvoted this comment. Given that, and that others have too (which I endorse), I want to mention that I’m happy to write up why I did so if you want, since I imagine people new-ish to the EA Forum sometimes don’t understand why they’re getting downvoted.
But in brief:
I thought this was a misleading/inaccurate and uncharitable reading of the post
I think that the “kill list” part of your comment feels wildly over-the-top/hyperbolic
Perhaps you meant it light-heartedly or as a joke, but that’s not obvious without hearing your tone
I think it’s just generally not conducive to good discussion to imply, in any way, that one’s conversational partners may put one on a kill list; that’s not a good way to start a productive debate where both sides are open to learning from each other and to changing their views
Less importantly, I also disagree with your view that it’s a good move at the moment to try to speed up advanced nanotechnology development.
But if you had just stated that you hold that view, I’d probably not have downvoted and would instead just have left a comment disagreeing.
And that’d certainly be the case if you had stated that view while also indicating an openness to having it changed (as I believe the post did), explaining why you hold it in a way intended to inform rather than persuade, and ideally also attempting to summarise your understanding of the post’s argument or where you disagree with it. I think that’s a much better way to have a productive discussion.
For that reason, I didn’t downvote the parent comment, even though my current guess is that the strategy you’re endorsing there is a bad one from the perspective of safeguarding and improving the world & future.
As a moderator, I agree with Michael. The comment Michael’s replying to goes against Forum norms.