As MakoYass pointed out, this sounds a lot like you are suggesting halting all acquisition of knowledge. While that would handily stop human-created existential risk, I do not think it is possible to implement (as you note yourself, though you don't go into how to address this). It's sadly another example of a solution like "develop a global culture of coexistence": it would work, but it is not practical.
Your post made me think, and I thoroughly applaud the audacity to pursue "unthinkable" directions of inquiry. It made me reconsider my preconceptions! But I think there are some simple reasons this kind of solution isn't more prevalent, with the notable exception of scrupulous work to avoid information hazards. (Did you know EA is already working to prevent information hazards?)
Counterpoints I would like to see you address:
- It's hard to know what we don't know in order to avoid discovering it.
- Information is extremely useful.
- Information can eliminate risks as well as create them.
- Historically we have suppressed knowledge in the mistaken belief that it would harm us, and this caused great damage.
- Other people will discover it anyway, so white-hatting is preferable.
If I had to guess, this was downvoted for wordiness. Your points are buried instead of stated at the top, requiring readers to puzzle through most of your post to reach them.
Another EA thing: your headers are hooks. That is normally good for drawing a reader in, but the EA Forum prefers each paragraph to open with its summary or conclusion so it can be assessed quickly: if someone already agrees with your point they can skip ahead to the next paragraph, and if they disagree they can read on and be convinced by your evidence.
For the first eight sections it sounds like you are arguing that (1) great powers create great risks, (2) information creates great powers, and (3) therefore we should halt additional information acquisition to stop creating great risks. It's only in your ninth section that you acknowledge this is impossible, and even then you don't really provide any direction. What do we do given this knowledge? What does this change?
Your conclusions sit right between practical folks (who already know knowledge is sometimes bad for society) and theoreticians (who think knowledge is always good, and that the wielding is the problem), so you might have gotten downvoted by both. Your position also isn't favorable to the Silicon Valley pro-progress culture of "move fast and break things" in every area except existential risk. On the other hand, the average person is too suspicious of scientific progress in my opinion, so I want to encourage a pro-science view generally. I'm pretty far on the side of advancing as fast as we can.
Okay, wordiness. Compare your version:

"We don't have the option of turning our backs on knowledge. The solution to obesity is obviously not to stop eating. Instead we must develop a more sophisticated relationship with food, eating only what is good for our bodies. The 'more is better' relationship with knowledge which has served us so well for so long must now make way for a more sophisticated relationship involving complicated cost/benefit calculations. This will sometimes involve saying no to some new knowledge."

With my pared-down version:

"Nuclear weapons prove that the simplistic 'more is better' requires updating to meet the existential threats presented by a revolutionary new era."
Obviously I’m hardly one to speak. Just look at how long my comment has gotten. But I think cutting to the meat of your posts will go a long way to making them stronger. I hope this has been helpful feedback.
Unfortunately I have spent too long on this and must get back to reading about biodiversity and ecosystems. :)