I don’t think public discourse around this is a good idea. Same as the reports on nuclear weapons trying to demonstrate that a nuclear exchange ‘wouldn’t be that bad’ or publicly wondering in detailed ways about why copycat attacks of certain kinds aren’t more common.
I think this is a great point in general, but I think it's alright in this case.
The part of the series that I’m about to go into (as of today) contains a panoply of possible explanations for the apparent paradox, none of which are “it wouldn’t actually be that bad” or “it just hasn’t occurred to anyone.”
And this series is pretty mild compared to the large body of "are we prepared against X type of bioweapon attack" literature and analyses out there (for which the answer is usually "no, we are not prepared").
The series is also more theoretical than concrete, which seems like it should reduce the risk factor.
Does this address your concern? If you still have reservations, or different ones, I'm interested to hear them.
Thanks for the substantive engagement even though I was pretty terse on justification. I'm less concerned when I see engagement with differential infohazard analysis (i.e. recognizing that some parts of this might have problems and some might not). I still feel a sense of caution about EA getting involved in this area, given its poor track record of taking into account existing best practices and Chesterton's fences.
+1 for comparing it to existing works in the area to help reason about this.