Some thoughts on the risk paper:
I actually downrate 1 as a risk, because I have a different view of what genetic engineering is for. Yes, it will be used first for health, but ultimately it will be used for enhancement, for good or ill. And my stance is EV maximization, not the Do No Harm principle of doctors and bioethicists, which leads to some different views on citizen science.
I have an ambivalent relationship with the inequality-reducing effects. Long-term, I view inequality as neither intrinsically good nor bad. Assuming any significant degree of freedom, genetic enhancement is going to lead to massive inequality in the future, which is something I don’t care much about either way.
I disagree, primarily because my goal isn’t to restore nature, but to have animals and people.
I actually think that the risks here are real, but not enough to outweigh the positives for governance.
While I have presented an admittedly biased case here, I do suspect there’s a risk of disaster that you don’t mention, which makes me worried about democratizing gene editing:
The Global Catastrophic Risk/Existential Risk potential worries me; not to the extent that AI does, but it is still very worrying. (In fact, had AI not been raced to, I’d almost certainly place this as the top worrying technology.) The problem is that any serious genetic engineering capability also implies easy-to-make bioweapons, and unless there are ways to control that, or our population is in space, this could present a massive risk to civilization.
Hi Sharmake,
Thanks for the collaboration. I have some discussion points and questions in response.
What is EV maximization? Also, I think you are correct in the long term, but in the short term the technology really isn’t that developed or efficient yet, and there seems to be mounting pressure to progress at a rate that isn’t in sync with regulation or the technology itself.
I understand where you’re coming from; however, I think inequality can be considered inherently chaotic, since it usually produces societal unrest. Human progress slows under conditions of unrest, so reducing inequality matters for global peace and advancement.
What statement are you disagreeing with?
What positives do you think using gene editing in governance could bring?
I entirely agree about the risk of genetically-edited bioweapons; the only reason I didn’t mention it is that this is a paper for Open Philanthropy Cause Area suggestions, and OP already has a “bioweapons/pandemic preparedness” cause area. I think a lot of the solutions would be the same as for “natural” pandemics.
I’ll respond here; basically, my points are the following:
Expected Value maximization basically states that if you want the maximum reward from something, you have to be willing to accept some risk in exchange for massive rewards. Heuristics that can lead you to the goal are:
Unproven citizen science in genetic engineering is more valuable than you think, though it can be tricky to know the distribution of outcomes from genetic edits. If they follow a normal distribution, then safety is favored; if they’re power-law distributed, then risk-taking is more favorable.
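The normal-versus-power-law heuristic can be made concrete with a toy Monte Carlo sketch. Every distribution, parameter, and cap below is invented purely for illustration; nothing here is an empirical claim about real genetic edits.

```python
import random

random.seed(0)

def ev(draw, n=100_000):
    """Monte Carlo estimate of the mean payoff of a strategy."""
    return sum(draw() for _ in range(n)) / n

# Toy payoff models (all parameters invented for illustration):
# "normal" edits cluster near a typical benefit, while "power-law"
# edits are usually modest but occasionally enormous.
normal = lambda: random.gauss(1.0, 0.5)
power_law = lambda: random.paretovariate(1.2)  # true mean = 1.2/0.2 = 6

# A "safe" policy that caps the upside, e.g. by forbidding ambitious edits.
cap = lambda draw: min(draw(), 3.0)

ev_normal = ev(normal)
ev_normal_capped = ev(lambda: cap(normal))
ev_power = ev(power_law)
ev_power_capped = ev(lambda: cap(power_law))

print(f"normal:    uncapped {ev_normal:.2f}, capped {ev_normal_capped:.2f}")
print(f"power-law: uncapped {ev_power:.2f}, capped {ev_power_capped:.2f}")
```

Under the normal model the cap costs almost nothing, because extreme outcomes barely contribute to the mean; under the power-law model it forfeits most of the expected value, since rare huge wins dominate it. That is the sense in which the outcome distribution decides whether safety or risk-taking maximizes EV.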
But let’s go on to my next point:
I agree with you on reflection here.
This is admittedly orthogonal to my concerns, but my biggest disagreement is with the ecologist’s statement that “Ecologists are likely to be a demanding audience: a resurrected mammoth counts as mammoth only if it looks like a mammoth and consumes, defecates, tramples and migrates like a mammoth.”
Unfortunately, I don’t have one ready right now.
Here’s my response to you.