if this grant made even a 0.001 percent contribution to speeding up that race, which seems plausible, then the grant could still be strongly net negative.
Suppose the grant made the race 0.001% faster overall, but made OpenAI 5% more focused on alignment. That seems like an amazingly good trade to me.
This is quite sensitive to the exact quantitative details, and I think the speed-up is likely way, way more than 0.001%.
I’m not sure what trade-off I would take; it depends how much difference a “focus on alignment” within a capabilities-focused org is likely to make to improving safety. On this I would defer to people who are far more enmeshed in this ecosystem.
Instinctively I would put maybe a 0.1 percent speed-up as a bigger net harm than a 5 percent focus on safety would be net good, but I am so uncertain that I could even reverse that with a decent counterargument lol.
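To show how sensitive this is to the weights you pick, here is a minimal back-of-envelope sketch. The harm and benefit weights below are made-up assumptions for illustration, not numbers anyone here has estimated:

```python
# Toy model: net value of a grant = good from extra alignment focus
# minus harm from speeding up the race. All weights are illustrative.

def net_effect(speedup_pct, safety_focus_pct, harm_per_speedup_pct, good_per_safety_pct):
    """Net effect of a grant given assumed per-percentage-point weights."""
    return safety_focus_pct * good_per_safety_pct - speedup_pct * harm_per_speedup_pct

# If a 1% speed-up is weighted as 100x worse than a 1% gain in alignment
# focus is good, a 0.1% speed-up outweighs a 5% safety focus:
print(net_effect(0.1, 5, harm_per_speedup_pct=100, good_per_safety_pct=1))  # -5.0 -> net negative

# Halve the harm weight and the sign flips:
print(net_effect(0.1, 5, harm_per_speedup_pct=40, good_per_safety_pct=1))   # 1.0 -> net positive
```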
I really like this line of argument, nice one.