Hi Lizka—really enjoyed the article. AI development often seems to be discussed largely through the paradigm of ‘do we speed up to achieve radical transformation, or do we slow down to reduce risk’. I hadn’t thought in depth before about the idea of deliberately speeding up certain components while slowing others, to better manage the transition.
One thought as you go into your more detailed analyses. While you view an epistemics-first transformation as preferable, you nod to the risk of it being leveraged to mislead rather than enlighten the world. Another risk that sprang to mind is that an epistemics-first transformation becomes an immensely powerful tool for those with first access (and it seems fairly reasonable to imagine that any major transformation would be accessed by government before being rolled out to broader society). If a government with authoritarian tendencies had first access to epistemic architecture that enabled superior strategic insight, a substantial risk pathway is that it could be used to inform permanent power capture and authoritarian lock-in (even without being leveraged to misinform citizens or the public). The technology and its impact could, in theory, never become public.
I’m also worried about an “epistemics” transformation going poorly, and agree that how it goes isn’t just a question of getting the right ~”application shape” — something like differential access/adoption[1] matters here, too.
@Owen Cotton-Barratt, @Oliver Sourbut, @rosehadshar and I have been thinking a bit about these kinds of questions, but not as much as I’d like (there’s just not enough time). So I’d love to see more serious work on things like “what might it look like for our society to end up with much better/worse epistemic infrastructure (and how might we get there)?” and “how can we make sure AI doesn’t end up massively harming our collective ability to make sense of the world & coordinate (or empowering bad actors in various ways, etc.)?”
Basically +1 here. I guess some relevant considerations are the extent to which a tool can act as an antidote to its own (or related) misuse—and under what conditions of effort, attention, compute, etc. If that can be arranged, then ‘simply’ making sure that access is somewhat distributed helps. On the other hand, it’s conceivable that compute or structural advantages could make misuse of a given tech harder to block, in which case we’d want to know that (without, perhaps, broadcasting it indiscriminately) and develop responses. Plausibly those dynamics could change nonlinearly as epistemic/coordination tech of other kinds is introduced at different times.
In theory, it’s often cheaper and easier to verify the properties of a proposal (‘does it concentrate power?’) than to generate one satisfying given properties, which gives an advantage to a defender if proposals and activity are mostly visible. But subtlety and obfuscation and misdirection can mean that knowing what properties to check for is itself a difficult task, tilting the other way.
Likewise, narrowly facilitating coordination might produce novel collusion with substantial negative externalities on outsiders. But then ex hypothesi those outsiders have an outsized incentive to block that collusion, if only they can foresee it and coordinate in turn.
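The verification-vs-generation point above is the same asymmetry that makes NP-style problems cheap to check but hard to solve. A toy Python sketch (my own illustration, not from the original discussion) using subset-sum as a stand-in for ‘does this proposal satisfy the property?’:

```python
from itertools import combinations

def verify(numbers, subset, target):
    # Cheap for the defender: checking a concrete proposal against the
    # stated property is linear in the size of the proposal.
    return set(subset) <= set(numbers) and sum(subset) == target

def generate(numbers, target):
    # Expensive for the proposer: finding a subset with the property may
    # require searching all 2^n candidate subsets.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 9, 8, 4, 5, 7]
proposal = generate(nums, 15)        # exponential-time search
print(proposal, verify(nums, proposal, 15))
```

The caveat in the comment maps onto the code: the check is only cheap once you know which property (`target`) to test for; if the proposer can obscure which property matters, the defender’s advantage erodes.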
[1] This comment thread on an older post touched on some related topics, IIRC.
It’s confusing.