open problems in the law of mad science
The law of mad science (LOMS) states that the minimum IQ needed to destroy the world drops by x points every y years.[1]
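To fix notation, here is a minimal formalization of my own, assuming (as the open problems below will question) that the drop is linear and that both parameters are constant:

$$\mathrm{IQ}_{\min}(t) = \mathrm{IQ}_{\min}(t_0) - x \cdot \frac{t - t_0}{y}$$

where $\mathrm{IQ}_{\min}(t)$ is the minimum IQ sufficient to destroy the world at time $t$, $x$ is the step size, and $y$ is the dropping time.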
My sense, from talking to a friend in biorisk and from honing my views on algorithms and the GPU market, is that this worldview is wise to heed. It's sort of like the vulnerable world hypothesis (Bostrom 2019), but a bit stronger: VWH just asks "what if nukes but cost a dollar and fit in your pocket?", whereas the LOMS goes all the way to "the price and size of nukes are in fact dropping".
I also think that the LOMS is vague and imprecise.
I'm basically confused about a few obvious considerations that arise when you begin to take the LOMS seriously:
1. Are x (step size) and y (dropping time) fixed from empiricism to extinction? That seems about as plausible as P = NP: obviously Alhazen (or an xrisk community contemporaneous with Alhazen) didn't face the same step size and dropping time that Shannon (or an xrisk community contemporaneous with Shannon) did, but this needs to be argued rather than assumed.
2. With or without a proof of 1's falseness, what are step size and dropping time a function of? What are changes in step size and dropping time a function of?
3. Assuming my intuition is right that the answer to 2 is mostly economic growth, what is a moral way to reason about the tradeoff between lifting people out of poverty and making the LOMS worse? Does the LOMS invite the xrisk community to join the degrowth movement? (A toy model of questions 1-3 is sketched after this list.)
4. Is the LOMS sensitive to population size, or to the relative consumption of different segments of the population?
5. For fun, can you write a coherent sci-fi story about a civilization that somehow abolished the LOMS? (This seems to be what Ord's gesture at "existential security" entails.) How about one that merely reversed its direction, or merely mitigated it?
6. My first guess was that empiricism is the minimal civilizational capability a planet-lifeform pair has to acquire before the LOMS kicks in. Is this true? Does it in fact kick in earlier, or later? Is a statement of the form "the region between an industrial revolution and an information or atomic age is the Pareto frontier of the prosperity/security tradeoff" on the table in any way?
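To make questions 1-3 concrete, here is a toy model comparing two regimes: one where x and y are constants, and one where the step size compounds with economic growth. Every number in it (the starting threshold, the step size, the growth rate) is an illustrative assumption of mine, not an estimate of the actual parameters:

```python
# Toy model of the law of mad science (LOMS). Illustrative only:
# every parameter value below is an assumption, not an estimate.

def iq_min_fixed(t, iq0=180.0, x=1.0, y=1.5):
    """Minimum IQ sufficient to destroy the world at year t,
    with a fixed step size x (IQ points) and dropping time y (years)."""
    return iq0 - x * (t / y)

def iq_min_growth_coupled(t, iq0=180.0, x0=1.0, y=1.5, g=0.02):
    """Variant for question 3: the step size compounds with economic
    growth, x(k) = x0 * (1 + g)**k in year k, so the threshold
    falls a little faster every year."""
    total_drop = sum(x0 * (1 + g) ** k / y for k in range(int(t)))
    return iq0 - total_drop

if __name__ == "__main__":
    for t in (0, 25, 50, 75, 100):
        print(f"year {t:3d}: fixed {iq_min_fixed(t):6.1f}, "
              f"growth-coupled {iq_min_growth_coupled(t):6.1f}")
```

Under the constant regime the threshold declines linearly; under the growth-coupled regime the yearly drops sum geometrically, so even a small g dominates over a century (the toy threshold goes negative well before year 100). The gap between those two curves is exactly what question 1 asks us to pin down.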
While I'm not 100% sure there will be actionable insights downstream of these open problems, they're plausibly worth researching.
[1] As far as I know, this is the original attribution.