I won’t soon forget the “there is no evidence that masks work unless you’re a healthcare worker” (roughly approximated) type of statement. It can often be difficult to distinguish between dishonesty and incompetence; however, the guidance on masks was too unreasonable to be explained by ignorance alone, and it demonstrated the folly of carelessly deferring to experts (i.e., even if they’re more intelligent or educated, they may be lying to you).
Ty
I wonder if the increased demand would raise prices and leave other people unable to afford as much food? Perhaps the greater good for the population as a whole would be to focus on increasing the supply of food, if possible.
An AGI war machine differs from nuclear weapons in a few important ways: a) it risks blowback (indeed, existential blowback), somewhat like biological WMDs (this should provide common ground for transparency and regulation); b) an AGI weapon is vastly more difficult to construct, which buys, and should continue to buy, time to develop cooperation that wasn’t available for the nuclear threat; and c) a MAD scenario may not occur, as one AGI may be able to neutralize other AGIs without incurring a grave cost (the lack of the relative short-term security MAD provides may incentivize cooperation). One could argue that the AGI threat may prove to be more like the trajectory global warming mitigation is on than like nuclear weapons development, in the sense that decades of tireless advocacy will lead the way towards increasing public awareness, then prioritization at the highest level, then an uncommonly high degree of multinational cooperation. All of which is to say, I suspect nuclear weapons development may not be the most instructive of comparisons.
“Finally, I want to return to the character of the Manhattan Project scientists. … Nevertheless, they were convinced by a mistake.”
This isn’t a comprehensive survey, and it’s possible that most of them, for what it’s worth, thought it was the intelligent course of action given the information available to them at the time, or perhaps even with hindsight. There is also the possibility that Einstein and others were mistaken in thinking they had made a mistake (much as, perhaps, when Einstein removed the cosmological constant from GR). If the US hadn’t taken the lead, a nation such as the USSR might have eventually developed these weapons first and used them in a brutal empire-building campaign. Appeals to authority, I feel, should be made very carefully.
I appreciate your philosophy being written in a manner that does not require decoding.
“I don’t think there is an objective morality.”
- If a person, such as myself, believes that the value we give to the pursuit of happiness and the avoidance of pain is arbitrary (in the sense that we appear to be programmed to give worth to these emotionally attractive ideas for evolutionary survival purposes), then a foundation for objective morality is lost and any selfish or selfless behaviour is ultimately performed to indulge our comfortable delusions.
“I can’t scientifically explain my behavior.[5] I often feel like there are different parts of me fighting each other.[6] Sometimes I feel like a “moral part” of me loses control to another part of me. For example, a fearful part of me could push me to try to please someone.”
- I think we’re ultimately controlled by our emotions. While beliefs do alter emotions, other factors may overpower their influence. For this reason, I suppose our behaviour can, at best, only roughly approximate our belief about what our behaviour ought to be (utilitarian or otherwise).
My view is that for anything reasonably consequential (i.e., potentially worth the time spent investigating), one should at least briefly probe before deferring, because a) virtually everyone lies at least occasionally, and b) popular opinions are often dubious due to the inertia they carry within a group (even a group of experts) when other people defer without investigating (this can mean evidence needs to be overwhelming to shift majority opinion and overcome the self-perpetuating cycle).
It puts the world in a tough spot when a nation like Russia brandishes nukes: we either have to call their bluff and risk nuclear warfare or back off, which, among other undesirable consequences, incentivizes other nations to develop and brandish nukes themselves.
Improvements in technology will almost certainly make developing these weapons gradually easier over time (e.g., continued development of laser isotope enrichment techniques for uranium), and there is always some possibility of a major breakthrough. The goal of a world with zero nukes, acknowledged or hidden away, and with no nation possessing the means to quickly produce them is, I feel, extremely naive, and I doubt diplomats who earn a living lying and being lied to on a daily basis would disagree (if they were being honest, that is). The allure of possessing nukes has perhaps recently increased with the recognition that Ukraine disposed of its nuclear stockpile, which likely permitted it to become the victim of a genocidal empire-building campaign by its neighbour.
I think nations should take a more radical approach and heavily fund technologies conducive to countering nukes (e.g., anti-missile directed-energy weapon (DEW) defences on the ground and possibly in LEO) and to detecting where they might be (e.g., sensor arrays for real-time scanning of coastal waters where submarines might be).
~Ty
The major difference, at least for me, is that with religion you’re expected to believe in an oncoming superpower based on little more than the supposed credibility of those giving testimony, whereas with superintelligent AI we can witness a god being incubated through our labor.
A person does not need a great amount of technical knowledge, or a willingness to defer to experts, to get the gist of the AI threat. A machine that can continually and autonomously improve its general problem-solving ability will eventually render us potentially powerless. AI systems look like they could be those machines, given their extremely rapid development as well as their inherent advantages over humans (e.g., faster information transfer than humans, whose signalling relies on substances like electrolytes and neurotransmitters physically travelling). It will not necessarily take a string of programming Einsteins to develop the necessary code for superintelligence, as an evolutionary trial-and-error approach will eventually accomplish this.
Thanks for your response.
I checked out your website (including your FAQ, where you point out the limits of storing food rather than focusing on the means to resiliently produce it) and I was wondering if you guys thought there might be some merit to strategic supplies of vegetable oil, even if only to help buy several months of time for other operations to ramp up? A 55 gallon barrel of vegetable oil holds roughly 1.7 million calories, is edible for ~2 years, and, in order to prevent waste, could be sold and replaced after several months as it has industrial value (e.g., as biofuel).
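For what it’s worth, here is a minimal back-of-envelope check of that calorie figure, assuming a standard 55 US gallon drum, a density of ~0.92 kg/L, and ~9 kcal per gram of fat (all three figures are my own rough assumptions, not from your materials):

```python
# Back-of-envelope calorie estimate for a 55 gallon barrel of vegetable oil.
# The density and energy figures below are rough assumptions, not measurements.
LITRES_PER_US_GALLON = 3.785
OIL_DENSITY_KG_PER_L = 0.92   # typical vegetable oil density
KCAL_PER_KG = 9_000           # fats carry roughly 9 kcal per gram

def barrel_kcal(gallons: float) -> float:
    """Approximate food calories (kcal) in a barrel of vegetable oil."""
    litres = gallons * LITRES_PER_US_GALLON
    kilograms = litres * OIL_DENSITY_KG_PER_L
    return kilograms * KCAL_PER_KG

total = barrel_kcal(55)
print(f"Total: ~{total:,.0f} kcal")                             # ~1,723,000 kcal
print(f"Person-days at 2,000 kcal/day: ~{total / 2_000:,.0f}")  # ~860 days
```

Under these assumptions one barrel covers on the order of two and a half person-years of pure calories, which is why it seems like a cheap complement to other preparations.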
Air purifiers with UVC are gimmicks. For safety reasons, the UVC light (which can take hours to kill/inactivate almost everything present) has to be emitted inside the machine, which means it acts on pumped air that will be sent through a HEPA filter anyway (and a HEPA filter captures at least 99.97% of particles at the most penetrating particle size of ~0.3 microns).
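To make the exposure-time point concrete, here is a rough dose calculation; the chamber length, air speed, and irradiance are illustrative assumptions of mine, not measurements of any particular unit:

```python
# Rough UV dose delivered to air moving through a purifier's UVC chamber.
# Every figure below is an illustrative assumption, not a measurement.
CHAMBER_LENGTH_CM = 30        # assumed length of the irradiated section
AIR_SPEED_CM_PER_S = 150      # assumed air speed through the unit
IRRADIANCE_MW_PER_CM2 = 2.0   # assumed UVC irradiance inside the chamber

residence_time_s = CHAMBER_LENGTH_CM / AIR_SPEED_CM_PER_S    # 0.2 s in the beam
dose_mj_per_cm2 = IRRADIANCE_MW_PER_CM2 * residence_time_s   # dose = irradiance x time

print(f"Residence time: {residence_time_s:.2f} s")       # 0.20 s
print(f"Delivered dose: {dose_mj_per_cm2:.2f} mJ/cm^2")  # 0.40 mJ/cm^2
```

Under these assumptions the in-flight dose is a fraction of the few-mJ/cm² 90% inactivation (D90) doses commonly reported for airborne pathogens, and far below what hardy spores need, so the HEPA stage ends up doing essentially all of the work.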
Intuitively, UVGI should possess substantial pathogen-destroying potential. This study (1) states, “35% (106/304) of guinea pigs in the control group developed TB infection, and this was reduced to 14% (43/303) by ionizers, and to 9.5% (29/307) by UV lights (both p < 0.0001 compared with the control group).”
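To put those numbers in perspective, the relative reductions follow directly from the quoted counts:

```python
# Infection rates and relative reductions computed from the counts quoted above (1).
control_rate = 106 / 304   # ~34.9% infected
ionizer_rate = 43 / 303    # ~14.2% infected
uv_rate      = 29 / 307    # ~9.4% infected

for name, rate in [("ionizers", ionizer_rate), ("UV lights", uv_rate)]:
    relative_reduction = 1 - rate / control_rate
    print(f"{name}: {rate:.1%} infected (~{relative_reduction:.0%} relative reduction)")
    # ionizers: ~59% relative reduction; UV lights: ~73% relative reduction
```

A roughly 73% relative reduction from upper-room UV alone is hard to dismiss as a gimmick when it is applied to room air rather than inside a purifier.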
Far-UVC (safe for mammals but not for viruses and bacteria) that can economically bathe a room (top and bottom) is the holy grail and probably deserving of at least exploratory funding.
1) Escombe AR, Moore DAJ, Gilman RH, Navincopa M, Ticona E, Mitchell B, et al. Upper-room ultraviolet light and negative air ionization to prevent tuberculosis transmission. PLoS Med. 2009;6:e43 (https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1000043)