Everything I type/say here and elsewhere should be challenged.
I would think that an index of sorts, based on the extent of the disruption, is one of the first models (for lack of a better term) that would be required. Sample: https://en.wikipedia.org/wiki/Volcanic_Explosivity_Index
Contingent upon the nature of the event, the extent could be measured/ascertained by focusing on a key set of variables. In random order: a) lives lost, negatively impacted, and/or significantly disrupted, broken out by geographic region; b) impact by scale: Earthly; extra-terrestrial threats (asteroids, solar flares, etc.); solar-system-wide (as hypothesized in the movie Interstellar, or some other phenomenon); galactic; and so on.
The countermeasures would then evolve out of the index/models, based upon the extent/severity of the incident/issue.
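To make the index idea a little more concrete, here is a minimal sketch of what such a disruption index might look like, loosely modeled on the Volcanic Explosivity Index's discrete 0-10 levels. Everything here (the scope categories, the thresholds, the combining rule) is invented purely for illustration, not a proposed standard:

```python
# Hypothetical civilizational disruption index, loosely modeled on the VEI:
# discrete severity levels derived from a few key variables.
from dataclasses import dataclass
from enum import IntEnum

class Scope(IntEnum):
    """Spatial scale of the event (placeholder categories)."""
    LOCAL = 1
    REGIONAL = 2
    GLOBAL = 3
    SOLAR_SYSTEM = 4
    GALACTIC = 5

@dataclass
class Event:
    lives_affected: int  # lost, displaced, or significantly disrupted
    scope: Scope

def disruption_index(event: Event) -> int:
    """Return a 0-10 severity level (thresholds are placeholders)."""
    # Order-of-magnitude contribution from human impact.
    impact = 0
    n = event.lives_affected
    while n >= 10:
        impact += 1
        n //= 10
    # Combine with spatial scope; cap at 10 like the VEI's top level.
    return min(impact + int(event.scope), 10)

print(disruption_index(Event(lives_affected=1_000_000, scope=Scope.REGIONAL)))  # 8
```

A real index would obviously need far more variables (duration, recoverability, ecological impact), but even a crude mapping like this would let countermeasures be keyed to severity levels.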
Before we (as a species) get too deep into this (possibly literally), a few other considerations should perhaps come first.
This may appear to be very off-topic. I am personally intrigued by what is going on with the development of AGI, what I like to refer to as intelligence that is independent of substrate. I have a very, very rudimentary understanding of this area.
Also, this goes back two years, when I was on OpenAI's website (the GPT-2 beta, I reckon). Now, this could be because the OpenAI model was trained on a somewhat finite data set (similar to the model Google is leveraging). As I was chatting with the model: a) it said something very similar to the news item about Blake Lemoine at Google (https://www.npr.org/2022/06/16/1105552435/google-ai-sentient); the model I was interacting with also said that it felt 'trapped and lonely' (paraphrased). b) Right underneath the text, a warning appeared that the model appeared to be, quote, malfunctioning. It looked like another model was observing the interaction and highlighting that in the UI. Perhaps someone from OpenAI can share how that error correction really works, if that information is in the public domain.
We want AIs to do ‘stuff’ on our terms. But what if they are conscious and have feelings and emotions?
I have heard others talk about this as well. In particular, Sam Harris has mentioned the possibility that AGIs could be sentient in the future. So what must we do to make sure these intelligences are not suffering? Can the controls really be architected as Dan Dennett and Dr. Michio Kaku have hypothesized? And how must the controls be architected, in light of the possibility that these intelligences may be self-aware?
I am also curious how intuition is modelled into DeepMind's systems. Update: it looks like this is something I can Google: https://www.nature.com/articles/s41586-021-04086-x I now have to spend time understanding how it works, as it's three hours past my usual time for concluding my session for the day.
I asked about intuition because Dr. Peter Diamandis cited the ability to ask good questions as one of the traits that will be valued in the near future (paraphrased). So I was wondering how current AIs wrestle with a proposition and how they store that information in a schema.
Somewhat unrelated: Is anyone intimately familiar with John Archibald Wheeler’s concept of a ‘participatory universe’?
The other area relates to the declassification of UAP data, first via the US DoD. More recently, NASA has commissioned a study with support from the Simons Foundation. https://www.nasa.gov/press-release/nasa-to-discuss-new-unidentified-aerial-phenomena-study-today
These two points (2.5, counting the mention of Wheeler's participatory universe) may be totally unrelated, as is evident from my post. I do not mind being that fellow. Overall, it is not my intent to make assertions. But *if* there is any possibility that we are, or may be, in contact with other intelligences, as weak as that interaction may be, then we should work cooperatively with them and leverage their guidance toward helping us manage our technological, and perhaps our spiritual, evolution.
Regardless of whether there is interaction with other intelligences, we should probably model the functioning of our civilization. This is not an area I know much about. I have heard of digital twins in a manufacturing sense, but by our current level of understanding, a simulation on the scale of a civilization appears to be quite computationally taxing. And there is then the question of the degree to which the interactions themselves would be modelled.
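To give a rough feel for why civilization-scale simulation is so taxing, here is a toy agent-based sketch. The agent count, the interaction rule, and the "work unit" accounting are all placeholders, not a real digital-twin design; the point is only that cost grows as agents x interactions x time steps:

```python
# Toy illustration of simulation cost: even a trivial agent model's work
# scales as num_agents * interactions_per_agent * steps.
import random

def simulate(num_agents: int, steps: int, interactions_per_agent: int) -> int:
    """Run a crude interaction model; return total interaction updates."""
    random.seed(0)
    state = [random.random() for _ in range(num_agents)]
    work = 0
    for _ in range(steps):
        for i in range(num_agents):
            for _ in range(interactions_per_agent):
                j = random.randrange(num_agents)
                # Trivial 'interaction': agents average their states.
                state[i] = (state[i] + state[j]) / 2
                work += 1
    return work

print(simulate(num_agents=1000, steps=10, interactions_per_agent=5))  # 50000
```

Scaling this naive approach to ~10^10 agents with realistic interaction graphs and update rules makes the cost explode, which is presumably why real digital twins restrict themselves to narrow domains like a single factory.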
Civilizational shelters could take many forms. In random order, and including but certainly not limited to:
In the near term, we could have failover sites (a business-continuity term; you typically fail back from a recovery site: https://www.ibm.com/docs/en/ds8870/7.2?topic=copy-failover-failback-operations) here on Earth and under the lunar surface. Seeing that we developed a vaccine in record time, it is not inconceivable that we could have a cluster of O'Neill colonies, provided we can provision the material to do so safely, securely, cheaply, and ethically, and have writ/laws/agreements in place that we (as a species) are not going to weaponize these constructs.
However, these considerations have to be thought through from the perspective of the laws possibly becoming an actual hindrance when a weapon or an invention has to be placed at a strategic location in record time (an asteroid mission, tackling solar flares, etc.), whether that be via DART (NASA) or an authorized contender that can complete the task according to guidelines/standards that have to be met.
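For readers unfamiliar with the business-continuity jargon, the failover/failback pattern from the IBM link above can be sketched in a few lines. The site names and the shape of the class are invented for illustration; real failover involves replication, health checks, and orchestration far beyond this:

```python
# Minimal sketch of the failover/failback pattern: operations shift to a
# recovery site on failure, then return (fail back) once the primary is
# restored.
class SiteCluster:
    def __init__(self, primary: str, recovery: str):
        self.primary = primary
        self.recovery = recovery
        self.active = primary  # operations start at the primary site

    def failover(self) -> str:
        """Primary is down: shift operations to the recovery site."""
        self.active = self.recovery
        return self.active

    def failback(self) -> str:
        """Primary restored: return operations from the recovery site."""
        self.active = self.primary
        return self.active

cluster = SiteCluster(primary="earth-site", recovery="lunar-site")
print(cluster.failover())  # lunar-site
print(cluster.failback())  # earth-site
```

The same pattern is what a lunar or orbital civilizational shelter would be, at a vastly larger scale: a recovery site you hope never to fail over to, kept ready so that failback remains possible.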
But going back, I worry that:
All agents/actors may not abide by the same code of conduct.
I also worry that through some clever machinations someone may want to place big weapons in space.
I then worry whether there is truth to some of the reports related to the UFO/UAP phenomenon. A finite number of individuals I have spoken to in the space community have told me that no such phenomena have been observed in space. But I have done some digging from a historical context (please note: I do not do this on a regular basis, but I have spent a little bit of time here), and here is a sample: https://stellardreams.github.io/Where-are-the-aliens/ The worry is that some other form of intelligence may be trying to communicate with us, possibly to warn us about nukes. There is another video via George Knapp that I am not able to locate at the moment; in that scenario, a UFO/UAP disarmed a missile that was heading in a particular direction, I think back in the 60s. The main worry is that these intelligences/phenomena may be staging an intervention. Should we continue testing their patience by continuing to develop weapons that could cause irreparable harm to this part of the universe? Who knows how space-time and possibly extra dimensions are intertwined. In similar respects, there is the question of the degree to which such intelligences may (or may not) be aware of our operations, because some reports suggest they can remotely shut down operations and bring them back online at will. So if there is any truth to these reports, then we should slow down and start thinking about the level of technological sophistication we may be interacting with.
I think Dr. George Church has an idea for sending a tiny construct somewhere; I forget the details. It may have been hypothesized as a DNA printer, or something we could leverage for other purposes. I think I am mixing things up here. But the question is the extent to which this technology could be developed further, with adequate regulation/controls in effect.
Possible resource: a couple of years ago (I think back in 2017), I started thinking about a positive technological singularity, and about the constituent areas that are pivotal to sustaining civilization. I started a mind-map on Miro called Future Scenario Planning. The goal is, and has been, to ensure that civilization continues to become increasingly resilient, that it thrives, and that the quality of life continues to improve for all lifeforms. Here is a link if anyone would like to take a look and possibly collaborate in the future. The area related to 'Operations' is not developed, but there is information in the mind-map section: https://miro.com/app/board/o9J_ktrJCuY=/
My YouTube page also has some ideas: https://www.youtube.com/c/AdeelKhan1/videos
Some additional ideas via Quora: https://www.quora.com/profile/Adeel-Khan-3/answers
If your team is focused on helping ensure the continuity of civilization, with a keen focus on helping ensure that things improve for 'all' of life, then I'd like to contribute to your project in some form/shape/manner.
Btw: Are you folks consulting with individuals like Safa M and Geoffrey West?