My career has focused on deploying AI and data science into large corporations to solve difficult problems in Earth-science-related domains, including energy, agriculture, and mining.
kpurens
>Bostrom says that if everyone could make nuclear weapons in their own home, civilization would be destroyed by default because terrorists, malcontent people and “folk who just want to see what would happen” would blow up most cities.
Yes, and what would the world have to be like to change this?
It’s terrible to think that the reason we are safe is that others are powerless. If EA seeks to maximize human potential, I think it’s really telling that we are confident many people would destroy the world just because they can. And I think focusing on the real well-being of people is a way we can confront this.
Let’s do the thought experiment: what would the world look like where anyone had the power to destroy the world at any time—and chose not to? Where no one made that choice?
What kind of care and support systems would exist? How would we respect each other? How would society be organized?
I think this is a good line of thinking because it helps us understand how the world is vulnerable, and how we can make it less so.

-Kristopher
This is a really good piece of input for predictions of how the supply-demand curve for coding will change in the future.
A 50% reduction in coding time effectively reduces the cost of coding by 50%. Depending on the shape of the supply-demand curve for coding, this could lead to high unemployment, or to a boom for coders that leads to even higher demand.
Note: coding productivity tools developed over the past 40 years have led to ever-increasing demand since so much value is generated :)
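To make that supply-demand point concrete, here is a minimal sketch, assuming a constant-elasticity demand curve and made-up elasticity values (none of this is from the original comment), of how total spending on coding responds when its effective cost halves:

```python
# Minimal sketch: constant-elasticity demand Q = scale * P**(-e).
# Total spend (P * Q) is a rough proxy for paid coder-hours.
def demand(price, elasticity, scale=100.0):
    return scale * price ** (-elasticity)

for elasticity in (0.5, 1.0, 2.0):   # hypothetical elasticities of demand for coding
    p0, p1 = 1.0, 0.5                # effective cost of coding halves
    q0, q1 = demand(p0, elasticity), demand(p1, elasticity)
    spend0, spend1 = p0 * q0, p1 * q1
    print(f"e={elasticity}: quantity x{q1 / q0:.2f}, total spend x{spend1 / spend0:.2f}")
```

Under these assumptions, an elasticity below 1 means total spend shrinks (fewer paid coder-hours), while an elasticity above 1 means cheaper coding induces enough new demand that total spend grows.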
It seems to me that consciousness is a different concept than intelligence, and one that isn’t well understood and communicated because it’s tough for us to differentiate them from inside our little meat-boxes!
We need better definitions of intelligence and consciousness; I’m sure someone is working on it, and so perhaps just finding those people and communicating their findings is an easy way to help?
I 100% agree that these things aren’t obvious—which is a great indicator that we should talk about them more!
I’m referring to the 2014 event, which was a ‘weak’ version of the Turing test; since then, the people who were running the yearly events have lost interest, and claims have emerged that the Turing test is a ‘poor test of intelligence’—highlighting the way that goalposts seem to have shifted.
https://gizmodo.com/why-the-turing-test-is-bullshit-1588051412
Is GPT-4 an AGI?
One thing I have noticed is goalpost shifting on what AGI is—it used to be the Turing test, until that was passed. Then a bunch of other criteria were developed and passed, and now the definition of ‘AGI’ seems to default to what would previously have been called ‘strong AI’.
GPT-4 seems to be able to solve problems it wasn’t trained on, and to reason and argue as well as many professionals, and we are just getting started learning its capabilities.
Of course, it also isn’t a conscious entity—its style of intelligence is strange and foreign to us! Does this mean that goalposts will continue to shift as long as human intelligence differs in any way from the artificial version?
Wow, this is much higher support than I would have ever imagined for the topic. I guess Terminator is pretty convincing as a documentary!
Great post! It is so easy to get focused on the bad that we forget to look toward the path to the good, and I want to see more of this kind of thinking.
One little note about AGI:
“cars have not been able to drive autonomously in big cities”...
I think that autonomous car driving is a very bad metric for AGI because humans are hyper-specialized at the traits that allow it—any organism’s hyper-specialized traits shouldn’t be expected to be easily matched by a ‘general’ intelligence without specialized training!
In order to drive a car, you need to:
1. Understand complex visual information as you are moving through a very complex environment, in wildly varying conditions, and respond almost instantly to changes to keep safe
2. Know the right path to move an object through a complex environment to avoid dangers, infer the intentions of other objects based on their movement, and calculate this incredibly fast
3. Coordinate with other actors on the road in a way that allows harmonious, low-risk movement to meet a common objective
It turns out these are all hard problems—and ones that Homo sapiens was evolutionarily designed to do in order to survive as persistence hunters, working in a group, following prey through forests and savannah, and sharing the proceeds when the gazelle collapsed from exhaustion! Our brain’s circuits are built for these tasks and excel at them, so thoroughly that we don’t even realize how hard driving is! (You know how you are completely exhausted after a long drive? It’s hard!)
It’s easy to not notice how hard something is when your unconscious is designed to do the hard work effortlessly :)
Best,
Kristopher
Really great point about a curious trend!
Human cultural evolution has replaced gene evolution as the main way humans are advancing themselves, and you certainly point at the trend that ties them together.
One reason I didn’t dig into the anthropological record is that it is so fragmented, and I am not an expert in it—there is very little cross-communication between the fields, except in a few sub-disciplines such as taphonomy.
This is a good proposal to have out there, but it needs work on addressing its weaknesses. A couple of examples:
How would this be enforced? Global carbon taxes are a good analogue and have never gotten global traction. This is linked to the cooperation problem between countries: the hardware can just go to an AWS server in a permissive country.
From a technical side, I can break down a large model into sub-components and then ensemble them together. It will be tough to have definitions that avoid these kinds of work-arounds and also don’t affect legitimate use cases.
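To illustrate that work-around, here is a minimal sketch (the per-model parameter cap, layer sizes, and use of PyTorch are all hypothetical illustrations, not part of the proposal): each sub-model stays under a size cap on its own, but the ensemble of their outputs behaves like a single larger model.

```python
# Hypothetical illustration: split a "large model" into sub-components that each
# fall under a per-model size cap, then ensemble their outputs at inference time.
import torch
import torch.nn as nn

PARAM_CAP = 10_000_000  # hypothetical regulatory cap on parameters per model

def make_sub_model(hidden=512):
    # A small network that is individually well under the cap.
    return nn.Sequential(nn.Linear(1024, hidden), nn.ReLU(), nn.Linear(hidden, 10))

sub_models = [make_sub_model() for _ in range(8)]
for m in sub_models:
    assert sum(p.numel() for p in m.parameters()) < PARAM_CAP  # each piece is "legal"

def ensemble_predict(x):
    # Average the sub-models' outputs; collectively they act like one bigger model.
    with torch.no_grad():
        return torch.stack([m(x) for m in sub_models]).mean(dim=0)

logits = ensemble_predict(torch.randn(4, 1024))  # batch of 4 dummy inputs
```

Any size- or compute-based definition then has to decide whether the regulated object is each small model or the ensemble, which is exactly where legitimate uses (ordinary model ensembling) risk getting caught in the net.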
Hello all! Glad to find this wonderful community.
My name is Kristopher Purens, and I am a technologist with a deep love for the planet we inhabit.
My career has focused on deploying AI and data science into large corporations to solve difficult problems in Earth-science-related domains, including energy, agriculture, and mining. I have worked directly in large companies, including in oil exploration and CPG, as well as in the startup environment, developing new ways to use Earth-observation and geospatial data. I am currently scoping out next steps in my career. A friend @Open Philanthropy recommended I check out this community, and I am glad to have done so.
My PhD was in the field of evolutionary paleobiology, where I developed new machine learning methods to measure organisms and their life histories, understand patterns of origination and extinction in the fossil record, and identify species. I am a breadth-focused scientist who finds the similarities between different fields and learns how they can add value to each other.
Finding this community has been pretty magical: I realized that other people have similar lines of thinking and prioritization as I do. When I was 25, I left a stable first career that had low opportunity for impact to pursue something that had an opportunity to make a difference. While working on my PhD, I realized that academia wasn’t going to provide the right path to make the kind of impact I wanted, and started looking for industry jobs focused on leadership development. I initially worked at Shell in oil exploration, looking for ways to deploy machine learning systems to help people have affordable energy. I subsequently worked in sourcing for a major CPG before moving into a startup focused on Earth-observation tools to solve problems. I recently finished my time there and am looking for next steps.
My major focus right now is in technology to speed the discovery of critical metals used for batteries, as I believe this will be a major bottleneck in moving to a low-carbon future and avoiding the worst risks of climate change. I am currently testing my assumptions and verifying this makes sense for me. Please feel free to contact me on any area of mutual interest!
I made a little post to talk about some of my relevant knowledge in paleontology; can’t wait to contribute more!
kpurens’s Quick takes
Here is an intuitive, brief answer that should provide evidence that there is risk:
In the history of life before humans, there have been 5 documented mass extinctions. Humans—the first generally intelligent agent to evolve on our planet—are now causing the 6th mass extinction.
An intelligent agent that is superior to humans clearly has the potential to be another mass-extinction agent—and if it turns out humans are in conflict with that agent, the risks are real.
So it makes sense to understand that risk—and, today, we don’t, even though development of these agents is barreling forward at an incredible pace.
https://en.wikipedia.org/wiki/Holocene_extinction
https://www.cambridge.org/core/journals/oryx/article/briefly/03807C841A690A77457EECA4028A0FF9
Great question! If AI kills us in the next few years, it seems likely it would be by using a pathway to power that is currently accessible to humans, with the AI just acting as a helper/accelerator for human actions.
The top two existential risks that meet that criterion are an engineered bioweapon and a nuclear exchange.
Currently, there is a great deal of research into how LLMs can assist research in a broad set of fields, with good results: they perform similarly to a human specialist, including on creativity tasks that identify new possibilities. Nothing that human researchers can’t do, but the speed and low cost of these models are already surprising and likely to accelerate many fields.
For bioweapon risk, the scenario I see would be direct development where the AI is an assistant to human-led efforts. The specific bioengineering skills to create an AI-designed bug are scarce, but the equipment isn’t.
How could an AI accelerate nuclear risk? One path I could see is again AI-assisted and human-led, this time by steering social media content and attitudes to increase global tensions. This seems less likely than the bioweapon option.
What others are there?