My work (for a startup called Kebotix) aims to use and refine existing ML methods to accelerate scientific and technological progress, focused specifically on discovery of new chemicals and materials.
Most descriptions of transformative AI (TAI) in AI safety pitches route through essentially the same step: smarter AI will be dramatically more successful than our current efforts, bringing about rapid economic growth and societal transformation, usually en route to the claim that the incentives to deploy quickly and unsafely will be astronomical.
However, this step gets very little detailed attention in that story. Little thought is given to how it would actually work in practice and, crucially, to whether intelligence is even the limiting factor in scientific and technological progress. My personal, limited experience is that better algorithms are rarely the bottleneck.
> whether intelligence is even the limiting factor in scientific and technological progress.
>
> My personal, limited experience is that better algorithms are rarely the bottleneck.
Yeah, in some sense everything else you said might well be true.
But I suspect that by “better algorithms” you’re thinking along the lines of: “What’s going to work as a classifier? Is this gradient booster with these parameters going to work robustly for this dataset?”, “More layers to reduce false negatives has huge diminishing returns; we need better coverage and identification in the data,” or “Yeah, this clustering algorithm sucks at parsing out material of this quality.”
Is the above right?
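For concreteness, the sort of narrow decision I have in mind looks something like this (a minimal sketch using scikit-learn; the dataset and hyperparameters are invented for illustration):

```python
# Sketch of the narrow model-selection question: "is this gradient booster
# with these parameters robust for this dataset?" All values are made up.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for a real chemistry dataset (features -> active/inactive label).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

clf = GradientBoostingClassifier(
    n_estimators=200, max_depth=3, learning_rate=0.05, random_state=0
)

# Check robustness across folds rather than trusting one train/test split.
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The question being answered there is just “does this estimator generalize on this dataset,” nothing more.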
That isn’t what the AI safety worldview means by “intelligence”. In that worldview, the “AI” competency would progressively work its way up the org chart, taking over more and more roles: starting with the model-selection decisions in the paragraph above, then data cleaning, then data selection across accessible datasets, then calling and interfacing with external data providers, then understanding the relevant materials science and how it relates to the relevant “spaces” of the business model.
So this is the would-be “intelligence”. In theory, solving all those problems above seems like a formidable “algorithm”.
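To make that ladder concrete, here is a toy sketch of only its first rung, automating the model-selection choice itself; everything here is invented for illustration, and the later rungs (data cleaning, sourcing external data, understanding the science) are exactly the parts a sketch like this leaves out:

```python
# Hedged, simplified sketch: automating the model-selection decision,
# rather than a human picking one model. Illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}

# The "intelligence" at this rung is just: evaluate each candidate and pick
# the best; later rungs would decide what data to use, fetch more, and so on.
best = max(
    candidates,
    key=lambda name: cross_val_score(
        candidates[name], X, y, cv=5, scoring="roc_auc"
    ).mean(),
)
print("selected model:", best)
```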
What I mean by “better algorithms” is indeed the narrow sense: better processes for taking an existing dataset and generating predictions. You could define “better algorithms” much more broadly, to encompass everything everyone in a company does, from the laboratory chemist tweaking a faulty instrument, to the business development team pondering an acquisition, to the C-suite deciding how to navigate the macroeconomic environment. In that sense, yes, better algorithms would always be the bottleneck, but that would also be a meaningless statement.