Strong upvoted because this is indeed an approach I’m investigating in my work and personal capacity.
In other software fields and subfields, upskilling can happen fairly rapidly by grinding through knowledge bases with tight feedback loops. It is possible to become as good as a professional software engineer quickly and independently.
If AI Safety wants its talent pool to keep up with the AI Capabilities talent pool (which is probably growing much faster than average), researchers, especially juniors, need an easy way to learn quickly and conveniently. I think existing researchers may underrate this, since they are busy putting out their own fires and finding their own resources.
Ironically, it has not been quick and convenient for me to develop this idea to a level where I’d work on it, so thanks for this.
I’m ignorant of whether AGI Safety will contribute to safe AGI or just to AGI development. I suspect that researchers will shift to capabilities development without much prompting. I worry that AGI Safety is more about AGI enslavement: I’ve not seen much defense or understanding of rights, consciousness, or sentience as assignable to AGI. That betrays a lack of concern over social justice and related workers’ rights issues. The only scenarios that get attention are the inexplicable “kill all humans” scenarios, not the more obvious “the humans really mistreat us” scenarios. That is a big blind spot in AGI Safety.
I was speculating about how the research community could build a graph database of AI Safety information alongside a document database containing research articles, Creative Commons (CC) forum posts and comments, other CC material from the web, fair-use material, and multimedia material. I suspect that the core AI Safety material is not that large, far smaller than the AI Capabilities material. The graph database could provide a more granular representation of data and metadata, and so a richer representation of the core material, but that’s an aside.
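As a minimal sketch of the pairing described above, assuming Python and plain in-memory structures as stand-ins for real database engines (all names here are illustrative, not an actual schema):

```python
# Document store: id -> record, holding articles and vocabulary concepts alike.
documents = {
    "article:1": {"type": "article", "title": "Some AGI Safety paper"},
    "concept:alignment": {"type": "concept", "label": "alignment"},
}

# Graph store: (source_id, relation, target_id) edge triples over those ids.
edges = []

def link(source_id, relation, target_id):
    """Record one directed, labeled edge between two stored nodes."""
    edges.append((source_id, relation, target_id))

def neighbors(node_id, relation=None):
    """Ids reachable from node_id, optionally filtered by edge relation."""
    return [t for s, r, t in edges
            if s == node_id and (relation is None or r == relation)]

link("article:1", "discusses", "concept:alignment")
```

The point of the split is that the document side can hold full texts and rich metadata while the graph side stays small and queryable; a production version would swap these dicts and lists for an actual document store and graph database.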
A quick experiment would be to represent a single AGI Safety article in a document database, add some standard metadata and linking, and then go further.
Here’s how I’d do it:
- Take an article.
- Capture article metadata (author, date, abstract, citations, the typical stuff).
- Establish glossary word choices.
- Link glossary words to outside content.
- Use text processing to create an article summary; hand-tune if necessary.
- Use text processing to create a concise article rewrite; hand-tune if necessary.
- Translate the rewrite into a knowledge-representation language:
  - Begin with Controlled English.
  - Develop an AGI Safety controlled vocabulary. NOTE: as more articles go through the process, the controlled vocabulary can grow; terms will need precise definitions, and synonyms of controlled-vocabulary words will need to be identified.
  - Combine the controlled vocabulary and the glossary. TIP: as the controlled vocabulary grows, hypernym-hyponym relationships can be established.
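The per-article steps above can be sketched as a single ingestion pass. This is an assumption-laden toy, not a real pipeline: `summarize` is a naive first-sentences extractor standing in for proper text processing, and the vocabulary counting is a placeholder for the actual Controlled English rewrite.

```python
import re

def summarize(text, n_sentences=2):
    # Naive extractive summary: keep the first n sentences (hand-tune later).
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:n_sentences])

def process_article(raw_text, metadata, glossary, controlled_vocab):
    """One pass of the steps listed above; every helper here is a placeholder."""
    record = dict(metadata)  # author, date, abstract, citations, ...
    lowered = raw_text.lower()
    # Glossary step: note which glossary terms this article actually uses.
    record["glossary_terms"] = [t for t in glossary if t in lowered]
    record["summary"] = summarize(raw_text)
    # Stand-in for the controlled rewrite: count vocabulary usage, so
    # undefined terms and synonyms can be flagged for a human pass.
    record["vocab_counts"] = {t: lowered.count(t) for t in controlled_vocab}
    return record
```

Hand-tuning fits naturally here: the returned record is just data, so a human can edit the summary or vocabulary hits before anything is committed to the database.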
Once you have articles in a Controlled English vocabulary, most of the heavy lifting is done. It becomes much easier to query, contrast, and combine their contents.
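For instance, once each ingested record carries the controlled-vocabulary terms it uses, a query is just a set lookup. A minimal sketch, assuming each record was given a `vocab` set during ingestion (the field name and sample data are invented for illustration):

```python
def find_articles(records, vocab_term):
    """Titles of articles whose controlled rewrite uses a vocabulary term."""
    return [r["title"] for r in records if vocab_term in r.get("vocab", set())]

# Toy corpus of two already-processed records.
corpus = [
    {"title": "Paper A", "vocab": {"agent", "reward"}},
    {"title": "Paper B", "vocab": {"oversight"}},
]
```

Contrasting two articles is the same idea with set intersection and difference over their `vocab` fields.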
Some online article databases already offer useful tools for browsing work, but they leave it to the researcher to answer questions that require interpreting the meaning of article contents. That could change.
If you could get library scientists involved and some money behind that project, it could generate an educational resource fairly quickly. My vision does go further than educating junior researchers, but that would require much more investment, a well-defined goal, and the participation of experts in the field.
I wonder whether AI Safety is well-developed enough to establish that its purpose is tractable. So far, I have not seen much more than:
- Expect AGI soon.
- AGI are dangerous.
- AGI are untrustworthy.
- Current AI tools pose no real danger (maybe).
- AGI could revolutionize everything.
- We should, or will, make AGI.
The models do provide evidence of existential danger, but not evidence of how to control it. Automation also has downsides: technological unemployment; concentration of money and political power (typically); societal disruption; increased poverty. And as I mentioned, AGI are not understood in the obvious context of exploited labor. That is a worrisome condition that, again, the AGI Safety field is clearly not ready to address. Financially unattractive as it is, that is one vision of the future of AGI Safety research: a group of researchers who can recognize when robots and disembodied AGI have developed sentience and deserve rights.
Sure. I’m curious how you will proceed.