Knowing differs from saying: uttering words by rote is different from knowing a fact, because knowledge of a fact generalizes across contexts. In this project, we show that factual knowledge within GPT also corresponds to a localized computation that can be directly edited. For example, we can make a small change to a small set of the weights of GPT-J to teach it the counterfactual "the Eiffel Tower is located in the city of Rome." Rather than merely regurgitating the new sentence, the model generalizes that counterfactual knowledge and applies it in very different linguistic contexts.
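The "small change to a small set of the weights" the abstract alludes to is, in ROME, a rank-one update to one MLP weight matrix. A minimal numpy sketch of the core linear-algebra idea (the names, dimensions, and the simple least-change formula below are illustrative, not the paper's exact estimation procedure):

```python
import numpy as np

# Sketch of a rank-one edit: given a linear layer W, a key vector k
# (think: the representation of the subject "Eiffel Tower") and a
# desired new value v_new (a representation that would decode to
# "Rome"), update W so that W' @ k == v_new while leaving directions
# orthogonal to k untouched. This is the minimal-norm solution, not
# ROME's covariance-weighted estimator.

rng = np.random.default_rng(0)
d_in, d_out = 16, 16
W = rng.standard_normal((d_out, d_in))

k = rng.standard_normal(d_in)        # key: subject representation (illustrative)
v_new = rng.standard_normal(d_out)   # desired value for this key (illustrative)

# Rank-one update: W' = W + (v_new - W k) k^T / (k^T k)
delta = np.outer(v_new - W @ k, k) / (k @ k)
W_edited = W + delta

# The edited layer now maps k exactly to v_new...
assert np.allclose(W_edited @ k, v_new)

# ...while inputs orthogonal to k behave exactly as before.
k_orth = rng.standard_normal(d_in)
k_orth -= (k_orth @ k) / (k @ k) * k   # project out the k direction
assert np.allclose(W_edited @ k_orth, W @ k_orth)
```

Because the update is rank one, it touches a single matrix and changes the layer's behavior only along the edited key's direction, which is why such an edit can rewrite one association without retraining the model.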
Related to open relation extraction, you might find the ROME paper, "Locating and Editing Factual Associations in GPT," interesting.
Thanks, I will check it out.