This is so cool. I have spent a lot of time analyzing AI, but mainly from the perspective of visuals, tone of voice, and symbolism rather than objective content, so maybe my analysis of abstract advertisement can be an inspiration to you. I see the greatest issue with AI persuasion in the kind that the audience cannot rationalize, because it works on human intuition (prima facie, positive objectives are narrated, by advertisers and humans alike).
I have only seen a few GPT-3 texts, but it seems to me that the model optimizes for attention by making discussants 'submit': it normalizes aggression, with discussants expressing unwillingness to interact yet interacting anyway (I commented on this a bit here). This is a suboptimal system; norms of free cooperation on important problem-solving may be better.
It may not be an issue with GPT-3 itself but with the prominent/majority internet content that is co-developed by humans and AI (humans intuitively learn what to produce by watching metrics that relate to engagement). If GPT-3 reflected other content, it might behave differently (has anyone already tried fine-tuning it on the EA Forum?).
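For concreteness, here is a minimal sketch of what fine-tuning on such a corpus could look like. GPT-3's weights are not public, so this uses GPT-2 via the Hugging Face transformers library as a stand-in, and ea_forum.txt is a hypothetical local dump of forum posts:

```python
# Minimal sketch: fine-tune a GPT-2-style model on a text corpus.
# Assumes `transformers` is installed and that ea_forum.txt
# (a hypothetical file) contains the forum text to adapt the model to.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Chunk the raw text file into fixed-length training examples.
dataset = TextDataset(tokenizer=tokenizer, file_path="ea_forum.txt",
                      block_size=128)
# mlm=False gives the standard causal language-modeling objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-ea-forum",
                         num_train_epochs=1,
                         per_device_train_batch_size=4)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=dataset).train()
```

Whether the fine-tuned model actually picks up more cooperative discussion norms would still have to be checked by sampling its outputs and comparing them with the base model's.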
How could one use software to train language models in self-reflection (analyzing and qualifying the rationale behind the audience's actions or the viewers' feelings, e.g. whether they are acting on fear or on a somewhat reasoned conclusion about the product's usefulness to their true objectives) and in fostering human objectives that are also virtuous and inclusive (e.g. be healthy, address one's own and others' problems, enjoy one's environment, be informed about the reasons for one's actions, empathize with animals, seek out debiasing information, etc.)? Is this already developed, with just the questions/prompts missing or limited?
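As a rough illustration of the 'analyze the rationale behind a feeling or action' part, here is a hedged sketch using an off-the-shelf zero-shot classifier; the two labels and the example statement are my own illustrative assumptions, not a validated taxonomy:

```python
# Sketch: label whether a stated purchase rationale reads as fear-driven
# or as a reasoned conclusion about usefulness, via zero-shot classification.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

statement = ("I bought the alarm system because the ad said "
             "break-ins are happening everywhere.")
labels = ["acting on fear", "reasoned conclusion about usefulness"]

result = classifier(statement, candidate_labels=labels)
# `labels` and `scores` come back sorted from most to least likely.
print(result["labels"][0], round(result["scores"][0], 3))
```

A classifier like this only gives a crude signal; the harder part is eliciting honest rationales in the first place, which comes back to the question above of whether only the prompts are missing.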
I think that software which can sell genuinely good products by motivating reasoning is somewhat unbeatable (consumers will reduce their demand for products less aligned with their objectives), so any company or economy with a competitive advantage in this capacity (alongside the ability to diversify and rapidly adjust production) benefits.
There is a risk of explaining AI in a way that omits the rationale for people's emotions or actions: such AI can seem prima facie safe but actually lead to a loss of human agency, without humans/regulators being able to detect it or having the infrastructure to react to it.