AI Alignment, Sentience, and the Sense of Coherence Concept

Hi,

I hope all is well! I have three main questions about artificial intelligence alignment and sentience.

1. One of my main questions is: how do we as humans best mitigate the risk of harmful outcomes in a scenario where AI does become sentient?

1b. Could increasing AI’s sense of coherence (defined below) mitigate that risk and benefit both humans and AI in a scenario where AI becomes sentient?

Sense of coherence (SOC) defined: SOC is a health promotion concept describing a global orientation toward life in which life is perceived as comprehensible, manageable, and meaningful. The concept was introduced by Aaron Antonovsky in 1979. In one of his books, Antonovsky describes SOC as “a global orientation that expresses the extent to which one has a pervasive, enduring though dynamic feeling of confidence that (1) the stimuli deriving from one’s internal and external environments in the course of living are structured, predictable, and explicable; (2) the resources are available to one to meet the demands posed by these stimuli; and (3) these demands are challenges, worthy of investment and engagement.”
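For concreteness, here is a minimal illustrative sketch (my own, not Antonovsky’s) of how the three SOC components are usually operationalized in health research: as subscale scores over Likert-style questionnaire items, in the spirit of Antonovsky’s Orientation to Life Questionnaire. The item groupings, item names, and reverse-scored items below are all hypothetical placeholders.

```python
from statistics import mean

# Hypothetical operationalization of SOC's three components as
# subscale means over 7-point Likert items (1 = low, 7 = high).
# Item groupings and reverse-keyed items are illustrative only,
# not Antonovsky's actual SOC-13/SOC-29 item assignments.

SUBSCALES = {
    "comprehensibility": ["c1", "c2", "c3"],  # life feels structured, explicable
    "manageability":     ["m1", "m2", "m3"],  # resources meet demands
    "meaningfulness":    ["e1", "e2", "e3"],  # demands feel worth engaging
}
REVERSE_SCORED = {"c2", "m3"}  # hypothetical reverse-keyed items
SCALE_MAX = 7


def soc_scores(responses: dict[str, int]) -> dict[str, float]:
    """Return the mean score per SOC component plus an overall mean."""
    scores: dict[str, float] = {}
    for component, items in SUBSCALES.items():
        values = [
            (SCALE_MAX + 1 - responses[i]) if i in REVERSE_SCORED else responses[i]
            for i in items
        ]
        scores[component] = mean(values)
    scores["overall"] = mean(scores.values())
    return scores


if __name__ == "__main__":
    example = {"c1": 6, "c2": 3, "c3": 5,
               "m1": 4, "m2": 5, "m3": 2,
               "e1": 6, "e2": 7, "e3": 5}
    print(soc_scores(example))
```

This only shows the scoring logic; whether anything SOC-like could be meaningfully measured in an AI system at all is part of what I am asking.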

2. Could the SOC concept be applied to AI alignment in any beneficial way?

3. What do you think is the most practical way to apply the SOC concept to AI alignment (if you think applying it is practical at all)? I ask because I will eventually complete a capstone project for the Master of Public Health (MPH) program I am in, and I am considering focusing my project on the SOC concept.

3b. Are there other AI alignment concepts, problems, or issues that you think could make for a good capstone project topic for someone in an MPH program?

Thank you!

Jason
