On the question, “What does morally aligned AI look like?”, some quick thoughts.
It probably depends on what the AI system looks like in general.
If agentic, with coherent, long-term, large-scale goals, the goals should emphasise delivering wellbeing and alleviating suffering across all sentient beings.
If following an ecosystem-of-services model and constitutional, the constitution should emphasise not excessively harming sentient beings and commit to evaluating the effects of its services on the welfare of affected sentient beings.
If following an ecosystem-of-services model and not constitutional, I can think of two possibilities.
First, a centralized distributor of services might have a branch focused on setting global priorities and galvanising work in priority areas such as moral alignment. Depending on whether the centre is public or private, this global priorities task force might be embedded in a company or in a governance institution like the US government or the UN.
Second, AI systems themselves could be trained across the board to take the welfare of all sentient beings into account, similar to current ethical AI efforts.
I’m curious whether anyone is thinking about this more carefully!