I'm really looking forward to the debate on this topic!
Some thoughts:
I like that debate topics aren't overly operationalized. Allowing people to take slightly different interpretations means that they can focus on the variations which seem most important to them. This can come at the expense of understanding each other crisply, and it makes the (quantified) agreement scale harder to interpret.
I'm not sure what the main takeaways from previous debates were, but I cared more about hearing interesting new takes and people's reactions to them than about assessing the overall community opinion.
"By default": One possible ambiguity here is whether this means with >50% probability or with >99.9% probability.
"The world where" → "The worlds where". Also, perhaps this notion of conceiving of possible futures as possible worlds is a bit too heavy on EA/rationalist lingo.
"AI goes well for humans": I broadly like this. I would be interested in people's opinions under both neartermist and longtermist worldviews, and under maxipok or flourishing-futures views.
"Sentient beings": Here I think the discussion should be confined to nonhuman animals, because the other case seemed to be handled in the previous AI welfare debate.
I don't think that the statement of the debate should be about "what we should do" but rather about the worldview directly. It's a bit hard for me to pinpoint exactly why I think so, and I may regret this.
I think that an operationalization which is too close to people's actual decisions may cause more people to defend their existing views, or to take a stance based on what's more salient. I'm not sure why exactly, but framings like "Without extra animal-focused work, even aligned superintelligence would be bad for non-human animals" feel like they would generate more ideologically-oriented responses.
This makes the question more complex with more moving parts.
I think that the framing "AGI which doesn't cause human extinction or disempowerment will value animal welfare" is quite good. Perhaps it should also cover CAIS or multipolar scenarios.
Thanks Edo!
I like that debate topics aren't overly operationalized.
I agree, but there are better and worse ambiguities to spend our time discussing. For example, "What is AGI?" is a rabbit hole, but ultimately not that interesting or action-relevant.
"Sentient beings": Here I think the discussion should be confined to nonhuman animals, because the other case seemed to be handled in the previous AI welfare debate.
I'm definitely leaning this way too.
I think that an operationalization which is too close to people's actual decisions may cause more people to defend their existing views, or to take a stance based on what's more salient.
Yes, my ideal would always be that someone discusses a crux, arrives at an answer, and only then realises that it should influence their cause prioritisation.