But following this analogy, I’m offering a potential equation for electricity, but you’re saying that electricity doesn’t have an equation
I haven’t read your main article (sorry!), so I may not be able to engage deeply here. If we’re trying to model brain functioning, then there’s not really any disagreement about what success looks like. Different neuroscientists will use different methods, some more biological, some more algorithmic, and some more mathematical. Insofar as your work is a form of neuroscience, perhaps from a different paradigm, that’s cool. But I think we disagree more fundamentally in some way.
if you argue that something is bad and we should work to reduce it, but also say there’s no correct definition for it and no wrong definition for it, what are you really saying?
My point is that your objection is not an obstacle to practical implementation of my program, given that, e.g., anti-pornography activism exists.
If you want a more precise specification, you could define suffering as “whatever Brian says is suffering”. See “Brian utilitarianism”.
we could claim that “protons oppress electrons” or “there’s injustice in fundamental physics” — but this is obviously nonsense
It’s not nonsense. :) If I cared about justice as my fundamental goal, I would wonder how far to extend it to simpler cases. I discuss an example with scheduling algorithms here. (Search for “justice” in that interview.)
we can apply this linguistic construct of “suffering” to arbitrary contexts without it losing meaning
We do lose much of the meaning when applying that concept to fundamental physics. The question is whether there’s enough of the concept left over that our moral sympathies are still (ever so slightly) engaged.
That definition doesn’t seem to leave much room for ethical behavior
In my interpretation, altruism is part of “psychological advantage”, e.g., helping others because you want to and because it makes you feel better to do so.
I assume you don’t think it’s 100% arbitrary whether we say something is suffering or not
I do think it’s 100% arbitrary, depending on how you define “arbitrary”. But of course I deeply want people to care about reducing suffering. There’s no contradiction here.
in accordance with new developments in foundational physics, but we’re unlikely to chuck quantum field theory in favor of some idiosyncratic theory of crystal chakras. If we discover the universe’s equation for valence, we’re unlikely to find our definition of suffering at the mercy of intellectual fads.
Quantum field theory is instrumentally useful for any superintelligent agent. Preventing negative valence is not. Even if the knowledge of what valence is remains, caring about it may disappear.
But I think that, unambiguously, cats being lit on fire is an objectively bad thing.
I don’t know what “objectively bad” means.
slide into a highly Darwinian/Malthusian/Molochian context, then I fear that could be the end of value.
I’m glad we roughly agree on this factual prediction, even if we interpret “value” differently.