Your comment reads strangely to me because your thoughts seem to fall into a completely different groove from mine. The problem statement is perhaps: write a program that does what-I-want, indefinitely. Of course, this could involve a great deal of extrapolation.
The fact that I am even aspiring to write such a program means that I am assuming that what-I-want can be computed. Presumably, at least some portion of the relevant computation, the one that I am currently denoting ‘what-I-want’, takes place in my brain. If I want to perform this computation in an AI, then it would probably help to at least be able to reproduce whatever portion of it takes place in my brain. People who study the mind and brain happen to call themselves psychologists and cognitive scientists. It’s weird to me that you’re arguing about how to classify Joshua Greene’s research; I don’t see why it matters whether we call it philosophy or psychology. I generally find it suspicious when anyone makes a claim of the form: “Only the academic discipline that I hold in high esteem has tools that will work in this domain.” But I won’t squabble over words if you think you’re drawing important boundaries; what do you mean when you write ‘philosophical’? Maybe you’re saying that Greene, despite his efforts to inquire with psychological tools, elides into ‘philosophy’ anyway, so like, what’s the point of pretending it’s ‘moral philosophy’ via psychology? If that’s your objection, that he ‘just ends up doing philosophy anyway’, then what exactly is he eliding into, without using the words ‘philosophy’ or ‘philosophical’?
More generally, why is it that we should discard the approach because it hasn’t made itself obsolete yet? Should the philosophers give up because they haven’t made their approach obsolete yet either? If there’s any reason we should have more confidence in the ability of philosophers than of cognitive scientists to contribute towards a formal specification of what-I-want, that reason is certainly not track record.
What people believe doesn’t tell us much about what actually is good.
I don’t think anyone who has read or who likely will read your comment equates testimony or social consensus with what-is-good.
The challenge of AI safety is the challenge of making AI that actually does what is right, not AI that does whatever it’s told to do by a corrupt government, a racist constituency, and so on.
It’s my impression that AI safety researchers are far more concerned about unaligned AGIs killing everyone than they are about AGIs that are successfully designed by bad actors to do a specific, unimaginative thing without killing themselves and everyone else in the process.
Of course, a new wave of pop-philosophers and internet bloggers has made silly claims that moral philosophy can be completely solved by psychology and neuroscience, but this extreme view is ridiculous on its face.
Bleck, please don’t ever give me a justification to link a Wikipedia article literally named pooh-pooh.
The problem statement is perhaps: write a program that does what-I-want, indefinitely
No, the problem statement is: write a program that does what is right.
It’s weird to me that you’re arguing about how to classify Joshua Greene’s research; I don’t see why it matters whether we call it philosophy or psychology
Then you missed the point of what I said, since I wasn’t talking about what to call it; I was talking about the tools and methods it uses. The question is what people ought to be studying and learning.
I generally find it suspicious when anyone makes a claim of the form: “Only the academic discipline that I hold in high esteem has tools that will work in this domain.”
If you want to solve a philosophical problem, then you’re going to have to do philosophy. Psychology is for solving psychological problems. It’s pretty straightforward.
what do you mean when you write ‘philosophical’?
I mean the kind of work that is done in philosophy departments, and which would be studied by someone who was told “go learn about moral philosophy”.
Maybe you’re saying that Greene, despite his efforts to inquire with psychological tools, elides into ‘philosophy’ anyway
Yes, that’s true by his own admission (he affirms in his reply to Berker that the specific cognitive model he uses is peripheral to the main normative argument) and is apparent if you look at his work.
If that’s your objection, that he ‘just ends up doing philosophy anyway’, then what exactly is he eliding into, without using the words ‘philosophy’ or ‘philosophical’?
He’s eliding into normative arguments about morality, rather than merely describing psychological or cognitive processes.
More generally, why is it that we should discard the approach because it hasn’t made itself obsolete yet?
I don’t know what you are talking about, since I said nothing about obsolescence.
I don’t think anyone who has read or who likely will read your comment equates testimony or social consensus with what-is-good.
Great! Then they’ll acknowledge that studying testimony and social consensus is not studying what is good.
It’s my impression that AI safety researchers are far more concerned about unaligned AGIs killing everyone than they are about AGIs that are successfully designed by bad actors to do a specific, unimaginative thing without killing themselves and everyone else in the process.
The problem isn’t that bad actors need to be restrained by good actors, which is neither a psychological nor a philosophical problem; the problem is that even the very best actors are flawed and will produce flawed machines if they don’t do things correctly.
please don’t ever give me a justification to link a Wikipedia article literally named pooh-pooh.
Would you like me to explicitly explain why the new wave of pop-philosophers and internet bloggers who think that moral philosophy can be completely solved by psychology and neuroscience don’t know what they’re talking about? It’s not a view that’s taken seriously; I didn’t go into detail because I was unsure whether anyone around here took it seriously.