I’m a senior software developer in Canada (earning ~US$70K in a good year) who, being late to the EA party, earns to give. Historically I’ve had a chronic lack of interest in making money; instead I’ve developed an unhealthy interest in foundational software that free markets don’t build because its benefits would consist almost entirely of positive externalities.
I dream of making the world better by improving programming languages and developer tools, but AFAIK no funding is available for this kind of work outside academia. My open-source projects can be seen at loyc.net, core.loyc.net, ungglish.loyc.net and ecsharp.net (among others).
I am confident that if there is no territory relevant to morality, then illusionism is true and (paradoxically) it doesn’t matter what our maps contain because the brains that contain the maps do not correlate with any experiences in base reality. I therefore ignore illusionism and proceed with the assumption that there is something real, that it is linked to brains and correlates positively with mental experience, that it is scientifically discoverable, and that prior to such a discovery we can derive reasonable models of morality grounded in our current body of scientific/empirical information.
I don’t see why “introspecting on our motivation and the nature of pleasure and so on” should be what “naturalism” means, or why a moral value discovered that way would necessarily correspond to the territory. I expect morally-relevant territory to have similarities to other things in physics: to be somehow simple, to have existed long before humans did, and to somehow interact with humans. By the way, I prefer to say “positive valence” rather than “pleasure” because laymen would misunderstand the latter.
I don’t concede because people having incorrect maps is expected and tells me little about the territory.
I’m not sure what these other dispositions are, but I’m thinking on a level below normativity. I say positive valence is good because, at a level of fundamental physics, it is the best candidate I am aware of for what could be (terminally) good. If you propose that “knowledge is terminally good”, for example, I wouldn’t dismiss it entirely, but I don’t see how human-level knowledge would have a physics-level meaning. It does seem like something related to knowledge, namely comprehension, is part of consciousness, so maybe comprehension is terminally good, but if I could only pick one, it seems to me that valence is a better candidate because “obviously” pleasure+bafflement > torture+comprehension. (fwiw I am thinking that the human sense of comprehension differs from genuine comprehension, and both might even differ from physics-level comprehension if it exists. If a philosopher terminally values the second, I’d call that valuation nonrealist.)
🤷‍♂️ Why? When you say “expert”, do you mean “moral realist”? But then, which kind of moral realist? Obviously I’m not in the Foot or Railton camp; in my camp, moral uncertainty follows readily from my axioms, since they tell me there is something morally real, but not what it is.
Edit: It would certainly be interesting if other people started from axioms similar to mine but diverged in their moral opinions. Please let me know if you know of philosopher(s) who start from similar axioms.