OP here. After spending some more time with ChatGPT, I admit my appreciation for this field (AI Alignment) has increased a bit.
Lixiang
Bill Burr on Boiling Lobsters (also manliness and AW)
Possibly relevant: the academic papers of Bård Harstad on the game theory of international treaties (esp. in climate change policy).
https://www.sv.uio.no/econ/personer/vit/bardh/dokumenter/participation.pdf
https://www.sv.uio.no/econ/personer/vit/bardh/dokumenter/cmp.pdf
https://www.sv.uio.no/econ/personer/vit/bardh/dokumenter/iea.pdf
https://www.sv.uio.no/econ/personer/vit/bardh/dokumenter/prb.pdf
New Data Science Resource: Hansen Econometrics Books
Interesting, well maybe I’m off base then.
I’d guess that does still hold after adjusting, but I did take it out.
One thing is that I’d guess working-class, rural people are more likely to work in, or at least adjacent to, the meat/fish/food industry, so the vegetarian movement would go against their livelihood, which might make them more likely to oppose it. To be clear, I’m not blaming those people. I think the city-dwelling meat eater who deliberately shields themselves from the unpleasant sight of the process that makes their food is much more troublesome.
Also, working class areas just don’t have vegan food available as much.
I’m sure many farmers do care about their animals.
Math or CS keeps your options open. Can you do a joint or double major like math/bio or CS/bio?
Is this community over-emphasizing AI alignment?
I have opposite intuition actually—I’d guess that people closer to animals have more empathy for their suffering.
I also have that intuition.
Thanks for the ref!
CS=Computer Science. The key thing is that once you learn a bunch of math (and some coding skills), you will be able to pick up other fields way more easily than vice-versa.
CS would have some classes in software engineering and data science and would also give skills.
Interesting post.
I have three points/thoughts in response:
1) Could it be useful to distinguish between “causal uncertainty” and “non-causal uncertainty” about who (and how many) will exist?
Causal uncertainty would be uncertainty resulting from the fact that you as a decision maker have not yet decided what to do, yet where your actions will impact who will exist—a strange concept to wrap my head around. Non-causal uncertainty would be uncertainty (about who will exist) that stems from uncertainty about how other forces will play out that are largely independent of your actions.
Getting to your post, I can see why one might discount based on non-causal uncertainty (see next point for more on this), but discounting based on causal uncertainty seems rather more bizarre and almost makes my head explode (though see this paper).
2) You claim in your first sentence that discounting based on space and time should be treated similarly to each other, and in particular that discounting based on either should be avoided. Thus it appears you claim that absent uncertainty, we should treat the present and future similarly; [if that last part didn’t quite follow, see point 3 below]. If so, one can ask: should we also treat uncertainty about who will eventually come into existence similarly to how we treat uncertainty about who currently exists? For an example of the latter, suppose there is an uncertain number of people trapped in a well: either 2 or 10 with 50-50 odds, and we can take costly actions to save them. I think we would weight the possible 10 people at only 50% (and similarly the possible 2 people), so in that sense I think we would and should discount on uncertainty about who currently exists. If so, and if we answer yes to the question above, we should also discount future people based on non-causal uncertainty.
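The well example amounts to a simple expected-value calculation, which can be sketched as follows (a hypothetical illustration; only the 50-50 odds and the 2-vs-10 head counts come from the example above):

```python
# Probability-weighted valuation of uncertain populations:
# each possible head count is weighted by its probability.

def expected_people(outcomes):
    """outcomes: list of (probability, number_of_people) pairs."""
    return sum(p * n for p, n in outcomes)

# 50-50 odds that either 2 or 10 people are trapped in the well.
well = [(0.5, 2), (0.5, 10)]
print(expected_people(well))  # 6.0
```

The same weighting, applied to possible future people under non-causal uncertainty, is the kind of discounting the point above suggests.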
3) Another possibility is to discount not based on time per se (which you reject) but rather on current existence, so that future people are discounted or ignored until they exist, at which point they get full value. A potential difficulty with this approach is that you could be sure that 1 billion people are going to be born in a barren desert next year, and you would then have no (or only a discounted) reason to bring food to that desert until they were born, at which point you would suddenly have a humanitarian crisis on your hands, which you quite foreseeably failed to prepare for. [Admittedly, people come into existence through a gradual process (e.g. 9 months), so it wouldn’t be quite a split-second change of priorities about whether to bring food, which might attenuate the force of this objection a bit.]
Can non-consequentialists try to posit metrics of the directness of causation? Then the doing-allowing asymmetry (or loss aversion) would be weighted according to how directly the harm was caused. [A lot of details to fill in there.]
This would need to be extended to account for uncertainty, using a mixture of notions of both ex-ante and ex-post harm. Perhaps some ideas from Causal Decision Theory and modal logic could be used here. Certainly, such an account won’t be well-fleshed-out anytime soon, but it may at least be a vaguely coherent framework.
FWIW, I’ll note that the doing-allowing distinction is only one of a loose family of related distinctions important to deontology/non-consequentialism. For example, there is also the doctrine of double effect, about intending vs. foreseeing. Philippa Foot also had a nice example showing that one can harm people by abstaining from doing something: failing to show up for a theatrical performance in which you are an actor. Further, a key commitment of non-consequentialist thought is belief in the importance of intuition (in particular cases but also at higher levels of abstraction), so these principles and distinctions don’t need to be fully fleshed out, though the more fleshed out the better.
Finally, I suspect the following sentence is a typo in the post:
“They argue that non-consequentialists must either join consequentialists in improving the long-run future, alter some core aspects of their understanding of morality… or they must die.”
Shouldn’t it be “altering” not “alter”? As is, it suggests a trilemma, rather than a dilemma.
I’m definitely not knowledgeable about AI, but my two cents is that there is a thing called the frame problem that makes AGI very hard to attain or even think about. I’m not even gonna try to exposit what it is, and that article is a bit dated, but I’d guess the problem still remains beyond anyone’s comprehension.
Tangential:
I think the whole issue of “one person’s modus ponens is another person’s modus tollens” is not very well understood by most people, including most philosophers and myself. In fact, I don’t think anyone knows quite how to think about these things. I guess it gets into Quinean holism and the intractability problems that accompany it.
But, presumably, it has something to do with Bayesian networks of beliefs and regularization in machine learning (~valuing simplicity) as well as Bayesian philosophy of science more generally. [Part IV of Itzhak Gilboa’s decision theory book gets into some of this stuff, which seemed pretty interesting.]
I don’t understand why much more attention is not paid to these things in philosophy, where formal epistemology seems to still be considered a pretty niche field.
I hope people think more about these issues.