Formalism
Coherentist theory of epistemic justification
Correspondence theory of truth
Consequentialism
“writing down stylized models of the world and solving for the optimal thing for EAs to do in them”
I think this is one of the most important things we can be doing. Maybe even the most important, since it covers such a wide area and so much government policy is far from optimal.
you just solve for the policy … that maximizes your objective function, whatever that may be.
I don’t think that’s right. I’ve written about what it means for a system to do “the optimal thing” and the answer cannot be that a single policy maximizes your objective function:
Societies need many distinct systems: a transport system, a school system, etc. These systems cannot be justified if they are amoral, so they must serve morality. Each system cannot, however, achieve the best moral outcome on its own: If your transport system doesn’t cure cancer, it probably isn’t doing everything you want; if it does cure cancer, it isn’t just a “transport” system...
Unless by “policy” you mean “the entirety of what government does”, then yes. But given that you’re going to consider one area at a time, and you’re “only including all the levers between which you’re considering”, you could reach a local optimum rather than a truly ideal end state. The way I like to think about it is “How would a system for prisons (for example) be in the best possible future?” This is not necessarily the system that does the greatest good at the margin when constrained to the domain you’re considering (though it often is). Rather than thinking of a system as maximizing your objective function, it’s better to think of systems as satisfying goals that are aligned with your objective function.
Yes, if somebody has important information that an event has XY% probability of occurring, I’d usually pay a lot more to know what X is than what Y is.
As you should, but Greg is still correct in saying that Y should be provided.
Regarding the bits of information, I think he’s wrong because I’d assume information should be independent of the numeric base you use. So I think Y provides 10% of the information of X. (If you were using base 4 numbers, you’d throw away 25%, etc.)
But again, there’s no point in throwing away that 10%.
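To make that arithmetic concrete, here’s a minimal sketch of the interval-width view (the digits and bases are purely illustrative): each digit of a probability estimate shrinks the interval the true value could lie in by a factor of the base, so the second digit removes 1/base as much width as the first.

```python
# A sketch of the interval-width view: learning a digit of a probability
# estimate shrinks the interval the true value could lie in. Under this
# view, each digit removes 1/base as much width as the digit before it.

def width_removed(base: int, position: int) -> float:
    """Interval width removed by learning the digit at `position`
    (1 = most significant), starting from total ignorance."""
    before = base ** -(position - 1)  # interval width before this digit
    after = base ** -position         # interval width after this digit
    return before - after

for base in (10, 4):
    ratio = width_removed(base, 2) / width_removed(base, 1)
    print(f"base {base}: second digit removes {ratio:.0%} as much width as the first")
# base 10: second digit removes 10% as much width as the first
# base 4: second digit removes 25% as much width as the first
```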
It happens in philosophy sometimes too: “Saving your wife over 10 strangers is morally required because...” Can’t we just say that we aren’t moral angels? It’s not hypocritical to say the best thing to do is save the 10 strangers, and then not do it (unless you also claim to be morally perfect). Same thing here. You can treat yourself well even if it’s not the best moral thing to do. You can value non-moral things.
My instinctive emotional reaction to this post is that it worries me, because it feels a bit like “purchasing a person”, or purchasing their membership in civil society. I think that a common reaction to this kind of idea would be that it contributes to, or at least continues, the commodification and dehumanization of prison inmates, the reduction of people to their financial worth / bottom line
No one is going to run a prison for free: there has to be some money exchanged (even in public prisons, you must pay the employees). Whether that exchange is moral depends on whether it is facilitated by a system that has good consequences. I think a worthy goal is maximizing the societal contribution of any given set of inmates without restricting their freedom after release. This goal is achieved by the system I proposed (a claim supported by my argument in the post). Under this system, I think prisons will treat their inmates far better than they currently do: allowing inmates to get raped probably doesn’t help maximize societal contribution. “Commodification” and “dehumanization” don’t mean anything unless you can point to their concrete effects. If I’ve missed some avoidable concrete effect, I will concede it as a good criticism.
(indeed, parts of your analysis explicitly ignore non-monetary aspects of people’s interactions with society and the state; as far as I can tell, all of it ignores the benefits to the inmate of different treatment by different prisons).
Not every desirable thing needs to be explicitly stated in the goal of the system: good consequences can be implied. As I mentioned, inmates will probably be treated much better under my system. Another good implicit consequence of satisfying the stated goal is that prisons will pursue a rehabilitative measure if and only if it is in the interests of society (again, you wouldn’t want to prevent the theft of a candy bar for a million dollars).
I account for the nonmonetary aspects of the crimes. But yes, the rest is ignored. If this ignored amount correlates with the measured factors, this is not really an issue.
I’m thinking that you might be able to bet against experienced bettors who think that you’re the victim of confirmation bias (which you might be)
I’d say I’m neutral (though so would anyone who has confirmation bias). I’ve given reasons why these indicators may have lost their predictive value. My main concern is increased savings (and investment of those savings). But hey, we don’t get better at prediction until we actually make predictions.
I’m just looking for market odds. I’d prefer to read the other side that you mention before I size my bets, but I’ve listened to Chairman Powell’s reasoning every time he gives it, watched Bloomberg’s videos, and listened to Buffett explain why he’s still holding stocks (low interest rates). Let me know if I should be listening to something else. I’m very happy to read the other side if someone is giving their credences.
I’m not quite sure about my bet structure. I’ve got my probability distribution, and I want to bet it against the market’s implied probability distribution in such a way that I’m maximizing long-run growth. Not sure how to do that other than run simulations. If there’s a formula, please let me know.
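For what it’s worth, here is a minimal sketch for the simplest case, a single binary bet with hypothetical numbers: grid-search the fraction of bankroll that maximizes expected log wealth, which is the long-run-growth objective described above.

```python
# A minimal sketch, assuming a single binary bet with hypothetical numbers:
# grid-search the bankroll fraction that maximizes expected log wealth.
import numpy as np

p_win = 0.60        # your probability of the event (hypothetical)
decimal_odds = 2.0  # market payout per unit staked, stake included (hypothetical)

f = np.linspace(0.0, 0.99, 991)  # candidate fractions of bankroll to stake
# Expected log growth per bet: p*log(1 + f*(odds-1)) + (1-p)*log(1 - f)
growth = p_win * np.log1p(f * (decimal_odds - 1)) + (1 - p_win) * np.log1p(-f)
print(f"growth-optimal fraction = {f[np.argmax(growth)]:.3f}")  # 0.200 here
```

For a binary bet this recovers the Kelly formula, f* = p - (1 - p)/(odds - 1); against a full implied distribution, numerically maximizing the same expected-log objective over all your positions (essentially what your simulations would be doing) is the standard generalization.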
That’s a fair question. Culture is extremely important (e.g. certain cultural norms facilitate corruption and cronyism, which leads to slower annual increases in quality of life indices), but whether cancelling, specifically, is a big problem, I’m not sure.
Government demonstrably changes culture. At a minor level, drink-driving laws and advertising campaigns have changed something that was a cultural norm into a serious crime. At a broader level, you have things like communist governments making religion illegal and creating a culture where everyone snitches on everyone else to the police.
If we can influence government policy, which I think we can, we can influence culture. It’s probably much easier when most people aren’t questioning a norm (drink-driving, again, being a good example), but I think you’re right in this case: since talk of cancelling is already so common, it’s probably much harder to change the general discourse (and the laws).
Well now I’m definitely glad I wrote “is not a new idea”. I didn’t know so many people had discussed similar proposals. Thank you all for the reading material. It’ll be interesting to hear some downsides to funding retrospectively.
I mentioned the Future of Life Institute which, for those who haven’t checked it out yet, does the “Future of Life” award. (Although, now that I think about it, all awards are retrospective.) They also do a podcast, which I haven’t listened to in a while, but when I was listening, they had some really interesting discussions.
I think you’re conflating moral value with value in general. People value their pets, but this has nothing to do with the pet’s instrumental moral value.
So a relevant question is “Are you allowed to trade off moral value for non-moral value?” To me, morality ranks (probability distributions of) timelines by moral preference. Morally better is morally better, but nothing is required of you. There’s no “demandingness”. I don’t buy into the notions of “morally permissible” or “morally required”: These lines in the sand seem like sociological observations (e.g. whether people are morally repulsed by certain actions in the current time and place) rather than normative truths.
I do think having more focus on moral value is beneficial, not just because it’s moral, but because it endures. If you help a lot of people, that’s something you’ll value until you die. Whereas if I put a bunch of my time into playing chess, maybe I’ll consider that a waste of time at some point in the future. There are other things, like enjoying relationships with your family, that also aren’t anywhere close to the most moral thing you could be doing, but that you’ll probably continue to value.
You’re allowed to value things that aren’t about serving the world.
Thank you for saying this. It’s frustrating to have people who agree with you bat for the other team. I’d like to see how accurate people’s infeasibility predictions actually are: take a list of policies that passed, a list that failed to pass, mix them together, and see how much better than chance you can unscramble them. Your “I’m not going to talk about political feasibility in this post” idea is a good one that I’ll use in future.
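One way that unscrambling test could be scored (all numbers below are made up): check how often a randomly chosen passed policy was rated more feasible than a randomly chosen failed one, which is the AUC; 0.5 is chance, 1.0 is perfect unscrambling.

```python
# A hedged sketch of scoring the unscrambling test, with made-up predictions.
from itertools import product

passed_preds = [0.8, 0.6, 0.55, 0.3]  # feasibility ratings of policies that passed
failed_preds = [0.7, 0.4, 0.2, 0.1]   # feasibility ratings of policies that failed

# Fraction of (passed, failed) pairs the forecaster ranks correctly (the AUC):
pairs = list(product(passed_preds, failed_preds))
auc = sum(1.0 if p > f else 0.5 if p == f else 0.0 for p, f in pairs) / len(pairs)
print(f"AUC = {auc:.2f}  (0.5 = chance, 1.0 = perfect unscrambling)")
```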
Poor meta-arguments I’ve noticed on the Forum:
Using a general reference class when you have a better, more specific class available (e.g. taking an IQ test, having the results in your hand, and refusing to look at them because “I probably got 100 points, because that’s the average.”)
Bringing up common knowledge, i.e. things that are true but that everyone in the conversation already knows and applies. (E.g. “Logical arguments can be wrong in subtle ways, so just because your argument looks airtight doesn’t mean it is.” A much better contribution is to point out the weaknesses in the specific argument that’s in front of you.)
And, as you say, predictions of infeasibility.