Formalism
Coherentist theory of epistemic justification
Correspondence theory of truth
Consequentialism
“writing down stylized models of the world and solving for the optimal thing for EAs to do in them”
I think this is one of the most important things we can be doing. Maybe even the most important, since it covers such a wide area and so much government policy is far from optimal.
you just solve for the policy … that maximizes your objective function, whatever that may be.
I don’t think that’s right. I’ve written about what it means for a system to do “the optimal thing” and the answer cannot be that a single policy maximizes your objective function:
Societies need many distinct systems: a transport system, a school system, etc. These systems cannot be justified if they are amoral, so they must serve morality. Each system cannot, however, achieve the best moral outcome on its own: If your transport system doesn’t cure cancer, it probably isn’t doing everything you want; if it does cure cancer, it isn’t just a “transport” system...
Unless by “policy” you mean “the entirety of what government does”, then yes. But given that you’re going to consider one area at a time, and you’re “only including all the levers between which you’re considering”, you could reach a local optimum rather than a truly ideal end state. The way I like to think about it is “How would a system for prisons (for example) be in the best possible future?” This is not necessarily the system that does the greatest good at the margin when constrained to the domain you’re considering (though the two often coincide). Rather than thinking of a system as maximizing your objective function, it’s better to think of systems as satisfying goals that are aligned with your objective function.
Yes: if somebody has important information that an event has XY% probability of occurring, I’d usually pay a lot more to know what X is than what Y is.
As you should, but Greg is still correct in saying that Y should be provided.
Regarding the bits of information, I think he’s wrong, because I’d expect a measure of information to be independent of the numeric base you use. So I think Y provides 10% of the information of X. (If you were using base-4 numbers, you’d throw away 25%, etc.)
But again, there’s no point in throwing away that 10%.
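To make that concrete, here’s a minimal sketch of the base-independence point, assuming the underlying percentage is otherwise unknown (uniform). The function and numbers are illustrative, not from the original thread: a digit’s “information” is measured as the width of the interval of possible values it eliminates.

```python
# Illustrative sketch: measure a digit's "information" as the width of the
# interval of possible values it eliminates (a base-independent measure).

def width_eliminated(base: int, position: int) -> float:
    """Interval width eliminated by the digit at `position`
    (0 = leading digit) of a number written in `base`."""
    before = base ** -position        # width of the interval before this digit
    after = base ** -(position + 1)   # width remaining after seeing it
    return before - after

x = width_eliminated(10, 0)   # decimal digit X: narrows 1.0 -> 0.1
y = width_eliminated(10, 1)   # decimal digit Y: narrows 0.1 -> 0.01
print(y / x)                  # ~0.1 — Y carries 10% of X's information

print(width_eliminated(4, 1) / width_eliminated(4, 0))  # 0.25 in base 4
```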
It happens in philosophy sometimes too: “Saving your wife over 10 strangers is morally required because...” Can’t we just say that we aren’t moral angels? It’s not hypocritical to say the best thing to do is save the 10 strangers, and then not do it (unless you also claim to be morally perfect). Same thing here. You can treat yourself well even if it’s not the best moral thing to do. You can value non-moral things.
My instinctive emotional reaction to this post is that it worries me, because it feels a bit like “purchasing a person”, or purchasing their membership in civil society. I think that a common reaction to this kind of idea would be that it contributes to, or at least continues, the commodification and dehumanization of prison inmates, the reduction of people to their financial worth / bottom line
No one is going to run a prison for free—there has to be some money exchanged (even in public prisons, you must pay the employees). Whether that exchange is moral depends on whether it is facilitated by a system that has good consequences. I think a worthy goal is maximizing the societal contribution of any given set of inmates without restricting their freedom after release. This goal is achieved by the system I proposed (a claim supported by my argument in the post). Under this system, I think prisons will treat their inmates far better than they currently do: allowing inmates to get raped probably doesn’t help maximize societal contribution. “Commodification” and “dehumanization” don’t mean anything unless you can point to their concrete effects. If I’ve missed some avoidable concrete effect, I will concede it as a good criticism.
(indeed, parts of your analysis explicitly ignore non-monetary aspects of people’s interactions with society and the state; as far as I can tell, all of it ignores the benefits to the inmate of different treatment by different prisons).
Not every desirable thing needs to be explicitly stated in the goal of the system: Good consequences can be implied. As I mentioned, inmates will probably be treated much better under my system. Another good implicit consequence of satisfying the stated goal is that prisons will pursue a rehabilitative measure if and only if it is in the interests of society (again, you wouldn’t want to spend a million dollars to prevent the theft of a candy bar).
I account for the nonmonetary aspects of the crimes. But yes, the rest is ignored. If this ignored amount correlates with the measured factors, this is not really an issue.
I’m thinking that you might be able to bet against experienced bettors who think that you’re the victim of confirmation bias (which you might be)
I’d say I’m neutral (though so would anyone who has confirmation bias). I’ve given reasons why these indicators may have lost their predictive value. My main concern is increased savings (and investment of those savings). But hey, we don’t get better at prediction until we actually make predictions.
I’m just looking for market odds. I’d prefer to read the other side that you mention before I size my bets, but I listen to Chairman Powell’s reasoning every time he gives it, watch Bloomberg’s videos, and I’ve listened to Buffett explain why he’s still holding stocks (low interest rates). Let me know if I should be listening to something else. I’m very happy to read the other side if someone is giving their credences.
I’m not quite sure about my bet structure. I’ve got my probability distribution, and I want to bet it against the market’s implied probability distribution in such a way that I’m maximizing long-run growth. Not sure how to do that other than run simulations. If there’s a formula, please let me know.
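For what it’s worth, there is a formula for this objective: maximizing long-run growth is the Kelly criterion, i.e. choosing stakes to maximize the expected log of your wealth. For a single binary bet there’s a closed form; for a whole distribution, simulation works. A minimal sketch with made-up numbers (p_me and p_mkt are assumptions for illustration, not anyone’s actual credences):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up binary bet: I think the event has probability p_me; the market's
# implied probability is p_mkt, so a winning $1 stake returns 1 / p_mkt.
p_me, p_mkt = 0.30, 0.20
wins = rng.random(100_000) < p_me   # outcomes sampled from MY distribution

def mean_log_wealth(f: float) -> float:
    """Average log wealth after staking fraction f of the bankroll."""
    wealth = np.where(wins, 1 - f + f / p_mkt, 1 - f)
    return float(np.log(wealth).mean())

fractions = np.linspace(0.0, 0.5, 501)
best = max(fractions, key=mean_log_wealth)
print(best)  # near the closed-form Kelly fraction (p_me - p_mkt)/(1 - p_mkt) = 0.125
```

The same idea generalizes to a full probability distribution: simulate outcomes from your distribution, compute wealth under the market’s implied prices, and pick the stake allocation with the highest mean log wealth.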
That’s a fair question. Culture is extremely important (e.g. certain cultural norms facilitate corruption and cronyism, which leads to slower annual increases in quality of life indices), but whether cancelling, specifically, is a big problem, I’m not sure.
Government demonstrably changes culture. At a minor level, drink-driving laws and advertising campaigns have changed something that was a cultural norm into a serious crime. At a broader level, you have things like communist governments making religion illegal and creating a culture where everyone snitches on everyone else to the police.
If we can influence government policy, which I think we can, we can influence culture. It’s probably much easier when most people aren’t questioning a norm (drink-driving, again, being a good example), but I think you’re right in this case: since cancelling is already so widely discussed, it’s probably much harder to change the general discourse (and the laws).
Well now I’m definitely glad I wrote “is not a new idea”. I didn’t know so many people had discussed similar proposals. Thank you all for the reading material. It’ll be interesting to hear some downsides to funding retrospectively.
I mentioned the Future of Life Institute which, for those who haven’t checked it out yet, does the “Future of Life” award. (Although, now that I think about it, all awards are retrospective.) They also do a podcast, which I haven’t listened to in a while but, when I was listening, they had some really interesting discussions.
I think you’re conflating moral value with value in general. People value their pets, but this has nothing to do with the pet’s instrumental moral value.
So a relevant question is “Are you allowed to trade off moral value for non-moral value?” To me, morality ranks (probability distributions of) timelines by moral preference. Morally better is morally better, but nothing is required of you. There’s no “demandingness”. I don’t buy into the notions of “morally permissible” or “morally required”: These lines in the sand seem like sociological observations (e.g. whether people are morally repulsed by certain actions in the current time and place) rather than normative truths.
I do think having more focus on moral value is beneficial, not just because it’s moral, but because it endures. If you help a lot of people, that’s something you’ll value until you die. Whereas if I put a bunch of my time into playing chess, maybe I’ll consider it a waste of time at some point in the future. There are other things, like enjoying relationships with your family, that also aren’t anywhere close to the most moral thing you could be doing, but that you’ll probably continue to value.
You’re allowed to value things that aren’t about serving the world.
You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report.
Habryka seems to be talking about people who have evidence and are just not stating it, so we might be talking past one another. I said in my first comment “There’s also a lot of pseudo-superforecasting … without any evidence backing up those credences.” I didn’t say “without stating any evidence backing up those credences.” This is not a guess on my part. I’ve seen comments where people say explicitly that the credence they’re giving is a first impression, not something well thought out. It’s fine for them to have a credence, but why should anyone care what your credence is if it’s just a first impression?
See Greg Lewis’s recent post; I’m not sure if you disagree.
I completely agree with him. Imprecision should be stated and significant figures are a dumb way to do it. But if someone said “I haven’t thought about this at all, but I’m pretty sure it’s true”, is that really all that much worse than providing your uninformed prior and saying you haven’t really thought about it?
I emailed Robin Hanson about my immigration idea in 2018. His post was in 2019. But to be fair, he came up with futarchy well before I started working on policy.
pay annual dividends proportional to these numbers
Paying in proportion (rather than paying the full value) undervalues the impact of good forecasts. Since making forecasts has a cost, proportional payment (with a proportionality constant below 1) would generate inefficient outcomes: Imagine an immigrant’s contribution is $100 and it costs $80 to make the forecast; then paying forecasters anything less than 80% of the value will cause poor outcomes, because the forecast wouldn’t cover its own cost and won’t be made.
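A toy version of that arithmetic, using the numbers above (the threshold is just cost divided by value):

```python
# Toy example: a forecast surfaces $100 of value but costs $80 to produce.
# A forecaster paid fraction k of the value only breaks even when
# k * value >= cost, i.e. k >= 0.8.
value, cost = 100.0, 80.0

def forecaster_profit(k: float) -> float:
    return k * value - cost

for k in (0.50, 0.79, 0.80, 0.90):
    made = forecaster_profit(k) >= 0
    print(f"k={k:.2f}: profit={forecaster_profit(k):+.0f}, forecast made: {made}")
```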
No, I don’t think this is a problem. The prisons are competing against each other, not acting as a single, unified bloc. Why would a prison spend money on making something illegal (through lobbying) when it still has to outbid its opponents? Not only that, prisons would also take on an additional liability: their existing prisoners might commit these new crimes after release.
I would assume that for a private prison that has become good at its business the benefits of more inmates would outweigh the liabilities and that at some point it would (in principle, ignoring the free rider problem for a moment) become easier to increase the profits by increasing the revenue by making more things illegal than trying to reduce the reoffending rate.
Ignoring the free-rider problem (“problem” being from the perspective of the prison), as the prison accumulates more and more current/former inmates, it becomes harder for that cost-benefit calculation to make sense. With no change in the law or the performance of the prison, the prison’s liabilities will grow until the point at which current/former inmates die as fast as new inmates arrive. So for lobbying to make financial sense, it would probably have to occur soon after the prison is started or soon after the system is implemented. But that is also the time when the prison has the least information about its own competence (in terms of rehabilitation and auction pricing).
Also do administrators profit from more crimes in a public system? It of course increases the demand for administrators, but I don’t see how it would increase the salary of a significant number of them.
Not really, but that’s beside the point. The point is that they don’t benefit from rehabilitating their inmates. They don’t benefit from firing abusive guards. They don’t benefit from reading the latest literature on inmate rehabilitation and creating policies that reduce the chance of their inmates re-offending.
Do insurance contracts typically contain clauses for future “products”? I would have assumed that the insurance of the prison would only cover the damage as of the point in time the contract was formed.
I don’t know much about insurance, but I think you can write pretty much whatever contract you like, as long as no laws are broken.
The “Planck principle” seems more applicable to scientists who are strongly invested in a given hypothesis
Yep, that’s why I referred to your 2nd and 3rd traits: A better competing theory is only an inconvenient conclusion if you’re invested in the wrong theory (especially if you yourself created that theory).
I know IQ and these traits are probably correlated (again, since some level of intelligence is a prerequisite for most of the traits). But I’m assuming the reason you wrote the post is that a correlation across a population isn’t relevant when you’re dealing with a smart individual who lacks one of these traits.
Yep, I agree.
Maybe I should have gone into why everyone puts anecdotes at the bottom of the evidence hierarchy. I don’t disagree that they belong there, especially if all else between the study types is equal. And even if the studies are quite different, the hierarchy is a decent rule of thumb. But it becomes a problem when people use it to disregard strong anecdotes and take weak RCTs as truth.
I think he’s saying “optimal future = best possible future”, which necessarily has a non-zero probability.
You mean the first part? (I.e. Why pay for lobbying when you share the “benefits” with your competitors and still have to compete?) Yeah, when a company becomes large enough, the benefits of a rule change can outweigh the cost of lobbying.
But, for this particular system, if a prison is large enough to lobby, then they’re going to have a lot of liabilities from all of their former and current inmates. If they lobby for longer sentences or try to make more behaviours illegal, and one of their former inmates is caught doing one of these new crimes, the prison has to pay.
One way prisons could avoid this is by paying someone else to take on these liabilities. But, in the contract, this person could ensure the prison pays compensation for any lobbying that damages them.
So a lobbying prison (1) benefits from more inmates in the future, (2) has to pay the cost of lobbying, and (3) has to pay more for the additional liabilities of their past and current inmates (not for their future inmates though, because the liabilities will be offset by a lower initial price for those inmate contracts). Points 1 and 2 are the same under the current prison system. Point 3 is new, and it should push in the direction of less lobbying, at least once the system has existed for a while.
Hey Bob, good post. I’ve had the same thought (i.e. the unit of moral analysis is timelines, or probability distributions of timelines) with a different formalism.
The trolley problem gives you a choice between two timelines (T₁ and T₂). Each timeline can be represented as the set containing all statements that are true within that timeline. This representation can neatly state whether something is true within a given timeline or not: “You pull the lever” ∈ T₁, and “You pull the lever” ∉ T₂. Timelines contain statements that are combined as well as statements that are atomized. For example, since “You pull the lever”, “The five live”, and “The one dies” are all elements of T₁, you can string these into a larger statement that is also in T₁: “You pull the lever, and the five live, and the one dies”. Therefore, each timeline contains a very large statement that uniquely identifies it within any finite subset of 𝕋 (the set of all timelines). However, timelines won’t be our unit of analysis because the statements they contain have no subjective empirical uncertainty.
This uncertainty can be incorporated by using a probability distribution of timelines, which we’ll call a forecast (F). Though there is no uncertainty in the trolley problem, we could still represent it as a choice between two forecasts: F₁ guarantees T₁ (the pull-the-lever timeline) and F₂ guarantees T₂ (the no-action timeline). Since each timeline contains a statement that uniquely identifies it, each forecast can, like timelines, be represented as a set of statements. Each statement within a forecast is an empirical prediction. For example, F₁ would contain “The five live with a credence of 1”. So, the trolley problem reveals that you either morally prefer F₁ to F₂ (denoted as F₁ ≻ F₂), prefer F₂ to F₁ (denoted as F₂ ≻ F₁), or you believe that both forecasts are morally equivalent (denoted as F₁ ∼ F₂).
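A minimal sketch of this formalism in code, with hand-rolled types (the statement strings and structure are illustrative, not canonical):

```python
# Sketch: a timeline is the set of statements true in it; a forecast maps
# timelines to credences.
from typing import FrozenSet, Dict

Timeline = FrozenSet[str]
Forecast = Dict[Timeline, float]  # credences should sum to 1

t1: Timeline = frozenset({"You pull the lever", "The five live", "The one dies"})
t2: Timeline = frozenset({"You don't pull the lever", "The five die", "The one lives"})

# The trolley problem: each choice guarantees its timeline.
f1: Forecast = {t1: 1.0}
f2: Forecast = {t2: 1.0}

def credence(f: Forecast, statement: str) -> float:
    """Total probability of the timelines in which `statement` is true."""
    return sum(p for t, p in f.items() if statement in t)

print("You pull the lever" in t1)      # True  (the statement is in T1)
print("You pull the lever" in t2)      # False (the statement is not in T2)
print(credence(f1, "The five live"))   # 1.0 — "with a credence of 1"
```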
I watched those videos you linked. I don’t judge you for feeling that way.
Did you convert anyone to veganism? If people did get converted, maybe there were even more effective ways to do so. Or maybe anger was the most effective way; I don’t know. But if not, your own subjective experience was worse (by feeling contempt), other people felt worse, and fewer animals were helped. Anger might be justified but, assuming there was some better way to convert people, you’d be unintentionally prioritizing emotions ahead of helping the animals.
Another thing to keep in mind: When we train particular physical actions, we get better at repeating that action. Athletes sometimes repeat complex, trained actions before they have any time to consciously decide to act. I assume the same thing happens with our emotions: If we feel a particular way repeatedly, we’re more likely to feel that way in future, maybe even when it’s not warranted.
We can be motivated to do something good for the world in lots of different ways. Helping people by solving problems gives my life meaning and I enjoy doing it. No negative emotions needed.
EA epistemology is weaker than expected.
I’d say nearly everyone’s ability to determine an argument’s strength is very weak. On the Forum, invalid meta-arguments* are pretty common, such as “people make logic mistakes, so you might have too”, rather than actually identifying the weaknesses in an argument. There’s also a lot of pseudo-superforecasting, like “I have 80% confidence in this”, without any evidence backing up those credences. This seems to me like people imitating sound arguments without actually understanding how they work. Effective altruists have centred around some ideas that are correct (longtermism, moral uncertainty, etc.), but outside of those, I’d say we’re just as wrong as anyone else.
*Some meta-arguments are valid, like discussions on logical grounding of particular methodologies, e.g. “Falsification works because of the law of contraposition, which follows from the definition of logical implication”.
Thank you for saying this. It’s frustrating to have people who agree with you bat for the other team. I’d like to see how accurate people’s infeasibility predictions actually are: take a list of policies that passed, a list that failed to pass, mix them together, and see how much better than random chance people can unscramble them. Your “I’m not going to talk about political feasibility in this post” idea is a good one that I’ll use in future.
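Here’s a minimal sketch of how that unscrambling test could be scored, with made-up labels; the policy lists, the guesses, and the use of scipy’s binomial test are all illustrative assumptions:

```python
# Made-up data: 1 = policy passed, 0 = failed; `guesses` is what a predictor
# labelled each policy after the two lists were mixed together.
from scipy.stats import binomtest

actual  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
guesses = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

hits = sum(a == g for a, g in zip(actual, guesses))
result = binomtest(hits, n=len(actual), p=0.5, alternative="greater")
print(hits, result.pvalue)  # how far above random chance (p = 0.5) the guesses are
```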
Poor meta-arguments I’ve noticed on the Forum:
Using a general reference class when you have a better, more specific class available (e.g. taking an IQ test, having the results in your hand, and refusing to look at them because “I probably got 100 points, because that’s the average.”)
Bringing up common knowledge, i.e. things that are true, but everyone in the conversation already knows and applies that information. (E.g. “Logical arguments can be wrong in subtle ways, so just because your argument looks airtight, doesn’t mean it is”. A much better contribution is to actually point out the weaknesses in the specific argument that’s in front of you.)
And, as you say, predictions of infeasibility.