GiveIndia says donations from India or the US are tax-deductible.
Milaap says donations come with tax benefits, but I couldn’t find a more specific statement, so I guess that’s just in India?
Anyone know a way to donate with a tax deduction from other jurisdictions? If 0.75x − 2x is accurate, it seems like for some donors that could make the difference.
(Siobhan’s comment elsewhere here suggests that Canadian donors might want to talk to RCForward about this).
You’ve previously spoken about the need to reach “existential security”—in order to believe the future is long and large, we need to believe that existential risk per year will eventually drop very close to zero. What are the best reasons for believing this can happen, and how convincing do they seem to you? Do you think that working on existential risk reduction or longtermist ideas would still be worthwhile for someone who believed existential security was very unlikely?
It seems plausible that reasonable people might disagree on whether student groups on the whole would benefit from being more or less conforming to the EA consensus on things. One person’s “value drift” might be another person’s “conceptual innovation / development”.
On balance I think it’s more likely that an EA group would be co-opted in the way you describe than that an EA group would feel limited from doing something effective because they were worried it was too “off-brand”, but the latter seems worth mentioning as a possibility.
I think this post doesn’t explicitly recognize a (to me) important upside of doing this, one which applies to doing anything that other people aren’t doing: potential information value.
This post exists because people tried something different and were thoughtful about the results, and now potentially many other people in similar situations can benefit from the knowledge of how it went. On the other hand, if you try it and it’s bad, you can write a post about what difficulties you encountered so that other people can anticipate and avoid them better.
By contrast, naming your group Effective Altruism Erasmus wouldn’t have led to any new insights about group naming.
Bluntly, I think a prior of 98% is extremely unreasonable here. Someone who had thoroughly studied the theory and all credible counterarguments against it, had long discussions about it with experts who disagreed, etc., could reasonably come to a belief that strong. An amateur who has undertaken a simplistic study of the basic elements of the situation can’t, IMO, reasonably conclude that all the rest of that thought and debate would have a <2% chance of changing their mind.
Even in an extremely empirically grounded and verifiable theory like physics, for much of the history of the field, the dominant theoretical framework has had significant omissions or blind spots that would occasionally lead to faulty results when applied to areas that were previously unknown. Economic theory is much less reliable. I think you’re correct to highlight that economic data can be unreliable too, and it’s certainly true that many people overestimate the size of Bayesian updates based on shaky data, and should perhaps stick to their priors more. But let’s not kid ourselves about how good our cutting edge of theoretical understanding is in fields like economics and medicine – and let’s not kid ourselves that nonspecialist amateurs can reach even that level of accuracy.
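To make concrete what a 98% prior commits you to, here’s a toy Bayesian-updating sketch (the numbers and the `posterior` helper are mine, purely for illustration):

```python
# Toy sketch of what a 98% prior implies under Bayes' rule.
# All numbers are invented for illustration.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior after seeing evidence with the given likelihood ratio,
    i.e. P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    prior_odds = prior / (1 - prior)               # 0.98 -> odds of 49:1
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Evidence ten times likelier if the theory is wrong barely dents it:
print(posterior(0.98, 1 / 10))   # ~0.83
# Evidence must be ~49x likelier under the alternative just to reach 50%:
print(posterior(0.98, 1 / 49))   # 0.50
```

In odds terms, 98% is 49:1, so it takes counterevidence roughly 49 times likelier under the alternative just to get back to a coin flip.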
I agree with Halstead that this post seems to ignore the upsides of creating more humans. If you, like me, subscribe to a totalist population ethics, then each additional person who enjoys life, lives richly, loves, expresses themselves creatively, etc.—all of these things make for a better world. (That said, I think that improving the lives of existing people is currently a better way to achieve that than creating more—but I wouldn’t say that creating more is wrong).
Moreover, I think this post misses the instrumental value of people, too. To understand the all-inclusive impact of an additional person on the environment, you surely have to also consider the chance that they become a climate researcher or activist, or a politician, or a worker in a related technical field; or even more indirectly, that they contribute to the social and economic environment that supports people who do those things. For sure, that social and economic environment supports climate damage as well, but deciding how these factors weigh up means (it seems to me) deciding whether human social and technological progress is good or bad for climate change, and that seems like a really tricky question, never mind all the other things it’s good or bad for.
“The only place where births per woman are not close to 2 is sub-Saharan Africa. Thus, the only place where family planning could reduce emissions is sub-Saharan Africa, which is currently a tiny fraction of emissions.”
This is not literally true: family planning can reduce emissions in the developed world if the desired births per woman is even lower than the actual births per woman. But I don’t dispute the substance of the argument: it seems relatively difficult to claim that there’s a big unmet need for contraceptives elsewhere, and that should determine what estimates we use for emissions.
I buy two of your examples: in the case of masks, it seems clear now that the experts were wrong before, and in “First doses first”, you present some new evidence that the priors were right.
On nutrition and lockdowns, you haven’t convinced me that the point of view you’re defending isn’t the one that deference would arrive at anyway: it seems to me like the expert consensus is that lockdowns work and most nutritional fads are ignorable.
On minimum wage and alcohol during pregnancy, you’ve presented a conflict between evidence and priors, but I don’t feel like you resolved the conflict: someone who believed the evidence proved the priors wrong won’t find anything in your examples to change their mind. For drinking during pregnancy, I’m not even really convinced there is a conflict: I suspect the heart of the matter is what people mean by “safe”, i.e. which risks or harms are small enough to be ignored.
I think in general there are for sure some cases where priors should be given more weight than they’re currently afforded. But it also seems like there are often cases where intuitions are bad, where “it’s more complicated than that” tends to dominate, where there are always more considerations or open uncertainties than one can adequately navigate on priors alone. I don’t think this post helps me understand how to distinguish between those cases.
I don’t know if this meets all the details, but it seems like it might get there: Singapore restaurant will be the first ever to serve lab-grown chicken (for $23)
Hmm, I was going to mention mission hedging as the flipside of this, but then noticed the first reference I found was written by you :P
For other interested readers, mission hedging is where you do the opposite of this and invest in the thing you’re trying to prevent—invest in tobacco companies as an anti-smoking campaigner, invest in the coal industry as a climate change campaigner, etc. The idea is that if those industries start doing really well for whatever reason, your investment will rise, giving you extra money to fund your countermeasures.
I’m sure if I thought about it for a bit I could figure out when these two mutually contradictory strategies look better or worse than each other. But mostly I don’t take either of them very seriously most of the time anyway :)
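To illustrate the contrast, here’s a toy payoff comparison between divesting and mission hedging (all numbers are invented; this is a sketch, not investment analysis):

```python
# Toy payoff comparison: divest vs mission-hedge (all numbers invented).
# You have $100 to invest and you campaign against some harmful industry.

scenarios = {
    # name: (industry return, broad-market return, scale of the problem)
    "industry booms":    (+0.50, +0.05, "big"),
    "industry declines": (-0.30, +0.05, "small"),
}

for name, (industry_r, market_r, problem) in scenarios.items():
    hedged = 100 * (1 + industry_r)    # mission hedging: hold the industry
    divested = 100 * (1 + market_r)    # divesting: hold the broad market
    print(f"{name} (problem is {problem}): "
          f"hedged=${hedged:.0f}, divested=${divested:.0f}")

# Mission hedging pays out the most precisely in the worlds where the
# industry thrives and your countermeasures matter most; divesting pays
# steadily regardless of how the industry does.
```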
I don’t buy your counterargument exactly. The market is broadly efficient with respect to public information. If you have private information (e.g. that you plan to mount a lobbying campaign in the near future; or private information about your own effectiveness at lobbying) then you have a material advantage, so I think it’s possible to make money this way. (Trading based on private information is sometimes illegal, but sometimes not, depending on what the information is and why you have it, and which jurisdiction you’re in. Trading based on a belief that a particular industry is stronger / weaker than the market perceives it to be is surely fine; that’s basically what active investors do, right?)
(Some people believe the market is efficient even with respect to private information. I don’t understand those people.)
However, I have my own counterargument, which is that the “conflict of interest” claim seems just kind of confused in the first place. If you hear someone criticizing a company, and you know that they have shorted the company, should that make you believe the criticism more or less? Taking the short position as some kind of fixed background information, it clearly skews incentives. But the short position isn’t just a fixed fact of life: it is itself evidence about the critic’s true beliefs. The critic chose to short and criticize this company and not another one. I claim the short position is a sign that they do truly believe the company is bad. (Or at least that it can be made to look bad, but it’s easiest to make a company look bad if it actually is.) In the case where the critic does not have a short position, it’s almost tempting to ask why not, and wonder whether it’s evidence they secretly don’t believe what they’re saying.
All that said, I agree that none of this matters from a PR point of view. The public perception (as I perceive it) is that to short a company is to vandalize it, basically, and probably approximately all short-selling is suspicious / unethical.
Here are a couple of interpretations of value alignment:
A pretty tame interpretation of “value-aligned” is “also wants to do good using reason and evidence”. In this sense, distinguishing between value-aligned and non-aligned hires is basically distinguishing between people who are motivated by the cause and people who are motivated by the salary or the prestige or similar. It seems relatively uncontroversial that you’d want to care about this kind of alignment, and I don’t think it reduces our capacity for dissent: indeed people are only really motivated to tell you what’s wrong with your plan to do good if they care about doing good in the first place. I think your claim is not that “all value-alignment is bad” but rather “when EAs talk about value-alignment, they’re talking about something much more specific and constraining than this tame interpretation”. I’d be interested in whether you agree.
Another (potentially very specific and constraining) interpretation of “value alignment” that I understand people to be talking about when they’re hiring for EA roles is “I can give this person a lot of autonomy and they’ll still produce results that I think are good”. This selects for people who essentially have the same goals and methods as you, right down to the level of decisions about how to do the job. Hiring people like that means that you tax your management capacity comparatively little and don’t need to worry so much about incentive design. To the extent that this is a big focus in EA hiring, it could be because we have a deficit of management capacity and/or because it’s difficult to effectively manage EA work. It certainly seems like EA research is often comparatively exploratory / preliminary, and therefore underspecified, so it’s very difficult to delegate work on it except to people who are already in a similar place to you on the matter.
Though betting money is a useful way to make epistemics concrete, sometimes it introduces considerations that tease the bet apart from the outcomes and probabilities you actually wanted to discuss. Here are some circumstances in which it can be a lot more difficult to get the outcomes you want from a bet:
- When the value of money changes depending on the different outcomes.
- When the likelihood of people being able or willing to pay out on bets changes under the different outcomes.
As an example, I saw someone claim that the US was facing civil war. Someone else thought this was extremely unlikely, and offered to bet on it. You can’t make bets on this! The value of the payout varies wildly depending on the exact scenario (are dollars lifesaving or worthless?), and more to the point, the last thing on anyone’s mind will be internet bets with strangers.
In general, you can’t make bets about major catastrophes (leaving aside the question of whether you’d want to), and even with non-catastrophic geopolitical events, the bet you’re making may not be the one you intended to make, if the value of money depends on the result.
A related idea is that you can’t sell (or buy) insurance against scenarios in which insurance contracts don’t pay out, including most civilizational catastrophes, which can make it harder to use traditional market methods to capture the potential gains from (say) averting nuclear war. (Not impossible, but harder!)
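To put toy numbers on the civil-war example (the probability and utility weights below are invented): suppose you take the “no civil war” side of an even-odds $100 bet:

```python
# Toy illustration (invented numbers) of a bet whose dollar EV and
# utility-weighted EV point in opposite directions.

p_calm = 0.9   # your probability of "no civil war"
stake = 100    # even odds: win $100 if calm, lose $100 otherwise

# In raw dollars the bet looks clearly good:
ev_dollars = p_calm * stake - (1 - p_calm) * stake
print(ev_dollars)   # +80.0

# But if a dollar is worth, say, 10x more to you in the crisis scenario:
u_calm, u_crisis = 1.0, 10.0
ev_utility = p_calm * stake * u_calm - (1 - p_calm) * stake * u_crisis
print(ev_utility)   # -10.0: the "good" bet loses in utility-weighted terms

# ...and that still assumes your counterparty can and will pay out.
```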
I don’t think this is a big concern. When people say “timing the market” they mean acting before the market does. But donating countercyclically means acting after the market does, which is obviously much easier :)
While I think it’s important to understand what Scott means when he says “eugenics”, I think:
a. I’m not certain that clarifying you mean “liberal eugenics” will actually pacify the critics; whether it does depends on why they think eugenics is wrong,
b. if there are really two kinds of thing called “eugenics”, and one of them has a long history of being practiced coercively by horrible, racist people to further their horrible, racist views, while the other one is just fine, then I think Scott is reckless to use the word here. I’ve never heard of “liberal eugenics” before reading this post. I don’t think it’s unreasonable of me to hear “eugenics” and think “oh, you mean that racist, coercive thing”.
I don’t think Scott is racist or a white supremacist, but based on stuff like this, I’m not very surprised when I find people who do.
I’m very motivated to make accurate decisions about when it will be safe for me to see the people I love again. I’m in Hong Kong and they’re in the UK, though I’m sure readers will prefer generalizable stuff. Do you have any recommendations about how I can accurately make this judgement, and who or what I should follow to keep it up to date?
Do you think people who are bad at forecasting or related skills (e.g. calibration) should try to become mediocre at it? (Do you think people who are mediocre should try to become decent but not great? etc.)
As someone with some fuzzy reasons to believe in my own judgement, but little explicit evidence of whether I would be good at forecasting or not, what advice do you have for figuring out if I would be good at it, and how much do you think it’s worth focusing on?
“No one is going to run a prison for free—there has to be some exchange of money (even in public prisons, you must pay the employees). Whether that exchange is moral or not depends on whether it is facilitated by a system that has good consequences.”
In the predominant popular consciousness, this is not sufficient for the exchange to be moral. Buying a slave and treating them well is not moral, even if they end up with a happier life than they otherwise would have had. Personally, I’m consequentialist, so in some sense I agree with you, but even then, “consequences” includes all consequences, including those on societal norms, perceptions, and attitudes, so in practice framing effects and philosophical objections do still have relevance.
Of course there has to be an exchange of money, but it’s still very relevant what, conceptually or practically, that money buys. We have concepts like “criminal law” and “human rights” because we see benefits to not permitting everything to be bought or sold or contracted, so it’s worth considering whether something like this crosses one of those lines.
“Under this system, I think prisons will treat their inmates far better than they currently do: allowing inmates to get raped probably doesn’t help maximize societal contribution.”
I agree that seems likely, but in my mind it’s not the main reason to prevent it, and treating it as an afterthought or a happy coincidence is a serious omission. If your prison system’s foundational goal doesn’t recognize what (IMO) may be the most serious negative consequence of prison as it exists today, then your goal is inadequate. Indirect effects can’t patch that.
As a concrete example, there are people that you might predict are likely to die in prison (e.g. they have a terminal illness with a prognosis shorter than their remaining sentence). Their expected future tax revenue is roughly zero. Preventing their torture is still important, but your system won’t view it as such.
Now that I’m thinking about it, I’m more convinced that this is exactly the kind of thing people are concerned about when they are concerned about commodification and dehumanization. Your system attempts to quantify the good consequences of rehabilitation, but entirely omits the benefits for the person being rehabilitated. You measure them only by what they can do for others – how they can be used. That seems textbook dehumanization to me, and the concrete consequence is that when they can’t be used they are worthless, and need not be protected or cared for.
As my other comment promised, here are a couple of criticisms of your model on its own terms:
“If the best two prisons are equally capable, the profit is zero. I.e. criterion 3 is satisfied.” I don’t see why we should assume the best two prisons are equally capable. Relatedly, if the profit really is zero, I don’t see why any prison would want to participate. Perhaps this is what your remark about zero economic profit is meant to address; I didn’t understand that remark, so perhaps you can elaborate.
Predicting the total present value of someone’s future tax revenue minus welfare costs just seems extremely difficult in general. It will have major components that are just general macroeconomic trends or tax policy projections. While you are in part rewarding people who manage to produce better outcomes, you are also rewarding people who are simply best able to spot already-existing good (or bad) outcomes, especially if you allow these things to be traded on a secondary market.
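To give a sense of how assumption-sensitive this is, here’s a rough sketch (with invented figures) of how much the present value of a fixed stream of net tax revenue swings with the discount rate alone:

```python
# Rough sensitivity sketch (invented numbers): present value of a fixed
# $10,000/year of net tax revenue over 30 years, at various discount rates.

def present_value(annual: float, years: int, rate: float) -> float:
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1))

for rate in (0.02, 0.05, 0.08):
    print(f"discount rate {rate:.0%}: ${present_value(10_000, 30, rate):,.0f}")

# ~$224,000 at 2%, ~$154,000 at 5%, ~$113,000 at 8%: a 2x swing driven
# entirely by a macro parameter that no prison operator controls.
```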
You say things like “whenever the family uses a government service, the government passes the cost on to the company” as if the costs of doing so are always transparent or easy (or wise) to track. An easy example would be the family driving down a public road, which is in some sense “using a public service”, but in a way that isn’t usually priced, and arguably it would be very wasteful to price it. Other examples are things like public education, where it’s understood that the cost is worth it because there’s a benefit, but the benefit isn’t necessarily easy to capture for the company that had to pay for the education. The amount of tax paid on a salary doesn’t reliably reflect the public benefit of someone doing their job, for a variety of reasons: arguably this is some kind of economic or market failure, but it is also undeniably the reality we live in. In essence, many things are funded by taxation rather than privately precisely because it’s difficult or otherwise undesirable to do this kind of valuation for them.
Once you’ve extended your suggestion to prisoners and immigrants, I think it’s worth asking why you can’t securitize anyone’s future “societal contributions”. One obvious drawback is that once this happens on a large enough scale, it starts distorting the incentives of the government, which is after all elected by people who are happy when taxes go down, but no longer raises (as much) additional revenue for itself when taxes go up.
In part, I think the above remark goes to the core of the philosophical legitimacy of taxation: it’s worth considering how the slogan “no taxation without representation” applies to people whose taxes go to a corporation that they have no explicit control over.