I think it’s helpful to just put aside the “EA Budget” thread for a moment; I think what Halstead was trying to get at is the idea/argument “If you are trying to maximize the amount of good you do (e.g., from a utilitarian perspective), that will (almost) never involve (substantive) donations to your local opera house, pet shelter, …” I think this is a pretty defensible claim. The thing is, nobody is a perfect utilitarian; trying to actually maximize good is very demanding, so a lot of people do it within limits. This might relate to the concept of leisure, stress relief, personal enjoyment, etc. which is a complicated subject: perhaps someone could make an argument that having a few local/ineffective donations like you describe is optimal in the long term because it makes you happier with your lifestyle and thus more likely to continue focusing on EA causes… etc. But “the EA (utilitarian) choice” would very rarely actually be to donate to the local opera house, etc.
Yes, I agree that when we are trying to maximise the amount of good we do with limited resources, these local charities are not likely to be a good target for donations. However, as you mention, EA is different from utilitarianism because we don’t believe everyone should use all or most of their resources to do as much good as possible.
So when we spend money on ourselves or others for reasons other than trying to maximise the good this might also include donations to local causes. It seems inconsistent to say that we can spend money on whatever we want for ourselves, but if we choose to spend money on others, it can’t be for those in our community.
My point was therefore about communication: it’s not correct to say that EAs should never donate to local causes, when what we mean is that donating to local causes is unlikely to bring about the most good (but people might have other reasons for doing so anyway).
Yes, I think this point is both important and underrated—we need to stop saying “don’t donate to your local theatre” or “don’t be a doctor”, because those are very alienating statements that turn out to be bad advice a lot of the time.
(I don’t know of a practical scenario where either of those turned out to be bad advice, and multiple times when it saved someone from choosing a career that would have been much worse in terms of impact, so I don’t think I understand why you think it’s bad advice. At least for people I know it seems to have been really good advice, at least the doctor part.)
I think there are a lot of people who are already doctors who can use that to do a lot of good, and there are some naive EAs who suggest they should drop their 25 years of medical experience to become a technical AI safety researcher. No! Maybe they should become a public health policy expert; maybe they should keep being a great doctor.
I also think a lot of people value their local community theatre and want it to continue—they enjoy it, it’s a hobby. If they and others donate, the theatre continues to exist; otherwise it doesn’t. I wouldn’t suggest they should become free riders.
I do think anyone who has any decent shot at being an AI Safety researcher should probably stop being a doctor and try doing that instead. I do think that many people don’t fit that category, though some of the most prominent doctors in the community who quit their jobs (Ryan Carey and Gregory Lewis) have fit that bill, and I am exceptionally glad they made that decision.
I don’t currently know of a reliable way to actually do a lot of good as a doctor. As such, I don’t know why from an impact perspective I should suggest that people continue being a doctor. Of course there are outliers, but as career advice goes, it strikes me as one of the most reliably bad decisions I’ve seen people make. It also seems from a personal perspective a pretty reliably bad choice, with depression and suicide rates being far above population average.
The ‘any decent shot’ is doing a lot of work in that first sentence, given how hard the field is to get into. And even then you only say ‘probably stop’.
There’s a motte/bailey thing going on here, where the motte is something like ‘AI safety researchers probably do a lot more good than doctors’ and the bailey is ‘all doctors who come into contact with EA should be told to stop what they are doing and switch to becoming (e.g.) AI safety researchers, because that’s how bad being a doctor is’.
I don’t think we are making the world a better place by doing the second; where possible we should stick to ‘probably’ and communicate the first, nuance and all, as you did here—but as Khorton notes, people often don’t in person.
The “probably” there is just for the case of becoming an AI safety researcher. The argument for why being a doctor is rarely the right choice of course does not route only through AI Alignment being important. It routes through a large number of alternative careers that seem more promising, many of which are analyzed and listed on 80k’s website. That is what my second paragraph was trying to say.
I think if you take into account all of those alternatives, the “probably” turns into a “very likely” and conditioning on “any decent shot” no longer seems necessary to me.
“I don’t currently know of a reliable way to actually do a lot of good as a doctor.”
I do know of such a way, but that might be because we have different things in mind when we say ‘reliably do a lot of good’.
Some specialisations for doctors are very high earning. If someone was on the path to being a doctor and could still specialise in one of them, that is what I would suggest as an earning-to-give strategy. If they might also do a great job as a quant trader, I would also suggest checking that out. But I doubt most doctors make good quant traders, so it might still be one of the best opportunities for them.
I am less familiar with this and therefore not confident, but there are also some specialisations that Doctors Without Borders has a hard time filling (while for others, there is an oversupply). I think this would be worth looking into, along with other paths to delivering medical expertise in developing countries.
“Some specialisations for doctors are very high earning. If someone was on the path to being a doctor and could still specialise in one of them, that is what I would suggest as an earning-to-give strategy.”
Yeah, I do think this is plausible. When I last did a Fermi estimate on this, I tended to overestimate the lifetime earnings of doctors because I didn’t properly account for the many years of additional education required to become one, which often cost a ton of money and, of course, displace other potential career paths during that same time. So my current guess is that while being a doctor is definitely high-paying, it’s not actually that great for earning to give.
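The shape of that Fermi estimate can be sketched in a few lines. All figures below are made-up placeholder assumptions, not sourced data; the point is only that headline salaries overstate a doctor’s earning-to-give advantage once training costs and years of foregone income are subtracted.

```python
def net_lifetime_earnings(annual_salary, working_years,
                          training_years=0, training_cost_per_year=0):
    """Lifetime earnings net of training costs, with training years
    treated as years of zero income."""
    return annual_salary * working_years - training_cost_per_year * training_years

CAREER_HORIZON = 40  # assumed years from start of training to retirement

# Hypothetical doctor: high salary, but ~10 years of costly training first.
doctor = net_lifetime_earnings(annual_salary=250_000,
                               working_years=CAREER_HORIZON - 10,
                               training_years=10,
                               training_cost_per_year=40_000)

# Hypothetical alternative career: lower salary, but earning from year one.
alternative = net_lifetime_earnings(annual_salary=150_000,
                                    working_years=CAREER_HORIZON)

print(f"doctor:      ${doctor:,}")       # $7,100,000
print(f"alternative: ${alternative:,}")  # $6,000,000
```

Under these (entirely illustrative) numbers, a 67% headline salary advantage shrinks to under 20% of net lifetime earnings once the training decade is priced in.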
The key difference does seem to be whether you are already past the point of finishing your education. After you’ve finished med school, or maybe even have your own practice, it’s pretty likely that being a doctor will be the best way for you to earn lots of money; but if you are trying to decide whether to become a doctor and haven’t started med school, I think it’s rarely the right choice from an impact perspective.
Agree with all of the above!
I want to point out that there’s something unfair that you did here. You pointed out that AI safety is more important, and that there were two doctors who left medical practice. Ryan does AI safety now, but Greg does biosecurity, and frankly, the fact that he has an MD is fairly important to his ability to interact with policymakers in the UK. So one of your examples is at least very weak, if not evidence for the opposite of what you claimed.
“A reliable way to actually do a lot of good as a doctor” doesn’t just mean not practicing; many doctors are in research, or policy, making a far greater difference—and their background in clinical medicine can be anywhere from a useful credential to being critical to their work.
Huh, I didn’t have a sense that Greg’s medical degree helped much with his work, but could totally be convinced otherwise.
Thinking more about it, I think I also just fully retract Greg as an example for other reasons. I think for many other people’s epistemic states the above goes through, but I wouldn’t personally think that he necessarily made the right call.
I’m particularly annoyed by this because I’ve seen this play out in person—I’ve invited respected professionals to EA events who were seriously disrespected by people with dubious and overconfident ideas.
“At least for people I know it seems to have been really good advice, at least the doctor part.”
It seems like this is almost certain to be true given post-hoc selection bias, regardless of whether or not it is good—it doesn’t differentiate between worlds where it is alienating or bad advice, and some people leave the community, and ones where it is good.
I’m generally leery of putting words in other people’s mouths, but perhaps people are using “bad advice” to mean different things, or at least have different central examples in mind.
There are at least three possible interpretations of what “bad advice” can mean here:
A. Advice that, if some fraction of people are compelled to follow it across the board, can predictably lead to worse outcomes than if the advice isn’t followed.
B. Advice that, if followed by people likely to follow such advice, can predictably lead to worse outcomes than if the advice isn’t followed.
C. Words that can be in some sense considered “advice” that have negative outcomes/emotional affect upon hearing these words, regardless of whether such advice is actually followed.
Consider the following pieces of “advice”:
1. You should self-treat covid-19 with homeopathy.
2. You should eat raw lead nails.
#1 will be considered “bad advice” under all three interpretations (it will be bad if everybody treats covid-19 with homeopathy (A), it will be bad if people especially susceptible to homeopathic messaging treat covid-19 with homeopathy (B), and I will also negatively judge someone for recommending self-treatment with homeopathy (C)).
#2 is “bad advice” under at most two of the interpretations (forcibly eating raw lead nails is bad (A), but realistically I don’t expect anybody to listen to such “recommendations” (B), and the advice is so obviously absurd that context will determine whether I’d be upset about the suggestion (C)).
In context here, if Habryka (and, for that matter, I) don’t know any EA ex-doctors who regret no longer being a doctor (whereas he has positive examples of EA ex-doctors who do not regret this), this is strong evidence that telling people not to be doctors is good advice under interpretation B*, and moderate-to-weak evidence that it’s good advice under interpretation A.
(I was mostly reading “bad advice” in the context of B and maybe A when I first read these comments).
However, if David/Khorton interpret “bad advice” to mean something closer to C, then it makes more sense why not knowing a single person harmed by following such advice is not a lot of evidence for whether the advice is actually good or bad.
* I suppose you can posit a selection-effected world where there’s a large “dark matter” of former EAs/former doctors who quit the medical profession, regretted that choice, and then quit EA in disgust. This claim is not insane to me, but will not be where I place the balance of my probabilities.
Thanks, this is very clear! Yes, I was thinking of outcome C—I’ve seen people decide not to get involved with the EA community because strangers repeatedly gave them advice they found offensive.
I think the world would be better if we didn’t regularly offend respected professionals, even if it’s been very helpful for 5 or 10 people—and I imagine those 5 or 10 people may have transitioned from medicine anyway when given the arguments, without their being presented as quite such a definitive answer.
Yeah, I do think the selection effects here are substantial.
I do think I can identify multiple other very similarly popular pieces of advice that did turn out to be bad reasonably frequently, and caused people to regret their choices, which is evidence the selection effects aren’t completely overdetermining the outcome.
Concretely, I think I know of a good number of people who regret taking the GWWC pledge, a good number of people who regret trying to get an ML PhD, and a good number of people who regret becoming active in policy. I do think those pieces of advice are a bit more controversial than the “don’t become a doctor” advice within the EA Community, so the selection effects are less strong, but I do think the selection effects are not strong enough to make reasoning from experience impossible here.
To be clear, I wasn’t aiming to criticize “don’t become a doctor”, but rather “don’t continue to be a doctor.”
“I don’t know of a practical scenario where either of those turned out to be bad advice”
(I don’t mean to pick too hard on this point, which is generally pointing at something true, but a counterexample sprang immediately to mind when I read it.)
I know one medical student who wound up perceiving EA somewhat negatively after reading 80K’s early writing on the perils of being a doctor. This person is still fairly value-aligned and makes EA donations, but I saw them engage with the community much less than I’d have otherwise expected, because they thought they would face judgment for their career path and choices. (Even without being an EA specialist, this person is smart and capable and could have made substantial community contributions.)
This person would almost certainly have had greater impact in EA-aligned operations or research, but they’d also dreamed of becoming a doctor since early childhood, and their relationship with their family was somewhat contingent on their following through on those dreams. (A combination of “Mom and Dad would be heartbroken if I chose a different career with no status in their community” and “I want to have a high-paying job so I can provide financial support to my family later”).
Hence the strong reaction to the idea of a movement where being a doctor was a slightly odd, suspicious thing to do (at least, that was their takeaway from the 80K piece, and I found the impression hard to shake).
This kind of story may be unusual, but I consider it to be one practical example of a time when the advice “don’t become a doctor” led to a bad result—though it’s arguable whether this makes it “bad advice” even in that one case.
Yeah, I feel like this should just be screened off by whether it is indeed good or bad career advice.
Like, if something is good career advice, I think we should tell people even if they don’t like hearing it, and if something is bad career advice, we should tell people that even if they really want it to be true. That’s a general stance I seem to disagree with lots of EAs on, but at least for me, it isn’t very cruxy whether anyone disliked what that advice sounded like.
I don’t disagree with elements of this stance—this kind of career advice is probably strongly positive-EV to share in some form with the average medical student.
But I think there’s a strong argument for at least trying to frame advice carefully if you have a good idea of how someone will react to different frames. And messages like “tell people X even if they don’t like hearing it” can obscure the importance of framing. I think that what advice sounds like to people can often be decisive in how they react, even if the most important thing is actually giving the good advice.
Yep, I totally agree.
Marginal effort on presenting the information better is totally valuable, and there is of course some level of bad presentation at which improving your presentation should be a higher priority than improving your accuracy, but my guess is that in this case we are far from the relevant thresholds, and I would generally want us to value marginal accuracy quite a bit more highly than marginal palatability.
Strongly endorsed.
“we need to stop saying ‘don’t donate to your local theatre’ … because actually [that is] bad advice a lot of the time”
I’m surprised you would say this—I would expect that not donating to a local theatre would have basically no negative effects for most people. I can see an argument for phrasing it more delicately—e.g. “I wouldn’t donate to a local theatre because I don’t think it will really help make the world a better place”—but I would be very surprised if it was actually bad advice. Most people who stop donating to a charity suffer essentially no negative consequences from doing so.
I don’t think donating to a theatre is done in order to “make the world a better place”; I think it’s done to continue to have access to a community resource you enjoy, and to build your reputation in your community. It’s actually a really bad idea for EAs to become known as a community of free riders.
And ultimately, it should be that person’s choice—if you don’t know much about their life, why would you tell them what part of their budget they should replace in order to increase donations to top causes? It’s better to donate 10% to effective charities and continue donating to local community organisations than to donate 10% to effective charities and spend the rest on fast food, in my view, but ultimately it’s none of my business!
It has a negative effect on the local theater, but hopefully a positive effect on the counterfactual recipients of that money.