Regarding concerns about the name “effective altruist”: I think it’s good that the term “EA” is coming into common use, at least in this community. Someday it may be so ingrained that thinking about the etymology will require conscious effort. As Steven Pinker points out, this is a common process in language. “Breakfast” means to “break fast”, but most people don’t think about this consciously.
Lila
Medical science liaison, as a high-paid career: it’s like a higher-end drug rep and typically requires a PhD. It’s a good fallback for someone getting a PhD in a biomedical-related field. Glassdoor estimates the average salary at $140K. (This is particularly striking given that other salaries seem to be lowballed on Glassdoor; computer engineer is $71K, for example.)
It seems to be fairly easy to move back and forth between being an MSL and doing other jobs in the biotech/pharma industries.
Normally I think it’s better to donate later rather than now. (Partly because of potential returns to investing in stocks, but mostly because of the value of gaining new information.) However, this year I’ll be donating a significant chunk to Effective Altruism Outreach. I was convinced by their arguments that we’re on the cusp of a great opportunity to push EA, so there really is value to donating now.
In most cases I don’t see a compelling reason to fund an individual rather than an organization. I’m also worried about what kind of message this sends. To be crass, it makes EA look like a circlejerk.
Someone pointed out to me that long-term considerations dominate population ethics. So even if one places intrinsic value on population changes, the calculation might be dominated by how these changes affect the survival of humanity. Population increases may destabilize humanity due to competition for scarce resources. On the other hand, they may decrease the probability that every last human will die.
I’m finding AI claims more plausible over time. This is partly because I’ve learned more, but partly because I think SIAI/MIRI has gotten better at communicating. In the past I thought the language was silly and anthropomorphic, with terms like “friendly” and “unfriendly”, so the scenarios didn’t strike me as plausible. Even if AI writers were trying to convey reasonable ideas, the language was off-putting to a superficial reader. I hope MIRI will continue to use clear, objective language and cut back on jargon and jokes.
My concern is that feedback and trust mechanisms aren’t good. I think even the best of us would struggle to produce quality work without a boss, coworkers, and deadlines. If organizations are actually just using Gratipay to pay their employees while avoiding taxes, there are some legal concerns. People don’t take kindly to allegations of tax-dodging, and if something like this were to get out, it would probably hurt donations.
That’s what I was saying. The potential long-term population outweighs the effects of short-term population changes.
I’ll try not to descend into object-level here, but I’ll continue using your original example. I was anti-choice before I became an EA. EA has actually pushed me to believe, on a rational level, that abortion is of utilitarian value in many cases (though the population ethics are a bit unclear). However, on a visceral level I still find abortion deeply distasteful and awful. So I still “oppose” abortion, whatever that means. I think as an EA, it’s okay to have believies (as opposed to beliefs) that are comforting to you, as long as you don’t act on them. Many political issues fall into this category. Since I don’t vote on the issue of abortion and will probably not become accidentally pregnant, my believy isn’t doing any harm.
Believies:
In the Basu-Mitra result, when you use the term “Pareto”, do you mean strong or weak?
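(For reference, here are the standard definitions as I understand them, for utility streams $X = (x_1, x_2, \ldots)$ and $Y = (y_1, y_2, \ldots)$:

$$\text{Weak Pareto: } (\forall t)\; x_t > y_t \;\Rightarrow\; X \succ Y$$
$$\text{Strong Pareto: } (\forall t)\; x_t \ge y_t \text{ and } (\exists s)\; x_s > y_s \;\Rightarrow\; X \succ Y$$

Different papers in this literature assume different versions, which is why I’m asking.)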
I found the section on possibility results confusing.
In this sentence you appear to use X and Y to refer to properties: “Basically, we can show that if < were a “reasonable” preference relation that had property X then it must also have property Y. (of course, we cannot show that < is reasonable.)”
But here you appear to use X and Y to refer to utility vectors: “For example, say that X<Y if both X and Y are finite and the total utility of X is less than that of Y.”
Did you duplicate variables, or am I misreading this?
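To make the ambiguity concrete, here is how I would write the two readings (my notation, not necessarily yours):

$$\text{Reading 1 (properties): if } < \text{ satisfies property } P \text{, then } < \text{ satisfies property } Q$$
$$\text{Reading 2 (vectors): } X < Y \iff X, Y \text{ are finite and } \textstyle\sum_i x_i < \sum_i y_i$$

If both readings are intended, it might help to reserve capital letters for one role or the other.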
General note: if you numbered your headings and subheadings, e.g. (1, 1.1, 1.1.1), it would make it easier to refer back to them in comments.
Hm, what would be learned from EA origin stories?
I think the relevance of this post is that it tentatively endorses some type of time-discounting (and also space-discounting?) in utilitarianism. This could be relevant to considerations of the far future, which many EAs think is very important. Though presumably we could make the asymptotic part of the function as far away as we like, so we shouldn’t run into any asymptotic issues?
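For example, here’s a toy discount function of the sort I have in mind (my own construction, not from the post): weight generation $t$ by

$$d(t) = \begin{cases} 1 & t \le T \\ \delta^{\,t-T} & t > T \end{cases}, \qquad 0 < \delta < 1.$$

Then the total $\sum_t d(t)\,u_t$ converges for bounded $u_t$, but by choosing $T$ as large as we like, arbitrarily many early generations are weighted equally, so the discounting only bites in the far tail.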
One thing I liked about this post is that it was written in English, instead of math symbols. I find it extremely hard to read a series of equations without someone explaining them verbally. Overall I thought the clarity was fairly good.
My impression was that the left is pro-immigrant (at least more so than the right) and the far right is very xenophobic.
Discomfort, maybe
I found the OP useful. If it were on LW, I probably wouldn’t have seen it. I don’t go on LW because there’s a lot of stuff I’m not interested in compared to what I am interested in (ethics). Is there a way to change privacy settings so that certain posts are only visible to people who sign in or something?
I’m trying to use Facebook less, and I don’t check the utilitarianism group, since it seems to have fallen into disuse.
I have to disagree that consequentialism isn’t required for EA. Certain EA views (like the shallow pond scenario) could be developed through non-consequentialist theories. But the E part of EA is about quantification and helping as many beings as possible. If that’s not consequentialism, I don’t know what is.
Maybe some non-utilitarian consequentialist theories are being neglected. But the OP could, I think, be just as easily applied to any consequentialism.
During freshman year of college (’09-’10), I decided on a whim to donate some money to charity. After some reflection on how much to donate, I decided that the morally correct option was to live on as little money as possible and donate the remainder. (Extremism is very attractive to college freshmen.) I lived as ascetically as I could and gave away the possessions and money I thought I didn’t need. I looked like a homeless person, with my feet sticking through the ends of my falling-apart sneakers, and I sewed patches over the holes in my clothes rather than buy new stuff. (In retrospect, these things weren’t worth the time and reputation costs. Basic clothes and shoes are cheap.) I drank 7 cups of tea a day to stave off hunger and would scrounge whatever half-eaten food I could find around my dorm. (Some of the “candy” I ate was actually psychedelic drugs, I think.)
By coincidence, a few months after I began this endeavor, I met Jason (Gaverick) Matheny. My mom was working as an assistant for him, though neither of my parents is an EA. He came over for dinner one night, and we talked about our shared interest in altruism. (The term “effective altruism” didn’t exist at the time.)
Jason has been a valuable mentor for me over the years. I had the altruism part down, but he’s helped me think a lot more about effectiveness. He eventually introduced me to 80K, and from there I connected with the rest of the EA community.
I wonder how sensitive these fundraisers are to the identity of the organization. I would rather fundraise for GiveWell or CEA than for deworming (obviously MIRI, FHI, etc. would be infeasible), but I imagine that would be a less popular choice for donations, since those organizations don’t read as “charity” in the same way.
I think it’s a good idea to summarize books like this, and I found it well-executed.