If I had to guess, I would guess FLI, given their ability to at least theoretically use the money for grant-making. Though after Elon Musk’s $10 million donation, this cause area seems to be short on room for more funding.
TopherHallquist
Thanks for writing this, Michael. More people should write up documents like these. I’ve been thinking of doing something similar, but haven’t found the time yet.
I realized reading this that I haven’t thought much about REG. It sounds like they do good things, but I’m a bit skeptical re: their ability to make good use of the marginal donation they get. I don’t think a small budget, by itself, is strong evidence that they could make good use of more money. Can you talk more about what convinced you that they’re a good giving opportunity on the margin? (I’m thinking out loud here, don’t mean this paragraph to be a criticism.)
Re: ACE’s recommended charities. I know you know I think this, but I think it’s better for the health of ACE if their supporters divide their money between ACE and its recommended charities, even if the evidence for its recommended charities isn’t currently as strong as I’d like. But I admit this is based on a fuzzy heuristic, not a knock-down argument.
Re: MIRI. Setting aside what I think of Yudkowsky, I think you may be overlooking the fact that “competence” is relative to what you’re trying to accomplish. Luke Muehlhauser accomplished a lot in terms of getting MIRI to follow nonprofit best practices, and from what I’ve read of his writing, I expect he’ll do very well in his new role as an analyst for GiveWell. But there’s a huge gulf between being competent in that sense, and being able to do (or supervise other people doing) groundbreaking math and CS research.
Nate Soares seems as smart as you’d expect a former Google engineer to be, but would I expect him to do anything really groundbreaking? No. Would I expect even the couple of actual PhDs MIRI hired recently to do anything really groundbreaking? They might, but I don’t see why you’d think it likely.
In a way, it was easier to make a case for MIRI back when they did a lot of advocacy work. Now that they’re billing themselves as a research institute, I think they’ve set a much higher bar for themselves, and when it comes to doing research (as opposed to advocacy) they’ve got much less of a track record to go on.
I was 12 when those demonstrations happened, and I’m a little fuzzy on the agenda of the protesters. I’m currently finishing up Stiglitz’s Globalization and Its Discontents, which, while critical of the IMF, also complains about anti-globalization activists lobbying for more protectionist measures on the part of developed countries, against goods produced in developing countries. Do you have any idea if that applies to the Seattle protests?
Question about CGD: are they optimizing for making their proposals sound boring even though in fact they ideally want huge changes from the status quo? Or do they really just think we need tweaks to the status quo?
(This is based on a very superficial glance at their site, was already planning on trying to read more of their materials.)
Rich-country policy changes that could greatly benefit poor countries
Hmmm… let me put it this way: I suspect the right approach to dealing with the current situation in Ukraine is to back off there, while taking a hard line re: willingness to defend Baltic NATO states like Estonia. Truly sharp red lines are established by things like the NATO treaty, not [hawkish politician X] shooting his mouth off.
I know GiveWell is aware of these articles, and has looked more into nukes. Probably more conversation notes will be coming out.
This is good to know.
Why not support the existing organizations, which have people with a lifetime of experience, scholarly background, and political connections?
Do you have any specific organizations in mind? Existing anti-nuclear weapons orgs seem focused on disarmament–which seems extremely unlikely as long as Putin (or someone like him) is in power in Russia. And existing US anti-war orgs seem tragically ineffective. But maybe that’s because it’s just too hard to have an effective anti-war organization in the current US political context.
Partly, I was thinking of an org focused on achievable, narrowly defined actions: one that would fight, say, a bill in Congress to provide arms to Ukraine or authorize “limited” military intervention in eastern Europe, or raise a fuss when presidential candidates go a bit over the line in bellicose rhetoric (disincentivizing such rhetoric). Maybe there are already groups that do things like that–I admit I’ve only recently started trying to understand this area better.
Have we underestimated the risk of a NATO-Russia nuclear war? Can we do anything about it?
Crap, thanks. Forgot the forum uses Markdown rather than HTML.
I’ve been using my nominally-an-atheism-blog on Patheos for a lot of EA-related blogging, but this is sub-optimal given that lots of people find the ads and commenting system extremely annoying. My first post on the new blog is titled “The case for donating to animal rights orgs.” I’m hoping that with a non-awful commenting system, we’ll get lots of good discussions there.
Seconded. The post seems to imply he’s setting up a non-profit for this purpose, but it would be nice to have details.
Is there any way this is a violation of the Amazon affiliates agreement?
I’ve come to think protein is somewhat overrated as a concern for vegans. Unless you’re trying to be a bodybuilder, I think it’s pretty easy to get enough protein through the sources mentioned in the OP (cereals and legumes are complementary in terms of their amino acid content).
Yes, hence “or foods fortified with them.” I don’t particularly like soymilk, but sometimes drink calcium-fortified orange juice.
How a lazy eater went vegan
Somewhat echoing atucker: the moral ideas behind effective altruism have been around for a long time, but are also quite contrarian and have never been widely embraced. But the moral ideas—even in a form pretty damn close to their current one, like Peter Singer’s writings in the 70s—aren’t enough to give you EA as we know it. You also need a fair amount of expertise to come up with a strong game plan for putting them into practice. Singer couldn’t have founded GiveWell, for example.
(One odd thing: as far as I know, Singer has never been involved in the nuclear disarmament movement. That would’ve seemed like the obvious existential risk to care about in the 70s or 80s.)
Nope. I bought Google, IBM, Microsoft, and a South American agribusiness company, all in an attempt to bet on guesses about long-term trends (information technology and maybe natural resources being really important). I’m unsure if this is a good idea—arguably I should focus on maximizing near-term expected returns—but it’s something I’m doing now. For reasons Paul gave, it’s at least no worse than investing in an index, but maybe I should have used the money for a larger Angel investment, I don’t know.
As someone who received a Large Sum of Money (as defined in this post) last year, here’s what I actually did with it:
- spent some of it on my living expenses while I acquired skills to get a new job, then looked for a job
- invested it in Vanguard index funds
- donated a small portion of it
- spent more of it on living expenses when I decided to quit my job and look for a new one
- and most recently did a calculation regarding what I thought I could afford to lose, and invested that much money in individual stocks, plus some in a startup founded by a member of the rationalist community.
I want to emphasize that the last step here is something I cannot recommend anyone do with money they can’t afford to lose. But if your reason for saving/investing is to give later, being willing to make riskier investments makes sense, just as riskier careers do.
P.S. Perhaps I should be explicit that I find this post a bit odd, because the definition of Large Sum of Money used in this post doesn’t seem all that large to me. For Bay Area techies, it could easily mean “I am a Google employee and my RSUs just vested.”
They claim to be working on areas like game theory, decision theory, and mathematical logic, which are all well-developed fields of study. I see no reason to think those fields have lots of low-hanging fruit that would allow average researchers to make huge breakthroughs. Sure, they have a new angle on those fields, but does a new angle really overcome their lack of an impressive research track record?
Do they have a stronger grasp of the technical challenges? They’re certainly opinionated about what it will take to make AI safe, but their (public) justifications for those opinions look pretty flimsy.