Patrick Brinich-Langlois
Patrick
This event isn’t happening (it was preponed to today, June 30; see here).
One thing I’d like to highlight: our per-hour pre-tax profits were closer to $150–200/hour than the $500/hour claimed in the post’s title. I think the main source of the discrepancy was that the author was significantly faster than us at finding and placing bets.
For my calculation, I factored in things such as trip planning and travel time (scaled by 50% because the trip included leisure activities), financial preparation, getting money out of accounts, taking screenshots of losses, filling out 2022 tax returns, filling out the spreadsheets where we tracked our bets, etc. I don’t know how many of these the author took into account in the $500/hour calculation.
Another factor in the lower per-hour figure was, as mentioned by Dmitriy, that the offers were worse.
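As a sketch of the accounting behind the hourly figure (every number below is hypothetical, chosen only to illustrate the method):

```python
# All figures hypothetical. Travel time counts at 50% because the trip
# included leisure activities, as described above.
profit = 6_000                    # pre-tax profit from the promos, in dollars
betting_hours = 20                # finding and placing bets, bookkeeping
overhead_hours = 10               # planning, banking, taxes, screenshots
travel_hours = 8 * 0.5            # scaled by 50%

rate = profit / (betting_hours + overhead_hours + travel_hours)
# With these inputs the rate lands in the $150-200/hour range; omitting
# the overhead and travel terms would push it toward $300/hour, which is
# the kind of gap that can explain the discrepancy with the post's title.
```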
Here’s an excerpt from an email I sent the post’s author:
Some reasons we spent so much time:
We tried to find bets that satisfied multiple criteria (frequent ones were positive EV, long odds, the bet’s being offered by many books [to be more confident in the odds], and acceptance of the full amount of the bet by the sportsbook). If we’d relaxed some of these standards, things would’ve gone more quickly. Also, we probably could’ve used parlays more to get long-odds bets (we avoided them early on).
I didn’t feel comfortable making decisions on my own, so I often waited to run things by other people.
My phone internet connection was super slow (I eventually gave up and used Wi-Fi, which was fine because other people were using their phones).
It took a while for us to all get the geolocation software working on our computers (it required changing a security setting in macOS). One of us couldn’t use certain websites on his work computer because they required this software to be installed, so he had to use his phone instead.
Two of us used obscure banks, so we had trouble depositing money into certain sportsbooks. The solution we came to was to have someone else send us money on PayPal, since the transfers were instantaneous and our PayPal balances could be used.
The same two of us had to pay in cash at a 7-Eleven to deposit money into BetRivers.
We did rather elaborate bookkeeping, partly because we were doing profit sharing, so we needed to record more information than we would’ve if we’d been doing it on our own.
People other than me spent a lot of time on arbitrage (though this also increased our earnings, so it’s unclear whether it increased or decreased our hourly rate).
We read and saved the terms and conditions, and sometimes asked customer service for clarification.
We sometimes had to contact customer service to get our free bets credited or to remove restrictions from our accounts.
PointsBet’s points-betting offer was confusing, and a couple of us spent a significant amount of time modeling it.
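The positive-EV criterion in the first item can be illustrated with a toy calculation. The helper names, odds, stake, and probability below are all hypothetical; this is only a sketch of the idea, not how we actually modeled bets:

```python
def implied_prob(decimal_odds):
    """Naive implied win probability from decimal odds (ignores the
    book's margin, which is why we cross-checked many books)."""
    return 1.0 / decimal_odds

def free_bet_ev(decimal_odds, stake, true_prob):
    """Expected value of a free bet: you keep the winnings but not the
    stake, so a win pays stake * (odds - 1) and a loss returns nothing."""
    return true_prob * stake * (decimal_odds - 1)

# Hypothetical example: a $100 free bet at decimal odds of 8.0, where
# consensus across books suggests a true win probability of ~13%.
ev = free_bet_ev(8.0, 100, 0.13)
# Long odds convert more of a free bet's face value into expected cash,
# which is why we sought long-odds (and sometimes parlay) bets.
```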
[Question] Would you like to run the EARadio podcast?
I emailed CEA with some questions about the LTFF and EAIF, and Michael Aird (MichaelA on the forum) responded about the EAIF. He said that I could post his email here. Some of the questions overlap with the contents of this AMA (among other things), but I included everything. My questions are formatted as quotes, and the unquoted passages below were written by Michael.
Here are some things I’ve heard about LTFF and EAIF (please correct any misapprehensions):
You can apply for a grant anytime, and a decision will be made within a few weeks.
Basically correct. Though some decisions take longer, mainly for unusually complicated, risky, and/or large grants, or grants where the applicant decides in response to our questions that they need to revisit their plans and get back to us later. And many decisions are faster.
The application process is meant to be low-effort, with the application requiring no more than a few hours’ work.
Basically correct, though bear in mind that that doesn’t necessarily include the time spent actually doing the planning. We basically just don’t want people to spend >2 hours on actually writing the application, but it’ll often make sense to spend >2 hours, sometimes much more than 2 hours, on actual planning.
The funds don’t put many resources into evaluation, which is ad hoc and focuses on the most-controversial grants—the goal is to decide whether to make more such grants in the future. (Question: how do you decide whether a controversial grant was successful?) [Author’s note: I was unclear here—I was asking about post-hoc evaluation, but Michael’s answer is about evaluating grant applications.]
These statements seem somewhat fuzzy so it’s hard to say if I’d agree. Here’s what I’d say:
My understanding is that we tend to spend something like 1 hour per $10k of grants made. (I haven’t actually checked this, but I’m pretty sure it’d be the right order of magnitude at least.)
When I joined EAIF, I was surprised by that and felt kind-of anxious or uncomfortable about it, but overall I do think it makes sense.
We tend to spend more time on grants that are larger, have at first glance higher upside potential plus higher downside risk, and/or are harder to evaluate for some reason (e.g., they’re in areas the fund managers are less familiar with, or the plan is pretty complex).
I don’t think I’d say that what grants we focus more time on is driven by deciding whether to make more grants of that type in the future.
The typical grant is small and one-off (more money requires a new application), and made to an individual. Grants are also made to organizations, and these might be a little bigger but still on the small side (probably not more than $300k).
I guess this is about right, but:
“small” is ambiguous. Some specific numbers: Grants I’ve been involved in evaluating have ranged (if I recall correctly) from ~$5k to ~$400k, and there are two ~$250k grants I recommended and that were made. People can definitely apply for larger grants, but often it’d make more sense for another funder to evaluate and fund those.
We do make quite a few grants to organizations.
You could compile info on individuals vs orgs and on grant sizes from the public payout reports.
Your specific questions:
How many grants come through channels other than people applying unbidden (e.g., referrals/nominations by third parties or active grantmaking by fund managers)? What’s the most common such channel?
I don’t have these numbers (possibly someone else does), but I’d fairly confidently guess that at least 10% of applicants whose applications are approved had, at an earlier point, had someone (whether a fund manager or not) specifically encourage them to apply.
I’m not sure your question carves up the space of possibilities in a useful way. For instance, many people seem to apply in response to fund managers publicly or semi-publicly encouraging people in general to apply, e.g., via Forum posts or posts in relevant Slack workspaces. Likewise, many people presumably apply after 80k advisors or community builders encourage them to. So it seems likely that some active promotion effort was involved in the vast majority of applications received, but that effort can vary a lot in terms of how targeted it is, who it’s from, etc.
The LTFF’s fund managers all have backgrounds in AI or CS. Is the process for evaluating grants in areas outside the managers’ areas of expertise any different?
I don’t know since I’m on the EAIF, but I’m also not sure this is quite the right question to ask. I don’t think it’s really like there’s a set of three different pre-specified processes that are engaged under different conditions; it’s more ad hoc than that. And there could be many AI/CS projects that are outside their area of expertise and many non-AI/CS projects inside their area of expertise (e.g., my understanding is that Oliver and Evan both have experience trying to do things like building research talent pipelines / infrastructure / mentorship structures, so they’d have some expertise relevant to projects focused on doing that for non-AI issues).
Another thing to note is that some guest fund managers earlier this year had other backgrounds.
I do think it can be problematic for all fund managers to have too narrow a range of areas of expertise and interest, and I think EA Funds sometimes arguably has that problem. But I also think this is mostly an unfortunate result of talent constraints. The guest manager system has helped mitigate it, and the existing permanent fund managers’ areas of expertise aren’t super overlapping.
What’s the role of the advisers to the LTFF and EAIF listed on the website? Do managers commonly discuss grants with people not listed on the website (e.g., experts at other nonprofits)?
Advisors other than Nicole Ross are only involved in maybe something like 10% of grant evaluations, and usually just for quite quick input. They’re also sometimes involved in higher-level strategic questions, and sometimes they proactively suggest things (e.g., maybe we should reach out to X to ask if they want to apply to EA Funds or to ask if a larger grant would be useful since they seem to be relying on volunteers).
Nicole Ross checks recommended grants for possible issues of various kinds before the grant is actually made. I think it’s pretty rare that this actually changes a grant decision, but sometimes it results in further discussion with applicants that helps double-check or mitigate the potential issues.
Fund managers very often discuss grants with specific people not listed on the website. I’d guess that an average of ~3 external people are asked for input on each grant that ends up being approved. (Sometimes 0, often >5.) This is done in an ad hoc way based on the key uncertainties about that particular grant. We also explicitly ask that these consulted people keep the fact the applicant applied confidential.
What’s the process for a grant’s being approved or rejected? E.g., can a primary grant evaluator unilaterally reject a grant? Do grants have to be unanimously approved by all managers? Do all managers have a say in all grants?
By default, all fund managers on a given Fund get 5 days in which to vote after a grant is put up for vote by the primary evaluator. Then the final decision is based on whether the average of the votes exceeds a particular threshold. On the EAIF, this average is a weighted average, with the primary evaluator having a weight of 2 by default and everyone else having a weight of 1 by default.
Usually only ~2 people actually give a vote, in my experience.
Usually the final decision is the one the primary evaluator recommended.
Sometimes the voting period is shortened if a grant is time-sensitive.
Sometimes a given fund manager recuses themselves due to possible conflicts of interest, in which case they don’t vote and may also be removed from the doc with notes and such.
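The default voting rule above can be sketched as follows. Only the weights (2 for the primary evaluator, 1 for everyone else) come from the description; the vote scale and a threshold of zero are hypothetical stand-ins:

```python
def decide(votes, primary, threshold=0.0):
    """Approve a grant if the weighted average of votes exceeds the
    threshold. `votes` maps fund-manager name to vote; the primary
    evaluator's vote is weighted 2 and everyone else's 1 (the defaults).
    Recused managers simply don't appear in `votes`."""
    total = 0.0
    weighted = 0.0
    for manager, vote in votes.items():
        w = 2.0 if manager == primary else 1.0
        total += w
        weighted += w * vote
    return weighted / total > threshold

# Hypothetical scale from -5 (strong reject) to +5 (strong approve).
# Here the primary's vote dominates: (2*3 + 1*(-1)) / 3 > 0, so approved.
decide({"alice": 3, "bob": -1}, primary="alice")
```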
What are the motivations for having guest managers—increased capacity, identifying or training promising grantmakers, diversity of viewpoints?
This is discussed in some recent AMAs, if I recall correctly.
We also now have an assistant fund manager on the EAIF, helping Buck with his evaluations. I personally think this is a great move, for all 3 reasons you mentioned, just as I think the guest fund manager role was a good thing to have created.
I know that sometimes you give feedback to unsuccessful grant recipients. What does this feedback look like—e.g., is it a 3-sentence email, or an arbitrarily long phone conversation with the primary evaluator?
Basically, either, or anything in between, though I think “arbitrarily long” seems unlikely—I’d guess it’s rarely or never been a >1 hour phone call.
What processes do you have to learn from mistakes or sub-optimal decisions?
We get reports from grantees on their progress, etc., though I don’t think we actually use these much to improve over time.
I personally make forecasts relevant to most grants I recommend before the grants are made, and I plan to look back at them later to see how calibrated I was and what I can learn from that. I think some other people do this as well, but I think most don’t, and unfortunately I’ve come to feel that that’s reasonable given time constraints. (I think this is a shame, and that more capacity such that we could do that and various other things would be good, but there’s a severe talent constraint.)
There are various ad hoc and/or individual-level things.
There may also be things Jonas, fund chairs, and/or permanent fund managers do that I’m not aware of.
We’ve discussed whether and how to better evaluate our performance and improve over time, what we’d want to learn, etc. I think this is something people will continue to think more about. I personally expect there’s more we should be doing, but it’s not super obvious that that’s the case (there are also many other good things we could do if we were willing to spend extra hours on something new), nor precisely what it’d be best to do.
So I believe we’re simply not judging more recent art works by the same standards, resulting in a huge bias towards older works.
Why is it wrong to credit past art for innovations that have since become commonplace? If a musician’s innovations became widespread, I would count that as evidence of the musician’s skill. Similarly, Euclid was a big deal even though there are millions of people who know more math today than he did.
Beethoven is only noteworthy because his works are a cultural meme at this point—he was a great musician for his time, sure, but right now there’s probably tens of thousands of musicians who could make music of the same caliber straight on their laptops. Today’s Beethoven publishes his amazing tracks on SoundCloud and toils in obscurity.
This sounds like an extreme overstatement, at least if applied to classical music. Some modern classical music is pretty good, and better than Beethoven’s less-acclaimed works. And the best of it is probably on par with Beethoven’s greatest hits. But much of it is unmemorable—premiered, then mercifully forgotten. The catalog of the Boston Modern Orchestra Project is representative of modern classical orchestral music, and I think most of it falls far short of Beethoven’s best symphonies. The concertgoing public strongly prefers the old stuff, to the consternation of adventurous conductors.
One reason it might be a reductio ad absurdum is that it suggests that in an election in which supporters of one side were rational (and thus would not vote, since each of their votes would have a minuscule chance of mattering) and the others irrational (and would vote, undeterred by the small chance of their vote mattering), the irrational side would prevail.
If this is the claim that John G. Halstead is referring to, I regard it as a throwaway remark (it’s only one sentence plus a citation):
For instance, a simple threshold or plausibility assessment could protect the field’s resources and attention from being directed towards highly improbable or fictional events.
I would’ve found it helpful if the post included a definition of TUA (as well as saying what it stands for). Here’s a relevant excerpt from the paper:
The TUA [techno-utopian approach] is a cluster of ideas which make up the original paradigm within which the field of ERS [existential-risk studies] was founded. We understand it to be primarily based on three main pillars of belief: transhumanism, total utilitarianism and strong longtermism. More precisely: (1) the belief that a maximally technologically developed future could contain (and is defined in terms of) enormous quantities of utilitarian intrinsic value, particularly due to more fulfilling posthuman modes of living; (2) the failure to fully realise or have capacity to realise this potential value would constitute an existential catastrophe; and, (3) we have an overwhelming moral obligation to ensure that such value is realised by avoiding an existential catastrophe, including through exceptional actions.
Re patient philanthropy funds: Spending money on research rather than giving money to a fund does seem more focused and efficient. I think there are limits to how much progress you can make with research (assuming that research hasn’t ruled the idea out), so it does make sense to try creating such a fund at some point. Some issues would become apparent with even a toy fund (one with a minimal amount of capital produced as an exercise). A real fund that has millions of dollars would be a better test of the idea, but whether contributing to such a fund is a good use of money is less clear to me now.
In general, it kind of seems like the “point” of the lottery is to do something other than allocate to a capital allocator. The lottery is “meant” to minimise work on selecting a charity to give to, but if you’re happy to give that work to another allocator I feel like it makes less sense?
When I entered the lottery, I hadn’t given much thought to what I’d do if I won—I was convinced by the argument that giving to the lottery dominated giving to the LTFF (for example), since if I won the lottery I could just decide to give the money to the LTFF. I think you’re right that it makes less sense to enter the donor lottery if you think you’ll end up giving the money to a regranting organization, but I think it still makes some sense.
Lottery again! You could sponsor CEA to do a $1m lottery. If you thought it was worth it for $500k, surely it would be worth it for $1m!
Someone else suggested that to me a while ago, but I’m not sure how much it would change things—if I don’t have interesting ideas about what to do with $500k, I probably wouldn’t have interesting ideas about what to do with $1m. There would also be some overhead to setting up another lottery.
Be quite experimental, give largish grants to multiple young organisations, see how they do, and then direct your ordinary giving toward them in the future. This money can buy access to more organisations, and setup relationships for your future giving.
Thanks for suggesting that—it seems like an idea worth considering for at least a portion of the money.
What would you do if you had half a million dollars?
Thanks! Yes, they do.
velutvulpes, could you update the RSS link to point to https://feeds.buzzsprout.com/1755269.rss? I’m working on migrating to a new podcast host (Buzzsprout). The old feed currently redirects there, but my understanding is that it will stop redirecting after I complete the migration.
This shouldn’t be your first EA podcast. That’s not so much because the content is difficult, but because it has relatively low production value (it’s just EA conference talks in podcast format). The 80,000 Hours Podcast, Hear This Idea, and The FLI Podcast are more entertaining and polished while still being similarly informative, and I’d recommend listening to those first.
I will primarily focus on The case for strong longtermism, listed as “draft status” on both Greaves’s and MacAskill’s personal websites as of November 23rd, 2020. It has generated quite a lot of conversation within the effective altruism (EA) community despite its status, including multiple episodes of the 80,000 Hours Podcast (one, two, three), a dedicated multi-million-dollar fund listed on the EA website, numerous blog posts, and an active forum discussion.
“The Case for Strong Longtermism” is subtitled “GPI Working Paper No. 7-2019,” which leads me to believe that it was originally published in 2019. Many of the things you listed (two of the podcast episodes, the fund, and several of the blog and forum posts) are from before 2019. My impression is that the paper (which I haven’t read) is more a formalization and extension of various existing ideas than a totally new direction for effective altruism.
The word “longtermism” is new, which may contribute to the impression that the ideas it describes are too. This is true in some cases, but many people involved with effective altruism have long been concerned about the very long run.
On what principle is it that, when we see nothing but improvement behind us, we are to expect nothing but deterioration before us?
—Thomas Babington Macaulay
The good of any one individual is of no more importance, from the point of view (if I may so say) of the Universe, than the good of any other; unless, that is, there are special grounds for believing that more good is likely to be realized in the one case than in the other.
—Henry Sidgwick
Pain is always new to the sufferer, but loses its originality for those around him.
—Alphonse Daudet
A human being is a part of the whole, called by us “Universe,” a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest—a kind of optical delusion of his consciousness. The striving to free oneself from this delusion is the one issue of true religion. Not to nourish the delusion but to try to overcome it is the way to reach the attainable measure of peace of mind.
—Albert Einstein
I am he as you are he as you are me and we are all together.
—John Lennon and Paul McCartney
In addition to lowering the cost for readers, buying the rights to a book could allow certain improvements to be made.
The paperback version of Reasons and Persons is poorly typeset (the text is small and cramped) and unevenly printed (some parts are too light; others too dark). The form factor is close to that of a mass market paperback (short, narrow, and fat). The cover photo is bleak and blurry. These factors combine to make the book seem dated and unappealing.
On What Matters, a book with the same author and publisher, is a beautiful volume, and an example of what’s possible if someone puts some effort into the process and uses modern technologies.
Living High and Letting Die influenced me more than any other book. Unfortunately, it seems not to have been edited. Here’s a passage from the first page:
Now, you can write that address on an envelope well prepared for mailing. And, in it, you can place a $100 check made out to the U.S. Committee for UNICEF along with a note that’s easy to write.
I count two odd-sounding filler phrases (“well prepared for mailing” and “that’s easy to write”), one clearly superfluous comma (following “And” in the second sentence), and a bizarre choice to italicize the name of an organization. The whole book reads like it was dictated but not read. Another problem is that it gives unrealistically low estimates of the cost of saving a life.
Changing the text of a book might not always be feasible (you’d need the author’s buy-in, and many authors wouldn’t want to spend time helping to re-edit an old book), but it’s something worth exploring.
I’ve looked a bit at DAFs but the fees look quite high and I wonder if I could assemble something better myself.
By “quite high,” do you mean 0.6% per annum in addition to the mutual-fund expense ratio? That’s the fee charged by Vanguard, Fidelity, and Charles Schwab on the first $500k. To me, the benefits a DAF offers seem worth the price:
immediate tax-deductibility
untaxed dividends and interest
ease of granting (you don’t have to coordinate with the recipient to transfer appreciated assets)
pre-commitment (the money must go to a 501(c)(3) charity)
For people looking to invest millions of dollars, 0.6% would seem excessive. But larger accounts have lower fees. Here are the fees for Vanguard’s “Select” accounts:
| Tier | Annual fee |
| --- | --- |
| First $500K | 0.60% |
| Next $500K | 0.30% |
| Next $29M | 0.13% |
| Next $70M | 0.05% |
So a $100m account would cost $77,200 in DAF fees, plus the mutual-fund fee. That seems like a steal to me (although high-rollers might prefer something with more-flexible investment options).
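The $77,200 figure comes from applying each tier’s rate to the slice of the balance that falls within that tier, as a marginal tax schedule would. A sketch, with the tier boundaries taken from the table above:

```python
# Tiers from the Vanguard "Select" fee table above: (tier size, annual rate).
TIERS = [
    (500_000, 0.0060),     # first $500K at 0.60%
    (500_000, 0.0030),     # next $500K at 0.30%
    (29_000_000, 0.0013),  # next $29M at 0.13%
    (70_000_000, 0.0005),  # next $70M at 0.05%
]

def annual_daf_fee(balance):
    """Administrative fee only; excludes the underlying funds' expense ratios."""
    fee = 0.0
    for size, rate in TIERS:
        portion = min(balance, size)
        fee += portion * rate
        balance -= portion
        if balance <= 0:
            break
    return fee

annual_daf_fee(100_000_000)  # 3,000 + 1,500 + 37,700 + 35,000 = 77,200
```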
The main reason I can think of not to use a DAF is that you think that there’s a high chance you’ll want to do something with the money other than donate it to a 501(c)(3).
If you haven’t read the article (as I hadn’t, since I came by a direct link to this comment), you should know that there’s exactly one sentence about algorithmic racial discrimination in the entire article. I was surprised that a single sentence (and one rather tangential to the article) generated this much discussion.
Whatever you think about the claim, it doesn’t seem like a sufficient reason not to recommend the article as an introduction to the subject.
XR: Extinction Rebellion, JSO: Just Stop Oil
(I wasn’t familiar with these abbreviations and it took me a minute to figure them out.)