Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
Robi Rahman
Additional suggestion: don’t just have a photo of your vaccine card on your phone; physically bring it or scan and print a copy.
I noticed something at EAG London which I want to promote to someone’s conscious attention. Almost no one at the conference was overweight, even though the attendees were mostly from countries with overweight and obesity rates ranging from 50-80% and 20-40% respectively. I estimate that I interacted with 100 people, of whom 2 were overweight. Here are some possible explanations; if the last one is true, it is potentially very concerning:
1. effective altruism is most common among young people, who have lower rates of obesity than the general population
2. effective altruism is correlated with veganism, which leads to generally healthy eating, which leads to lower rates of diseases including obesity
3. effective altruists have really good executive function, which helps resist the temptation of junk food
4. selection effects: something about effective altruism doesn’t appeal to overweight people

It’s clearly bad that EA has low representation of religious adherents and underprivileged minorities. Without getting into the issue of missing out on diverse perspectives, it’s also directly harmful in that it limits our talent and donor pools. Churches receive over $50 billion in donations each year in the US alone, an amount that dwarfs annual outlays to all effective causes. I think this topic has been covered on the forum before from the religion and ethnicity angles, but I haven’t seen it for other types of demographics.
If we’re somehow limiting participation to the 3/10ths of the population with a BMI under 25, are we needlessly keeping out the 7/10ths of people who might otherwise work to effectively improve the world?
> At face value, [an EA organization] seems great. But at the meta-level, I still have to ask, if [organization] is a good use of funds, why doesn’t OpenPhil just fund it?
Open Phil doesn’t fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.
This is highly implausible. First of all, if it’s true, it implies that instead of funding things, they should just do fundraising and sit around on their piles of cash until they can discover these opportunities.
But it also implies they have (in my opinion, excessively) high confidence that the hinge-of-history and astronomical-waste arguments are wrong, and that transformative AI is farther away than most forecasters believe. If someone is going to invent AGI in 2060, we’re really limited in the amount of time available to alter the probabilities that it goes well vs badly for humanity.
When you’re working on global poverty, perhaps you’d want to hold off on donations if your investments are growing by 7% per year while GDP of the poorest countries is only growing by 2%, because you could have something like 5% more impact by giving 107 bednets next year instead of 100 bednets today.
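As a sketch of that arithmetic, using the illustrative numbers from the comment (and assuming bednet prices stay flat while impact per bednet falls in line with recipient income growth):

```python
# Illustrative numbers only, not real cost-effectiveness data.
invest_growth = 1.07   # annual return on investments
income_growth = 1.02   # income growth in the poorest countries

bednets_today = 100
bednets_next_year = bednets_today * invest_growth   # ~107 bednets, if prices stay flat

# If impact per bednet declines as recipients get richer,
# the net gain from waiting a year is roughly:
relative_impact = invest_growth / income_growth     # ~1.05, i.e. ~5% more impact
```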
For x-risks this seems totally implausible. What’s the justification for waiting? AGI alignment does not become 10x more tractable over the span of a few years. Private sector AI R&D has been growing by 27% per year since 2015, and I really don’t think alignment progress has outpaced that. If time until AGI is limited and short then we’re actively falling behind. I don’t think their investments or effectiveness are increasing fast enough for this explanation to make sense.
Sorry, I didn’t mean to imply that biorisk does or doesn’t have “fast timelines” in the same sense as some AI forecasts. I was responding to the point about “if [EA organization] is a good use of funds, why doesn’t OpenPhil fund it?” being answered with the proposition that OpenPhil is not funding much stuff in the present (disbursing 1% of their assets per year, a really small rate even if you are highly patient) because they think they will find better things to fund in the future. That seems like a wrong explanation.
If the financial capital is $46B and the population is 10k, the average person’s career capital is worth ~$5M of direct impact (as opposed to the money they’ll donate)? I have a wide confidence interval but that seems reasonable. I’m curious to see how many people currently going into EA jobs will still be working in them 30 years later.
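The back-of-the-envelope division behind that figure (using the $46B and 10k numbers above as given):

```python
financial_capital = 46e9   # $46B, figure from the comment above
population = 10_000        # assumed size of the movement

# Rounds to ~$5M per person of implied career capital.
per_person_career_capital = financial_capital / population   # $4.6M
```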
I thought the person-affecting view only applies to acts, not states of the world. I don’t hold the PAV but my impression was that someone who does would agree that creating an additional happy person in World A is morally neutral, but wouldn’t necessarily say that Worlds B and C aren’t better than World A.
I’ve heard this for e.g. server uptime as well.
I disagree with your point about participants not being cautious enough about covid. Last I heard (someone correct me if this was later updated), four attendees tested positive during or after the conference, out of about 900 participants. That is an impressively low rate, and indicates that the safety measures worked well! I want to commend the organizers for doing a great job addressing covid issues: they had lots of rapid tests available for us, gave lots of advice about travel safety, and didn’t do anything excessive or unwarranted by the risk level, like cancelling the conference or social-distancing the discussions.
Good point, thanks! I added a note in the FAQ.
Yes, that’s correct. Optimal usage of that offer is to deposit $1,500, use the free bet on an unlikely event that resolves very soon, and then withdraw your deposit quickly, plus winnings if any.
I’d guess a couple more weeks. The Caesars one was just reduced from $3000 to $1500. They were most generous right at the beginning of legal online betting but are decreasing the incentives as everyone who will end up signing up has done so.
Oh, good catch, I will edit that. What is the EV from those offers, then? It seems to still be nearly +$1000 and nearly risk-free with the following approach: bet $1000 on something very unlikely (the EV of the payout will be $1000, but structured as something like a 1% chance of $100,000). If you lose, bet the $1000 of site credit on something extremely likely, so you end up with $1000 cash. Then withdraw that.
For BetMGM, repeat the same approach but do 5x very safe bets of $200 with the site credits.
That still works for $1000 expected profit, right?
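A rough EV calculation for the strategy described above, under two simplifying assumptions: the bets are at fair odds (no vig), and the refunded site credit can be converted back to cash at close to face value via a near-certain bet.

```python
# Rough EV of the longshot-then-recover strategy; illustrative, not financial advice.
p = 0.01      # assumed probability the longshot cash bet wins
stake = 1000

ev_if_win = p * stake * (1 / p)    # fair odds: ~$100,000 payout with 1% probability
ev_if_lose = (1 - p) * stake       # 99% of the time: credit recovered as ~$1,000 cash
expected_profit = ev_if_win + ev_if_lose - stake   # ~ +$990, nearly risk-free
```

In practice the vig and any conversion loss on the site credit shave this down somewhat, which is why the comment says "nearly" +$1000.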
This pays far too little to the winners to make it worthwhile to have any money in this. It wouldn’t have much more liquidity than a moneyless prediction book.
Correct. They might ban you and confiscate your money if you use a VPN to obfuscate your location.
Physical presence is enough. They have geolocation applets on their site and prohibit you from placing bets unless you are in an eligible location. I live in Massachusetts but went to NYC for about a day. I used my passport for identification because I didn’t bring my driving license.
Yes, this is something I was thinking about earlier. I was in an ideal position to take advantage of the offers without getting hooked because: I hate sports, have good background knowledge of probability, don’t like gambling, and don’t live in NY and thus couldn’t make any more bets even if I wanted to. If I’m the one taking these offers rather than a NY resident who might end up with a gambling problem, that’s a very good social outcome in addition to the donation directed to charity.
Why is it optimal to size the hedge bet such that you get the same payout for either outcome? Why does that have greater EV than if the bets are skewed in either direction?
I used the maximum EV you can get without risking any of your own money. If you want to have less exposure to outcomes, you can choose lower-EV bets with higher likelihood of payouts, but if you’re doing this for charity, it doesn’t really make sense to do that.
For example, suppose you have a $1000 free bet. It doesn’t pay out the principal if you win, just returns payouts for whatever you bet on. You can bet on an outcome that is 91% likely, in which case you have a 91% chance of winning $100 and a 9% chance of winning nothing, for an EV of $91. Or you could bet on an outcome that is 1% likely, in which case you have a 1% chance of winning $100000 and a 99% chance of winning nothing, for an EV of $1000. If you’re donating your winnings to charity regardless of the outcome, you should do the riskier bet, but if you have sharply declining marginal utility of money, you might want the safer bet with lower average payout.
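The trade-off in that example reduces to a simple formula: at fair odds, a free bet (stake not returned) has an EV of stake × (1 − p), so longer odds mean higher EV. A minimal sketch, assuming exactly fair odds (the comment's round payout numbers imply odds slightly better than fair):

```python
def free_bet_ev(stake, p):
    """EV of a free bet at fair decimal odds 1/p.

    The stake is not returned on a win, so the payout is profit only.
    Simplifies algebraically to stake * (1 - p): EV rises as p falls.
    """
    payout_if_win = stake * (1 / p - 1)   # profit-only payout at fair odds
    return p * payout_if_win

free_bet_ev(1000, 0.91)   # safe favorite: EV ~$90
free_bet_ev(1000, 0.01)   # longshot: EV ~$990
```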
It is not negative EV. You are either misunderstanding the offer or someone’s example.
Thanks for the writeup! I’m following this process but going to the UK a few days earlier, so I’ll try this out and provide results before you leave.
I ordered a 2-day covid test and received a booking reference number. My flight arrives in London on Friday, so tomorrow morning I will fill out the passenger locator form.
Edit, 2021-10-20: Submitted all my info to the UK gov website and got a passenger locator form. I’ll update tomorrow when boarding the plane.
2021-10-21: will be departing from the US for the UK on Thursday evening.
2021-10-22: will be arriving in London on Friday morning.