For every decision I’ve made, there’s a version where the other choice was made.
Is that actually something the many-worlds view implies? It seems like you’re conflating “made a choice” with “quantum split”?
(I don’t know any of the relevant physics.)
One group I’m especially interested in is people who were active in EA, took the GWWC pledge, and then drifted away (eg). This is a group that likely mostly didn’t take the EA Survey. I would expect that after accounting for this the actual fraction of people current on their pledges would be *much* lower.
Since we don’t know the fraction of people keeping their pledge to even the nearest 10%, the survey I would find most useful would be a smallish random sample. Pick 25 GWWC members at random, and follow up with them. Write personalized handwritten letters, place a phone call, or get a friend to contact them. This should give very low non-response bias, and also good qualitative data.
Other people being misled is how I read “Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place.”
I don’t think the post is correct in concluding that the current marginal cost-per-life-saved estimates are wrong. Annual malaria deaths are around 450k, and if you gave the Against Malaria Foundation $5k * 450k ($2.3B) they would not be able to make sure no one died from malaria in 2020, but that still wouldn’t be much evidence that $5k is too low an estimate for the marginal cost. It just means that AMF would have lots of difficulty scaling up so much, that some deaths can’t be prevented by distributing nets, that some places are harder to work in, etc.
It does mean that big funders have seen the current cost-per-life-saved numbers and decided not to give those organizations all the money they’d be able to use at that cost-effectiveness. But there are lots of reasons other than what Ben gives for why you might decide to do that, including:
You have multiple things you care about and are following a strategy of funding each of them some. For example, OpenPhil has also funded animal charities and existential risk reduction.
You don’t want a dynamic where you’re responsible for the vast majority of a supposedly independent organization’s funding.
You think better giving opportunities may become available in the future and want to have funds if that happens.
I agree the distribution would be interesting! But it depends how many such opportunities there might be, no? What about:
“Imagine that over time the low hanging fruit is picked and further opportunities for charitable giving get progressively more expensive in terms of cost per life saved equivalents (CPLSE). At what CPLSE, in dollars, would you no longer donate?”
I tried experience sampling myself for about a year and a half (intro, conclusion) and it made me much more skeptical of the system. I’m just not that sure how happy I am at any given point:
When I first started rating my happiness on a 1-10 scale I didn’t feel like I was very good at it. At the time I thought I might get better with practice, but I think I’m actually getting worse at it. Instead of genuinely asking “how do I feel right now?” it’s hard not to just think “in past situations like this I’ve put down ‘6’ so I should put down ‘6’ now”.
I don’t have my phone ping me during the night, because I don’t want it to wake me up. Before having a kid this worked well: I’d plug in my phone, which turns off pings, promptly fall asleep, wake up in the morning, and unplug my phone. Now, though, my sleep is generally interrupted several times a night. Time spent waiting to see if the baby falls back asleep on her own, soothing her back to sleep if she doesn’t, or lying awake at 4am because it’s hard to fall back asleep when you’ve had 7hr and just spent an hour walking around bouncing the baby: none of that time gets sampled. On the whole, these experiences are much less enjoyable than my average; if the baby started sleeping through the night so that none of these were needed anymore, I wouldn’t see that as a loss at all. Which means my data is biased upward. I’m curious how happiness sampling studies have handled this; people with insomnia would be in a similar situation.
I agree that DALY/QALY measurements aren’t great either, though.
I think the internet shouldn’t run on ads. Making people pay for content ensures that the internet is providing real value rather than just clickbaiting.
Before the internet you still had tabloids with shocking claims on the cover; after you bought the paper and read it, you realized the claims were overblown. If we moved away from ads, the specific case of “you pay, and afterwards you realize you were baited” would still exist.
the dependence on advertising creates controversies where corporations compel content hosts to engage in dubious censorship.
The role of middlemen like Google diminishes this substantially. Since the advertisers and publishers aren’t talking directly to each other, we end up with censorship only on the sort of thing that advertisers generally agree on: things like “adult or mature, copyrighted, violent, or hateful content”—AdSense policies: a beginner’s guide
Yes in theory people could always create and use paid websites, but there is too much inertia, both economically (network effects) and socially (people now feel very entitled to the Internet).
I’m not convinced this isn’t just “people don’t want to have to pay for things, and mostly don’t mind ads that much”. Newspapers, magazines, and cable TV all cost money and have ads. Analog radio sticks around on an ad-funded basis, and people keep listening because it’s incredibly low friction.
The government can always shift tax and welfare policy to account for the additional financial burden on low income people.
Ok, but in practice the government mostly doesn’t do this. Figuring out how to get it to do this would open up a *ton* of valuable policies, but we also need to make reasonable choices in the present.
I’ve helped a few people negotiate salaries at tech companies, and my experience has been that people always bring me in too late. You want to have multiple active offers at the same time so you can get them to bid against each other. For example, when I came back to Google the sequence was:
Google made me an offer
Facebook beat Google’s offer
Amazon declined to match either offer
Google beat Facebook’s offer
Facebook beat Google’s offer
Google matched Facebook’s offer
The ideal for you is lots of back and forth, which is the opposite of what they want. They want to cut it short and will say things like “You’re asking for a lot, but I think I might be able to get it for you if I talk to my boss. If we can do $X can you confirm you’ll accept it?” You want to be positive enough that they’ll come back with an offer of $X, but not so positive that you have no negotiating room left if they accept it.
These steps, to my knowledge, are completely unprecedented for CEA.
I think CEA may have done something similar with Gleb, though for very different reasons: https://forum.effectivealtruism.org/posts/fn7bo8sYEHS3RPKQG/concerns-with-intentional-insights
(Peter has been one of several people continuing to argue “earning to give is undervalued, most orgs could still do useful things with more funding”.)
Jeff’s fundraiser for Google...
The post has:
For the past few years, Jeff Kaufman has led Google Cambridge’s EAs in successfully lobbying to direct that money toward GiveWell-recommended charities. At between a quarter-million and a half-million dollars each year, this may be the largest fundraising event for GiveWell charities in the world.
This is worded correctly but is a bit hard to interpret: I don’t organize the fundraiser; I help organize the EA participation in it. Overall it looks like:
Each year, for the week of Giving Tuesday, there’s a company-wide system of fundraising for charities.
I coordinate EAs across the company, helping them find other EAs with compatible interests in their location/business unit, and send out reminders about deadlines.
In the Cambridge office we have a bake-off where employees bake, sponsors put in some amount per baked good, other employees donate in order to taste them, and another set of sponsors matches these donations. The more you donate, the more votes you get. This is the fundraiser the post talks about.
The bake-off organizers are people who think highly of GiveWell, partly because of the advocacy of Boston EAs, but who I don’t think identify as EAs themselves. They make the decision about what charities the bake-off should feature, and have chosen GiveWell top charities for the past several years.
The bake-off is built around matching and sponsorship, especially the matching of the donations people make to eat and vote. That matching has been provided by Google Cambridge’s EAs, and one factor in the bake-off organizers choosing GiveWell charities is that we’ve been able to provide a large match pool.
It’s not clear how counterfactual any of this is. Each year when I publicize it internally, part of what I talk about is that my match isn’t counterfactually valid, and that I’ll be donating my share whether or not others also donate. I use it as a time to talk about why you shouldn’t expect matches like this to be counterfactual, and present it as “please join us in funding” and not “you can unlock extra funding”.
My model is that if you want to move from generic software engineering to safety work, these would be very good next steps.
I got the whole $20k: https://www.jefftk.com/p/facebook-donation-match
FB had a limit of $20k/donor this year, and I think that’s much more likely to go down than up. So depending on how much you’re donating, there’s not much reason to save more than that for Giving Tuesday.
There’s also the 1% PayPal match (plus 2% cash back) that has been offered in December each year. At a 16%/year discount rate it’s worth waiting a couple months for that 3%, but not all year.
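As a rough sketch of that tradeoff (the 16%/year discount rate and 3% bonus are the figures above; the break-even calculation is my own, not anything from PayPal):

```python
import math

# How long is it worth delaying a donation to capture an extra ~3%
# (1% PayPal match + 2% cash back), if you discount future donations at 16%/year?
bonus = 0.03
annual_discount = 0.16

# Waiting t months costs roughly (1 + annual_discount)**(t/12) - 1 of the donation's value.
# Break even when that cost equals the bonus:
break_even_months = 12 * math.log(1 + bonus) / math.log(1 + annual_discount)
print(f"worth waiting up to ~{break_even_months:.1f} months")  # ~2.4 months
```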
“Trump signed a good law this week. Yes, really.” presents a conflict: here’s a person you usually expect to be doing harmful things, and here they are doing something good. The headline can’t create that hook without assuming something about its readers, and it’s the hook that draws people in. It’s not an “unnecessary jibe”; it’s the sort of thing that draws far more interest than a headline like “Trump signed a good law about HIV this week.”
It’s not a tradeoff I would make in my writing, but Vox is a left-leaning outlet and it seems pretty reasonable to me for them to write for a left-leaning crowd.
The linear trend line in https://i.ibb.co/BgBkLZW/regression-graph.png looks like a poor fit. Instead I’d model it as there being multiple populations, where one major population has a very steep trend line.
(Though the title with [Link] is only used on some views, for example not on the article-view page, so it’s somewhat confusing.)
A site that brings in money by showing ads generally makes under $10 per 1000 visits (CPM), so at most $0.01 per visit. Even if we make unrealistically positive assumptions (they’re getting very high CPMs, they donate 100% of the money, and the money goes to charities that are as valuable as the AMF), $10 to the AMF does as much good as visiting the Hunger Site daily for three years. With the same unrealistically positive assumptions, if each visit takes you 10s then you’re working for under $3.60/hr.
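Here’s a quick back-of-the-envelope version of that arithmetic; the CPM, seconds per visit, and donation figure are the rough assumptions above, not measured numbers:

```python
# Sketch of the ad-revenue arithmetic above; all inputs are rough assumptions.
revenue_per_visit = 10 / 1000   # <= $10 CPM means <= $0.01 of ad revenue per visit
donation = 10                   # a one-time $10 donation to AMF

visits_to_match = donation / revenue_per_visit                # 1,000 visits
print(f"{visits_to_match / 365:.1f} years of daily visits")   # ~2.7 years, i.e. about three

seconds_per_visit = 10
hourly_rate = revenue_per_visit * 3600 / seconds_per_visit
print(f"${hourly_rate:.2f}/hour equivalent wage")             # $3.60/hr
```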
So I think this is probably not worth looking into further. Volunteering to look at ads just doesn’t bring in that much money so even if you got the best possible answers to your questions it wouldn’t make sense.
(Similarly, I don’t think trying to clone a site like this and run it targeted at GiveWell top charities would be worth it either.)
Who would you have recommended for these spots?
My not-that-informed view is something like “there are a bunch of problems with ACE, but I’m not sure there’s anyone better right now”. But if you have people in mind who would have been better for this role that would be really helpful to know!
You can extend your argument to even smaller probabilities: how much effort should go into this if we think the chance is 0.1%? 0.01%? Or in the other direction, 50%, 90%, etc. At the extremes it’s very clear that this should affect how much focus we put into averting it, and I don’t think there’s anything special about 1% vs 10% in this regard.
Another way of thinking about it is that AI is not the only existential risk. If your estimate for AI is 1% in the next ten years but your estimate for pandemics is 10%, versus 10% for AI and 1% for pandemics, that should also affect where you think people should focus.