“TisBest Redefine Gifting Giveaway” is an upcoming $2,000,000+ counterfactual free-money-for-charity opportunity with a limit of $50/person: https://www.tisbest.org/redefinegifting/
This giveaway happened last year as well: [Expired 2020] 20,000 Free $50 Charity Gift Cards.
I’d estimate that ~100-300 people participated last year on behalf of EA nonprofits, counterfactually directing ~$5,000-$15,000 to EA nonprofits.
Last year’s giveaway (of 20,000 $50 gift cards, which followed a giveaway of 10,000 $100 gift cards that I missed) lasted ~9 hours.
My current 50% CI is that this year’s $2M giveaway will run out in ~2-10 hours.
I estimate EA participation typically took ~3-10 minutes per person.
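For transparency, here's the back-of-the-envelope arithmetic behind those estimates; the implied claim-rate multiplier for this year is my own illustrative assumption rather than anything TisBest has announced:

```python
# Rough arithmetic behind the estimates above (illustrative assumptions noted inline).

card_value = 50                     # $ per gift card
ea_participants = (100, 300)        # my rough estimate of EA participants last year
ea_dollars = tuple(n * card_value for n in ea_participants)
print(f"Counterfactual $ to EA nonprofits last year: ~${ea_dollars[0]:,}-${ea_dollars[1]:,}")

total_pool = 2_000_000              # this year's giveaway pool
cards_this_year = total_pool // card_value          # 40,000 cards
cards_last_year, hours_last_year = 20_000, 9        # last year's $50-card giveaway
last_year_rate = cards_last_year / hours_last_year  # ~2,200 cards/hour

# Naive constant-rate scaling gives ~18 hours; my ~2-10 hour 50% CI implicitly
# assumes claims come in roughly 2-9x faster this year (an assumption on my part).
naive_hours = cards_this_year / last_year_rate
print(f"Hours to exhaust {cards_this_year:,} cards at last year's rate: ~{naive_hours:.0f}")
```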
Anyone reading this might as well sign up now to be notified via email when the giveaway begins if they’re interested in participating, but I’m skeptical it’s worth Forum readers’ time to create a frontpage post about this now. Doing so would mean taking up EAs’ time on two occasions: now to sign up and later to claim a gift card and donate it. Additionally, signing up to be notified of the giveaway now doesn’t guarantee that you get a $50 charity gift card—if you check your email infrequently, the 40,000 $50 gift cards may all be claimed by others before you see the notification email that the giveaway has begun. Though Ray Dalio (the main funder behind this) did mention in his newsletter that if the gift cards are all claimed quickly more may be given away:
One-question poll for you: Is it worth EA Forum readers’ time to be told about this “free $50 charity gift card” opportunity now, later when it begins, or never?
Gift Cards are live now at https://www.tisbest.org/redefinegifting
Thanks, I made a post to try to increase visibility: https://forum.effectivealtruism.org/posts/68drEr2nfLhcJ3mTD/free-usd50-charity-gift-cards-takes-3-minutes-to-claim-one
(It’s still available after 3.5+ hours, hopefully will be for several more.)
Will MacAskill, 80,000 Hours Podcast May 2022:
Because catastrophes that kill 99% of people are much more likely, I think, than catastrophes that kill 100%.
I’m flagging this as something that I’m personally unsure about and tentatively disagree with.
It’s unclear how much more MacAskill means by “much”. My interpretation was that he probably meant something like 2-10x more likely.
My tentative view is that catastrophes that kill 99% of people are probably <2x as likely as catastrophes that kill 100% of people.
Full excerpt for those curious:
Will MacAskill: — most of the literature. I really wanted to just come in and be like, “Look, this is of huge importance” — because if it’s 50/50 when you lose 99% of the population whether you come back to modern levels of technology, that potentially radically changes how we should do longtermist prioritization. Because catastrophes that kill 99% of people are much more likely, I think, than catastrophes that kill 100%.
Will MacAskill: And that’s just one of very many particular issues that just hadn’t had this sufficient investigation. I mean, the ideal for me is if people reading this book go away and take one little chunk of it — that might be a paragraph in the book or a chapter of it — and then really do 10 years of research perhaps on the question.
I just asked Will about this at EAG and he clarified that (1) he’s talking about non-AI risk, (2) by “much” more he means something like 8x as likely, and (3) most of the non-AI risk is biorisk, and his estimate of biorisk is lower than Toby’s; Will said he puts bio xrisk at something like 0.5% by 2100.
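If I take those clarifications at face value, here is a quick sketch of what the numbers imply. Note that treating the ~0.5% figure as the probability of a 100%-fatality bio catastrophe by 2100 is my reading, not something Will spelled out:

```python
# A quick sketch of what these numbers imply, on my reading (assumptions noted inline).
# I'm treating Will's ~0.5% bio x-risk figure as the probability of a 100%-fatality
# catastrophe by 2100, which may not be exactly what he meant.

p_kill_100 = 0.005          # Will's stated bio x-risk by 2100 (~0.5%)
ratio_will = 8              # Will: 99%-kill catastrophes ~8x as likely as 100%-kill
ratio_mine = 2              # my tentative view: <2x as likely

print(f"Implied P(99%-kill bio catastrophe by 2100), Will's ratio: ~{p_kill_100 * ratio_will:.1%}")
print(f"Implied P(99%-kill bio catastrophe by 2100), my ratio:     <{p_kill_100 * ratio_mine:.1%}")
```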
#DonationRegret #Mistakes
Something that occurred to me might be useful to share, and that I haven’t yet said anywhere:
The only donation I’ve really regretted making was one of the first significant donations I made: On May 23, 2017, I donated $3,181.00 to Against Malaria Foundation. It was my largest donation to date and my first donation after taking the GWWC pledge (in December 2016).
I primarily regretted and regret making this donation not because I later updated my view toward realizing/believing that I could have done more good by donating the money elsewhere (although that too is a genuine reason to feel regret about a donation, and I have indeed since updated my view toward thinking other donation opportunities are better). Rather, I primarily regretted making the donation because six months later I learned that if I had saved that money and donated it on Giving Tuesday 2017 instead, I could have gotten it counterfactually matched by Facebook, thereby directing twice as much money toward the effective charity of my choice and doing almost twice as much good. (I say ‘almost’ twice as much good because I think a smaller but nontrivial amount of good would have been done by Facebook’s money had it gone to other nonprofits instead.) (I did in fact donate $4,000 on Giving Tuesday 2017 and got it all matched. I got all my donations matched in 2018 and 2019 too, and probably most of my donations in 2020, though matches have yet to be announced by Facebook. Other mistakes around this will go in a separate comment sometime.)
Reflecting on this more: since I think marginal donations to some organizations do more than twice as much good in expectation as donations to other organizations (including AMF), there is a sense in which missing a counterfactual matching opportunity was not as significant a mistake as giving to the wrong giving opportunity / cause area. On the other hand, regardless of which giving opportunity my 2017 self or current self might think is best, it’s pretty clear that my mistake cut the impact I could definitely have had with the money almost in half. That is no small error, hence my clear feeling of regret, and my frequent mention of the post EAs Should Invest All Year, then Give only on Giving Tuesday.
I’m a big fan of OpenPhil/GiveWell popularizing longtermist-relevant facts via sponsoring popular YouTube channels like Kurzgesagt (21M subscribers). That said, I just watched two of their videos and found a mistake in one[1] and took issue with the script-writing in the other one (not sure how best to give feedback—do I need to become a Patreon supporter or something?):
Why Aliens Might Already Be On Their Way To Us
My comment:
9:40 “If we really are early, we have an incredible opportunity to mold *thousands* or *even millions* of planets according to our visions and dreams.”—Why understate this? Kurzgesagt already made a video imagining humanity colonizing the Milky Way Galaxy to create a future of “a tredecillion potential lives” (10^42 people), so why not say ‘hundreds of billions of planets’ (the number of planets in the Milky Way), ‘or even more if we colonize other galaxies before other loud/grabby aliens reach them first’? This also seems inaccurate because the chance that we colonize between 1,000 and 9,999,999 planets (or even between 1,000 and 999,999,999 planets) is less than the probability that we colonize >10 million (or even >1 billion) planets.
As an aside, I watched these two videos just now because I was inspired to look them up after watching the depressing new Veritasium video Do People Understand the Scale of the Universe?, in which he shows a bunch of college students from a university with 66th-percentile average SAT scores who do not know basic facts about the universe.
[1] The mistake I found, in the most recent video You Are The Center of The Universe (Literally), is that it says (9:10) that the diameter of the observable universe is 465,000 Milky Way galaxies side-by-side, but that’s actually the radius of the observable universe, not the diameter.
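A quick sanity check of that figure; the ~100,000 light-year Milky Way diameter is my assumption about the value Kurzgesagt used:

```python
# Back-of-the-envelope check of the 465,000 figure.

observable_universe_radius_ly = 46.5e9   # comoving radius, ~46.5 billion light-years
milky_way_diameter_ly = 1e5              # ~100,000 light-years (assumed value)

galaxies_across_radius = observable_universe_radius_ly / milky_way_diameter_ly
print(f"Milky Ways across the radius:   ~{galaxies_across_radius:,.0f}")       # ~465,000
print(f"Milky Ways across the diameter: ~{2 * galaxies_across_radius:,.0f}")   # ~930,000
```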
Found it! https://www.youtube.com/user/Kurzgesagt > click on ”and 7 more links” in the little bio > click on “View email address” > do the CAPTCHA (I’ve also DM’d it to you)
Every.org now has “Givelists”—I just had this one created: https://giveli.st/xrisk
Thinking out loud about credences and PDFs for credences (is there a name for these?):
I don’t think “highly confident people bear the burden of proof” is necessarily a correct way of saying my thought, but I’m trying to point at this idea that when two people disagree on X (e.g. 0.3% vs 30% credences), there’s an asymmetry in which the more confident person (the one at 0.3% in this case) is necessarily highly confident that the person they disagree with is wrong, whereas the less confident person (the one at 30%) is not necessarily highly confident that the person they disagree with is wrong. So maybe this is another way of saying that “high confidence requires strong evidence”, but I think I’m saying more than that.
I’m observing that the high-confidence person needs an account of why the low-confidence person is wrong, whereas the opposite isn’t true.
Some math to help communicate my thoughts: The 0.3% credence person is necessarily at least 99% confident that a 30% credence is too high. Whereas a 30% credence is compatible with thinking there’s, say, a 50% chance that a 0.3% credence is the best credence one could have with the information available.
So a person who is 30% confident X is true may or may not think that a person with a 0.3% credence in X is likely reasonable in their belief. They may think that that person is likely correct, or they may think that they are very likely wrong. Both possibilities are coherent.
Whereas the person whose credence in X is 0.3% necessarily believes the person whose credence is 30% is >99% likely to be wrong.
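Here is a minimal sketch of that asymmetry, under the assumption that one’s point credence is the mean of one’s distribution over what the “correct” credence is:

```python
# A minimal sketch of the asymmetry, assuming your point credence is the mean of
# your distribution over what the "correct" credence is.

# If my credence is 0.3%, the weight w I can put on "the correct credence is >= 30%"
# is bounded: 0.003 >= w * 0.30, so w <= 1% (a Markov-inequality-style bound).
max_weight_on_30 = 0.003 / 0.30
print(f"Max weight a 0.3%-credence person can give to 'correct credence >= 30%': {max_weight_on_30:.0%}")

# Whereas a 30% credence can come from a mixture that puts large weight on 0.3%:
weights = {0.003: 0.50, 0.597: 0.50}   # 50% on "correct credence is 0.3%", 50% near 60%
point_credence = sum(p * w for p, w in weights.items())
print(f"Point credence of the mixture: {point_credence:.1%}")   # ~30%
```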
Maybe another good way to think about this:
If my point-estimate is X%, I can restate that by giving a PDF that assigns a weight to every possible estimate/forecast from 0-100%.
E.g. “I’m not sure if the odds of winning this poker hand are 45% or 55% or somewhere in between; my point-credence is about 50% but I think the true odds may be a few percentage points different, though I’m quite confident that the odds are not <30% or >70%. (We could draw a PDF).”
Or: “If I researched this for an hour, I think I’d probably conclude that it’s very likely false, or at least <1%, but on the surface it seems plausible that I might instead discover that it’s probably true (though it’d be hard to verify for sure). So my point-credence is ~15%, but after an hour of research I’d expect (>80%) my credence to be either less than 3% or greater than 50%.”
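A rough sketch of what those two example PDFs could look like; the Beta parameters and mixture weights below are purely illustrative choices of mine, not anything pinned down by the examples:

```python
# Sketch of the two example "PDFs over credences" above (illustrative parameters).
from scipy.stats import beta

# Poker example: point credence ~50%, true odds probably within a few points,
# very unlikely to be <30% or >70%. A Beta(200, 200) roughly matches that.
poker = beta(200, 200)
print(f"mean: {poker.mean():.0%}, P(odds < 30% or > 70%): {poker.cdf(0.30) + (1 - poker.cdf(0.70)):.2e}")

# Research example: point credence ~15%, but I expect (>80%) to end up either
# below 3% or above 50% after an hour of research. A two-lump mixture works:
lumps = {0.01: 0.75, 0.57: 0.25}    # 75% weight on "very likely false", 25% on "probably true"
point_credence = sum(p * w for p, w in lumps.items())
mass_in_extremes = sum(w for p, w in lumps.items() if p < 0.03 or p > 0.50)
print(f"point credence: {point_credence:.0%}, weight on the two extreme regions: {mass_in_extremes:.0%}")
```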
Is there a name for the uncertainty (PDF) about one’s credence?
My response on Facebook to Rob Wiblin’s list of triggers for leaving London:
--------
Some major uncertainties:
(a) Risk of London getting nuked within a month conditional on each of these triggers
(b) Value of a life today (i.e. willingness to pay to reduce risk of death in a world with normal levels of nuclear risk)
(c) Value of a life in a post-London-gets-nuked world (i.e. willingness to pay to increase chance that Rob Wiblin survives London getting nuked)
(Note: (c) might be higher than (b) if it’s the case that one can make more of a difference in the post-nuclear-war world in expectation.)
Using the March 6th estimate of 16 micromorts per month of nuclear risk from staying in London[1], and assuming you’d be willing to pay $10-$100M[2] of your own money to avert your death (i.e. $10-$100 per micromort), on March 6th it would have made sense to leave London for a month if (ignoring nuclear risk, but taking altruistic impacts into account) you’d rather leave London for a month than pay $160-$1,600 to stay for the month (or, alternatively, if you’d leave London for a month in exchange for a payment of $160-$1,600).
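Spelling out that arithmetic (the dollar figures are the assumed willingness-to-pay range from above):

```python
# The arithmetic behind the $160-$1,600 figure, using the estimates cited above.

micromorts_per_month = 16                 # March 6th estimate for staying in London
value_per_death = (10e6, 100e6)           # assumed willingness to pay to avert one's death ($10M-$100M)
value_per_micromort = tuple(v / 1e6 for v in value_per_death)   # $10-$100

cost_of_staying = tuple(micromorts_per_month * v for v in value_per_micromort)
print(f"Implied nuclear-risk cost of a month in London: ${cost_of_staying[0]:,.0f}-${cost_of_staying[1]:,.0f}")
```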
I think that triggers 1-9 probably all increase the risk of London getting nuked to at least 2x what it was on March 6th. So, assuming you’d be happy to leave for a month in exchange for $320-$3,200 (ignoring nuclear risk), which seems reasonable to me if your productivity wouldn’t take a significant hit, I think I agree with your assessment of whether to leave.
However, it seems worth noting that for a lot of EAs working in London whose work would take a significant hit from leaving, they probably shouldn’t leave in some of the scenarios where you say they should (specifically the scenarios where the risk of London getting nuked is only ~2 times, or perhaps ~2-10 times, higher than it was on March 6th). This is because, even using the $100-per-micromort value-of-life estimate, it would only cost $3,200/166.7 ≈ $20 extra per hour (taking ~166.7 as the number of full-time work hours in a month) for an EA org to keep its full-time employee at that significantly higher productivity, and that seems clearly worth doing (if necessary) for many employees at EA orgs.
It seems hard to imagine how an EA could be willing to pay $100 to reduce someone’s risk of death by one micromort (which increases the life expectancy of someone with a 50-year life expectancy by 0.438 hours, and the expected direct work of someone with 60,000 hours of direct work left in their career by 0.06 hours) and yet not be willing to pay $20 to increase the expected direct work they do by 1 hour. The only thing I’m coming up with that might make this somewhat reasonable is if one thinks one’s life is much more valuable in a post-nuclear-war world than in the present world.
It might also make more sense to just think of this in terms of expected valuable work hours saved, and skip the step of assessing how much you should be willing to pay to reduce your risk of death by one micromort (since that’s probably roughly a function of the value of one’s work output anyway). Reducing one’s risk of death by 16 micromorts saves ~1 hour of valuable work in expectation if that person has 60,000 hours of valuable work left in their career (16/(10^6)*60,000=0.96). If leaving would cost you more than about one hour of work in expectation, then it wasn’t worth leaving, assuming the value of your life comes entirely from the value of your work output. This also ignores the difference in value of your life in a post-nuclear-war world compared to today’s world; you should adjust for that as well.
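Spelling out the numbers in the last two paragraphs (taking 166.7 to be roughly 2,000 full-time work hours per year divided by 12, which is my reading of where that figure comes from):

```python
# The work-hours framing, spelled out.

hours_per_month = 2000 / 12                      # ~166.7 full-time work hours per month (assumed)
print(f"$3,200 per month as an hourly premium: ${3200 / hours_per_month:.2f}/hour (~$20 as rounded above)")

hours_per_year = 8766                            # calendar hours in an average year
life_exp_hours = 50 * hours_per_year             # 50-year life expectancy in hours
career_hours = 60_000                            # hours of direct work left in a career
micromort = 1e-6
print(f"Life-expectancy gain per micromort avoided: {life_exp_hours * micromort:.3f} hours")   # ~0.438
print(f"Expected direct-work gain per micromort:    {career_hours * micromort:.2f} hours")     # 0.06

micromorts_per_month = 16
print(f"Expected work hours saved by avoiding 16 micromorts: {micromorts_per_month * micromort * career_hours:.2f}")  # ~0.96
```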
[1] https://docs.google.com/document/d/1xrLokMs6fjSdnCtI6u9P5IwaWlvUoniS-pF2ZDuWhCY/edit
A possible story on how the value of a longtermist’s life might be higher in a post-London-gets-nuked world than in today’s world (from my comment replying to Ben Todd’s comment on this Google Doc):
--------
I think what we actually care about is the value of a life if London gets nuked relative to if it doesn’t, rather than quality-adjusted life expectancy.
This might vary a lot depending on the person. E.g. For a typical person, life after London gets nuked is probably worth significantly less (as you say), but for a longtermist altruist it seems conceivable that life is actually worth more after a nuclear war. I’m not confident that’s the case in expectation (more research is needed), but here’s a possible story:
Perhaps after a Russia-US nuclear war that leaves London in ruins, existential risk this century is higher because China is more likely to create AGI than the West (relative to the world in which nuclear war didn’t occur) and because China is less likely to solve AI alignment than the West. The marginal Western longtermist might make more of a difference in expectation in the post-war world than in the world without war, due to (1) absolute existential risk being higher in the post-war world and (2) there being fewer qualified people alive in the post-war world who could meaningfully affect the development of AGI.
If the longtermist indeed makes more of a difference to raising the probability of a very long-lasting and positive future in the post-war world than in the normal-low-risk-of-nuclear-war world, then the value of their life is higher in the post-war world, and so it might make sense to use >50 years of life left for this highlighted estimate. Or alternatively, saving 7 hours of life expectancy in a post-war world might be more like saving 14 hours of life in a world with normal low nuclear risk (if the longtermist’s life is twice as valuable in the post-war world).