You can send me a message anonymously here: https://www.admonymous.co/will
WilliamKiely
Suggestion: Change the existing post title (“Looking for EA Community?”) to “EA Slack Groups” and include the new Slack groups (at the top, labeled as new) plus a collection of all the previously-existing ones people might be interested in joining.
A random idea on how this film could end by explicitly promoting existential risk awareness:
At the end of the film, imagine the comet impacts Earth and humanity goes extinct.
The audience is surprised that the movie actually ends in extinction and the heroes don’t win.
The otherwise-comedic film ends on this serious, sad note.
The screen goes dark, and the following text appears:
“Experts estimate the chance of extinction via asteroid or comet impact within the next 100 years at only ~1 in 1,000,000.”
“However, experts believe the chance of extinction from other causes is much higher.”
The film cuts from black to Toby Ord in his office, reading the caption of Table 6.1 from his book The Precipice: “My best estimates for the chance of an existential catastrophe from each of these sources occurring at some point in the next 100 years...”
The film cuts to the table itself as he begins reading the risks aloud:
“If you’re wondering if this is a joke, it’s not. The risks really do seem to be this high.”
Cut to credits.
Functionality I would like added to the Pledge Dashboard (note: I didn’t use MyGiving):
A comment field next to each donation. Currently I use the “Recipient” field to write the organization name plus extra notes I want to record (e.g. whether the donation was counter-factually matched).
The ability to see total amount I’ve given to each organization I’ve given to.
A way to label each donation as being associated with a certain cause area.
Bar chart of my donations over time, and chart of my donations per organization, and chart of my donations per cause area (by my labeling).
The ability to share one’s donation page with others.
Option B: Nuclear war that kills 99% of human beings
Option C: Nuclear war that kills 100% of humanity
He claims that the difference between C and B is greater than the difference between B and A, the idea being that option C represents the destruction of 100% of present-day humanity and all future value. But if we’re confident that future value will be fulfilled by aliens whether we destroy ourselves or not, then there isn’t much of a jump between B and C.
While Grabby Aliens considerations may make our cosmic endowment much smaller than if the universe was empty (and I’d be interested in some quantitative estimates about that (related)) I think your conclusion above is very likely still false.[1]
The difference between C and B is still much greater than the difference between B and A. To see why, consider that the amount of value that humanity could create in our solar system (let alone in whatever volume of space in our galaxy or galactic neighborhood we could reach before we encounter other grabby aliens) in the near future in a post-AGI world completely dwarfs the amount of value we are creating each year on Earth currently.
[1] Technically you said “if we’re confident that future value will be fulfilled by aliens whether we destroy ourselves or not”, but that’s just begging the question, so I replied as if you had written “If we’re confident there are aliens only a few light-years away (with values roughly matching ours)” in that place instead.
I don’t know; I doubt it’s a problem where throwing money at it is the right answer. In any case, it’s unclear to me whether doing this would actually be positive value or not. I imagine it would be quite controversial, even among EAs who are into longtermism. I just shared the idea because I thought it was interesting, not because I necessarily thought it was good.
I don’t operate with this mindset frequently, but thinking back to some of the highest impact things I’ve done I’m realizing now that I did those things because I had this attitude. So I’m inclined to think it’s good advice.
Open Phil obviously has more information on this than members of the general public. Do we know whether Open Phil is willing to share any of its internal forecasts on this (or internal knowledge that could be used to create such forecasts) publicly?
TL;DR: This post didn’t address my concerns about using WELLBYs as the primary measure of how much an intervention increases subjective wellbeing in the short term, so in this comment I explain why I’m skeptical of using WELLBYs as the primary way to measure how much an intervention increases actual wellbeing.
In Chapter 9 (“Will the Future Be Good or Bad?”) of What We Owe the Future, Will MacAskill briefly discusses life satisfaction surveys and raises some points that make me very skeptical of HLI’s approach of using WELLBYs to evaluate charity cost-effectiveness, even from a very-short-term-ist hedonic utilitarian perspective.
Here’s an excerpt from page 196 of WWOTF, with emphasis added by me:

We can’t assume that [the neutral wellbeing point] is the midpoint of the scale. Indeed, it’s clear that respondents aren’t interpreting the question literally. The best possible life (a 10) for me would be one of constant perfect bliss; the worst possible life (a 0) for me would be one of the most excruciating torture. Compared to these two extremes, perhaps my life, and the lives of everyone today, might vary between 4.9 and 5.1. [William Kiely note: It seems like the values should vary between 1.4-1.6, or around whatever the neutral point is, not around 5, which MacAskill is about to say is not the neutral point.] But, when asked, people tend to spread their scores across the whole range, often giving 10s or 0s. This suggests that people are relativising their answers to what is realistically attainable in their country or the world at present. A study from 2016 found that respondents who gave themselves a 10 out of 10 would often report significant life issues. One 10-out-of-10 respondent mentioned that they had an aortic aneurysm, had had no relationship with their father since his return from prison, had had to take care of their mother until her death, and had been in a horrible marriage for seventeen years.
The relative nature of the scale means that it is difficult to interpret where the neutral point should be, and unfortunately, there have been only two small studies directly addressing this question. Respondents from Ghana and Kenya put the neutral point at 0.6, while one British study places it between 1 and 2. It is difficult to know how other respondents might interpret the neutral point. If we take the UK survey on the neutral point at face value, then between 5 and 10 percent of people in the world have lives that are below neutral. All in all, although they provide by far the most comprehensive data on life satisfaction, life satisfaction surveys mainly provide insights into relative levels of wellbeing across different people, countries, and demographics. They do not provide much guidance on people’s absolute level of wellbeing.
Some context on the relative value of different conscious experiences:
Most people I have talked to think that the negative wellbeing experiences they have had tend to be much worse than their positive wellbeing experiences are good.
In addition to thinking this about typical negative experiences compared to typical positive experiences, most people I talk to also seem to think that the worst experience of their life was several times more bad than their best experience was good.
People I talk to seem to disagree significantly on how much better their best experiences are compared to their typical positive experience (again by “better” I mean only taking into account their own wellbeing, i.e. the value of their conscious experience). Some people I have asked say their best day was maybe only about twice as good as their typical (positive) day, others think their best day (or at least best hour or best few minutes) are many times better (e.g. ~10-100 times better) than their typical good day (or other unit of time).
In the Effective Altruism Facebook group 2016 poll “How many days of bliss to compensate for 1 day of lava-drowning?” (also see 2020 version here), we can see that EAs’ beliefs about the relative value of the best possible experience and the worst possible experience span many orders of magnitude. (Actually, answers spanned all orders of magnitude, including “no amount of bliss could compensate” and one person saying that even lava-burning is positive value.)
Given the above context, why 0-10 measures can’t be taken literally...
It seems to me that 0-10 measures taken from subjective wellbeing / life satisfaction surveys clearly cannot be taken literally.
That is, survey respondents are not answering on a linear scale. An improvement from 4 to 5 is not the same as an improvement from 5 to 6.
Respondents’ reports are not comparable to each other’s. One person’s 3 may be better than another’s 7. One person’s 6 may be below neutral wellbeing, another person’s 2 may be above neutral wellbeing.
The vast majority of respondents’ answers presumably are not even self-consistent: a “5” report one day is not the same as a “5” report a different day, even for the same person.
If the neutral wellbeing point is indeed around 1-2 for most people answering the survey, and people’s worst experiences are much worse than their best experiences are good (as many people I’ve talked to have told me), then such surveys clearly fail to capture that improving someone’s worst day to a neutral-wellbeing day is much better than turning someone’s 2 day into a 10 day. That is, in many cases it’s not true that an improvement from 2 to 10 is four times better than an improvement from 0 to 2, as a WELLBY measurement would suggest. In fact, the opposite may be true, with the improvement from 0 to 2 (or whatever the neutral point is) potentially being five or more times greater than the improvement from 2 to 10. This is a huge discrepancy, and I think it gives reason to believe that using WELLBYs as the primary tool to evaluate how much interventions increase wellbeing will be extremely misleading in many cases.
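To make the size of this potential discrepancy concrete, here is a minimal sketch. The neutral point and the “negative weight” below are hypothetical assumptions of mine chosen for illustration, not survey data:

```python
# Hypothetical illustration: suppose the neutral point on the 0-10 scale
# is 2, and wellbeing below neutral is far worse than wellbeing above
# neutral is good. Both the neutral point and the weight are assumptions.
def actual_wellbeing(score, neutral=2, negative_weight=20):
    """Map a 0-10 self-report to a hypothetical 'true' hedonic value."""
    if score >= neutral:
        return score - neutral                   # a 10 maps to +8
    return negative_weight * (score - neutral)   # a 0 maps to -40

# Linear (WELLBY-style) accounting: 2 -> 10 is an 8-point gain and
# 0 -> 2 only a 2-point gain, so the former looks 4x as valuable.
print((10 - 2) / (2 - 0))  # 4.0

# Under the hypothetical nonlinear mapping, the ranking reverses:
gain_0_to_2 = actual_wellbeing(2) - actual_wellbeing(0)    # 0 - (-40) = 40
gain_2_to_10 = actual_wellbeing(10) - actual_wellbeing(2)  # 8 - 0 = 8
print(gain_0_to_2, gain_2_to_10)  # 40 8
```

Under these assumed numbers, the improvement a WELLBY analysis rates as 4x less valuable is actually 5x more valuable.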
What I’m hearing from you
I see:
Lots of research has shown that subjective wellbeing surveys are scientifically valid (e.g. OECD, 2013; Kaiser & Oswald, 2022).
As a layperson, I note that I don’t know what this means.
(My guess (if it’s helpful to you to know, e.g. to improve your future communications to laypeople) is that “scientifically valid” means something like: “If we run an RCT in which we give a control group a subjective wellbeing survey, and give the same survey to another group that we’re doing some happiness-increasing intervention on, we find that the people who are happier give higher numbers on the survey. Then when we later rerun the study, we find consistent results, with people giving scores in approximately the same range for the same intervention, which we interpret to mean that self-reported wellbeing is actually a measurement of something real.”)
Despite not being sure what it means for the surveys to be scientifically valid, I do know that I’m struggling to think of what it could mean such that it would overcome my concerns above about using subjective wellbeing surveys as the main measure of how much an intervention improves wellbeing.
People’s 0-10 subjective wellbeing reports seem like they are somewhat informative about actual subjective wellbeing—e.g. given only information about two people’s self-reported subjective wellbeing I’d expect the wellbeing of the person with the higher reported wellbeing to be higher—but there are a host of reasons to think that 1-point increases in self-reported wellbeing don’t correspond to actual wellbeing being increased by some consistent amount (e.g. 1 util), and reading this post didn’t give me reason to think otherwise.
So I still think a cost-effectiveness analysis that uses subjective wellbeing assessments as more than just one small piece of evidence seems very likely to fail to identify which interventions actually increase subjective wellbeing the most. I’d be interested in reading a post from HLI that advocates for their WELLBYs approach in light of the sorts of concerns mentioned above.
TL;DR Update on my thoughts: I’ve updated significantly downwards on the probability that trying to make the Meta Giving Season match go very well for EAs is worthwhile and I am not investigating it further, pending the EA GT Team’s reply to me (I emailed Philip some more information).
More details:
After getting some data on recurring Facebook donation timestamps, it appears to me that getting matched will likely be a random lottery for anyone who sets up their recurring donation within the first few hours of the match, only slightly weighted towards those who set up their donations early.
Specifically, the data suggests that the second donation that goes through on December 15th will go through not at the exact same time as the first donation on November 15th, but at a random time in a ~7 hour window (based on 11 data points). That’s quite a bit of variation, which means a donor who donates in the first second on Nov 15th can get beaten by someone who donates a few hours later.
(This assumes that 70,000 recurring $100/mo donations will be set up within the first ~7 hours of the match. Given that ~$150M was donated in a single day on Giving Tuesday in years past, and that $7M in donations was made in the first couple seconds last year, this seems quite plausible to me, though not guaranteed. If the matching funds actually last for much longer (e.g. a full day or longer), then a donor probably can get matched with high probability by donating right at the beginning of the match.)
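A quick Monte Carlo sketch of why this looks like a lottery. The donor count, fund size, and uniform 7-hour processing window below are illustrative assumptions of mine, not real figures from Meta:

```python
import random

# Illustrative simulation: every donor's second donation processes at a
# uniformly random time within a ~7-hour window, and the matching fund
# runs out after the first n_matched donations go through. The donor
# count and fund size are assumptions for illustration, not real data.
random.seed(0)

def matched_fraction(n_donors=7_000, n_matched=3_000,
                     window_hours=7.0, trials=200):
    """Fraction of trials in which donor 0 (who set up their recurring
    donation in the very first second of the match) gets matched."""
    wins = 0
    for _ in range(trials):
        times = [random.uniform(0, window_hours) for _ in range(n_donors)]
        cutoff = sorted(times)[n_matched - 1]  # time the fund runs dry
        if times[0] <= cutoff:
            wins += 1
    return wins / trials

# Donor 0's chance of being matched is ~n_matched/n_donors (~43% here):
# essentially a lottery, despite setting up their donation first.
print(matched_fraction())
```

Because every donor’s processing time is drawn from the same distribution, setting up the donation first confers no advantage in this model: being matched is just the chance of landing in the first n_matched of n_donors random draws.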
So I don’t think I can be confident that a bunch of EA donors setting up their donations right when the match begins will almost all get their second donations matched, and because of that I think it’s probably not worthwhile to put in the effort to try to get ~$100k-$1M matched by EAs. The strategy to do so would involve a lot of donation trades and would take a lot of organizer time, plus be asking a lot from donors, so I wouldn’t want to do it unless there was a high chance that a high fraction of EAs’ donations would actually get matched.
The fact that total EA funding increased substantially recently should cause me to update to believe that the marginal cost-effectiveness of donations I make now and over the course of my lifetime will be less than I previously thought, but not that much less cost-effective.
I’ve long felt that we’re nowhere close to the world where the marginal cost-effectiveness of the best giving opportunities is low enough to mean it’s not worth donating altruistically. If we lived in a world where the best giving opportunity had GiveDirectly’s cost-effectiveness, I’d still find the giving opportunities cost-effective enough to want to donate a substantial amount of my money.
But the reality is we live in a world where GiveWell continues to find giving opportunities that are 7-8x more effective, and some giving opportunities in other cause areas seem to be 10-100x more cost-effective than GiveDirectly at the margin. So the small cost-effectiveness update above is not enough to make me doubt whether it’s actually worth it to me to donate. It still seems clearly worth it.
I don’t think people should do this because it seems like it is not in the spirit of the match.
Additionally, it would create a large risk of having the orgs removed from match eligibility (including retroactively removed). From the match terms and conditions:
Any activity not in the spirit of requesting separate individuals to support donations is being moderated by Every.org and Every.org reserves the right to permanently disqualify organizations for any attempts of misconduct. Every.org reserves the right to make final decisions on all matters concerning the allocation of the Incentive Fund.
Small donors can sometimes beat large donors in terms of cost-effectiveness, and I provide a list of some common ways to do this.
Another common way to do this that you didn’t mention: Small donors can use their donations to counterfactually direct matching funds offered by large non-EA donors to highly-effective nonprofits. Common counterfactual donation matches:
Employer donation matches
Every.org’s donation matches (two so far, more upcoming in expectation)
I’m not sure how much employer matching goes to EA-aligned nonprofits, but about $1m/year is currently counterfactually directed to EA-aligned nonprofits from the Facebook and Every.org matches. Counterfactual matching opportunities have existed consistently each year since at least 2017. Plausibly they may go away soon, but for the time being they are still exploitable at the margin and definitely offer a way that small donors can outperform large donors from a cost-effectiveness standpoint.
1. It’s a priori extremely unlikely that we’re at the hinge of history
Claim 1
I want to push back on the idea of setting the “ur-prior” at 1 in 100,000, which seems far too low to me. I will also critique the method that arrived at that number, and propose a method of determining the prior that seems superior to me.
(One note before that: I’m going to ignore the possibility that the hingiest century could be in the past and assume that we are just interested in the question of how probable it is that the current century is hingier than any future century.)
First, to argue that 1 in 100,000 is too low: The hingiest century of the future must occur before civilization goes extinct. Therefore, one’s prior that the current century is the hingiest century of the future must be at least as high as one’s credence that civilization will go extinct in the current century. I think this is already (significantly) greater than 1 in 100,000.
I’ll come back to this idea when I propose my method of determining the prior, but first to critique yours:
The method you used to come up with the 1 in 100,000 prior that our current century is hingier than any future century was to estimate the expected number of centuries that civilization will survive (1,000,000) and then to try to “[restrict] ourselves to a uniform prior over the first 10%” of that expected number of centuries because “the number of future people is decreasing every century.”
(Note that while I think the adjustment from 10^-6 to 10^-5 is an adjustment for a good reason in the right direction, I think it can be left out of the prior: You can update on the fact that “the number of future people is decreasing every century” (and other things) later after determining the prior.)
Now to critique the method Will used to arrive at the 1 in 1,000,000 prior. It basically starts with an implicit probability distribution for when civilization is going to go extinct (good), but then compresses that into an average expected number of centuries that civilization is going to survive and (mistakenly) essentially assumes that civilization is going to last precisely that long. It then computes one over the average expected number of centuries to get the base rate that a given century is the hingiest (determining a base rate is good, but this isn’t the right way).
I propose that a better method is that one should start with the same implicit probability distribution for the expected lifespan of civilization, except make it explicit, and do the same base rate calculation but for each discrete possible length of civilization (1 century, 2 centuries, etc) instead of compressing the probability distribution for the expected lifespan of civilization into an average expected number of centuries.
That is, I’d argue that one’s prior that the current century is the hingiest century of the future should be equal to one’s credence that civilization will go extinct in the current century plus 1/2 times one’s credence that civilization will go extinct in the second century (since there will then be two possible centuries and we are calculating a base rate), plus 1/3 times one’s credence that civilization will go extinct in the third century (this is the third base rate we are summing), etc.
I’ve modeled an example of this here: https://docs.google.com/spreadsheets/d/1AqlfY47EmdcsE0D_uR4UlC3IuQbsCXLsq7YdtEqnyjg/edit?usp=sharing
From my “1000 Century Model”, assuming a 1% risk of extinction per century for 1000 centuries, the prior that the first century is the hingiest is ~4.65%.
From my “90% Likely to Survive 999 Centuries Model”, assuming a 10% chance of extinction in the first century, a 0% chance of extinction every century thereafter until the 1000th century, and a 100% chance of extinction in the 1000th century, my method gives a prior of ~10.09% that the first century is the hingiest. On the other hand, since the expected number of centuries is ~900, MacAskill’s method gives an initial prior of ~0.111% and a prior of ~1.111% after “[restricting] ourselves to a uniform prior over the first 10% [of expected centuries]”. Both priors calculated using MacAskill’s method are below the 10% rate of extinction in the first century, which (I claim again) obviously means they are too low.
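Both models can be reproduced with a short script. This is a sketch of the method described above, assuming (as I believe the linked spreadsheet does) that extinction is forced by century 1000:

```python
# Prior that century 1 is the hingiest: sum over possible civilization
# lifespans n of P(extinct in century n) * (1/n), i.e. a base rate
# computed separately for each possible lifespan rather than for the
# single average expected lifespan.
def hinge_prior(extinction_probs):
    return sum(p / n for n, p in enumerate(extinction_probs, start=1))

# "1000 Century Model": 1% extinction risk per century, with the
# remaining survival probability assigned to century 1000.
probs = [0.01 * 0.99 ** (n - 1) for n in range(1, 1000)]
probs.append(1 - sum(probs))  # force extinction by century 1000
print(round(hinge_prior(probs), 4))  # 0.0465, i.e. ~4.65%

# "90% Likely to Survive 999 Centuries Model": 10% extinction in
# century 1, 0% until century 1000, certain extinction in century 1000.
probs2 = [0.1] + [0.0] * 998 + [0.9]
print(round(hinge_prior(probs2), 4))  # 0.1009, i.e. ~10.09%
```

Note that the first term of the sum is just one’s credence in extinction this century, which is why this prior can never fall below that credence, unlike the 1-over-expected-lifespan method.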
Given these reasons (and others) it seems there may be value in letting people enter with their name attached, but not revealing that person as the winner if they win.
Is forecasting plausibly a high-value use of one’s time if one is a top-5% or top-1% forecaster?
What are the most important/valuable questions or forecasting tournaments for top forecasters to forecast or participate in? Are they likely questions/tournaments that will happen at a later time (e.g. during a future pandemic)? If so, how valuable is it to become a top forecaster and establish a track record of being a top forecaster ahead of time?
I’m willing to bet up to $100 at even odds that by the end of 2020, the confirmed death toll by the Wuhan Coronavirus (2019-nCoV) will not be over 10,000. Is anyone willing to take the bet?
Note that Toby Ord has long given 10% to global poverty. He doesn’t explain why in the linked interview despite being asked “Has that made you want to donate to more charities dealing on “long-termist” issues? If not, why not?”
My guess is that he intentionally dodged the question because the true answer is that he continues to donate to global poverty charities because he thinks the signaling value of him donating to global poverty charities is greater than the signaling value of him donating to longtermist charities and yet saying this explicitly in the interview would likely have undermined some of that signaling value.
In any case, I think those two things are true, and think the signaling value represents the vast majority of the value of his donations, so his decision seems quite reasonable to me, even assuming there are longtermist giving opportunities available to him that offer more direct impact per dollar (as I believe).
For other small donors whose donations are not so visible, I still think the signaling value is often greater than the direct value of the donations. Unlike in Toby Ord’s case though, for typical donors I think the donations with the highest signaling value are usually the donations with the highest direct impact.
There are probably exceptions though, such as if you often introduce effective giving to people by talking about how ridiculously inexpensive it is to save someone’s life. In that case, I think it’s reasonable for you to donate a nontrivial amount (even up to everything you donate, potentially) to e.g. GiveWell’s MIF even if you think the direct cost-effectiveness of that donation is less, since the indirect effect of raising the probability of getting the people you talk to into effective giving and perhaps eventually into a higher impact career path can plausibly more than make up for the reduced direct impact.
An important consideration related to all of this that I haven’t mentioned yet is that large donors (e.g. Open Phil and FTX) could funge your donations. I.e., you donate more to X, so they donate less to it and more to the other high impact giving opportunities available to them, such that the ultimate effect of your donation to X is to only increase the amount of funding for X a little bit and to increase the funding for other better things more. I don’t know if this actually happens, though I often hope it does.
(For example, I hope it does whenever I seize opportunities to raise funds for EA nonprofits that are not the nonprofits that I believe will use marginal dollars most cost-effectively. E.g. during the last every.org donation match I directed matching funds to 60+ EA nonprofits due to a limit on the match amount per nonprofit, despite thinking many of those nonprofits would use marginal funds less than half as cost-effectively as the nonprofits that seemed best to me. My hope was that large EA funders would correct the allocation by giving less to the nonprofits I gave to and more to the giving opportunities with the highest cost-effectiveness than they otherwise would have, thereby making my decision the right call.)
Free $50 Charity Gift Card available now! 30 seconds to sign-up: https://redefinegifting.tisbest.org/ (Update: Still available as of 3:30pm EST, 2.5 hours later)
While reading this post a few days ago I became uncomfortably aware of the fact that I made a huge ongoing mistake over the last couple years by not putting much effort into developing and improving my personal career plans. On some level I’ve known this for a while, but this post made me face this truth more directly than I had done previously.
During this period I often outright avoided thinking about my career plans even though I knew that making better career plans was perhaps the most important way to increase my expected impact. I think I avoided thinking about my career plans (and avoided working on developing them as much as it would have been best for me to) in part because whenever I tried thinking about my career plans I’d be forced to acknowledge how poorly I was doing career-wise relative to how well I potentially could have been doing, which was uncomfortable for me. Also, I often felt clueless about which concrete options I ought to pursue and I did not like the feeling of incompetence this gave me. I’d get stuck in analysis paralysis and feel overwhelmed without making any actual progress toward developing good plans.
It feels embarrassing to admit this, but for many months over the last few years I failed to make much or any progress on developing good career plans for myself. I should have reached out for help, even by saying something as simple as “I haven’t been making nearly as much progress on developing my career plans as I should be, and I think this is mostly because I haven’t been taking many actions that I probably should be taking. I’m not sure whether I just need some external accountability to help with my motivation or if there’s something else that’s holding me back, but in any case I clearly haven’t solved this myself and don’t expect to solve it myself anytime soon, so I likely need help from someone in order for me to make a lot of progress soon.” Instead I did not admit this to people whom it would have been helpful for me to admit this to. I was afraid to be seen as incompetent. The conversations I had with others about where I was at with figuring out my career plans were superficial and did not help get me out of my pattern of unproductivity. If I had been more honest with myself about what I was failing at and why, I probably would have made a lot more progress on developing my career plans a lot sooner, which would have increased my expected impact in expectation, perhaps by a large amount.
Your post helped me admit a lot of things along these lines to myself and I think it will help me to not neglect putting in work (including asking for help when I need it, even if it’s uncomfortable for me to do so) to improve my career plans in the future. So thank you very much for writing this and sharing it.