Peter Wildeford

Along with my co-founder, Marcus A. Davis, I run Rethink Priorities. I’m also a Grant Manager for the Effective Altruism Infrastructure Fund and a top forecaster on Metaculus. Previously, I was a professional data scientist.
I think there’s a lot that’s intriguing here. I also really enjoyed the author’s prior takedown of “Why We Sleep”.
However, I need to throw a flag on the field for isolated demands for rigor / motivated reasoning here. You demand a lot from sleep science to prove its hypothesis that we need >7hrs of sleep, but then lean heavily on an unproven analogy to eating (why should we think sleeping and eating are similar?), the sleep patterns of a few hunter-gatherers (why should we think what hunter-gatherers did was the healthiest?), the sailing coach guy (the most compelling piece IMO, but it shouldn’t be taken as conclusive), and a single person who had brain surgery (that wasn’t even an RCT). If someone had the same scattered evidence in favor of sleep, there’s no way you’d accept it.
Maybe not sleeping doesn’t affect writing essays, but in the medical field, at least, there seems to be an increased risk of medical error among sleep-deprived physicians. “I’m pretty sure this is 100% psyop” goes too far.
For what it’s worth (and it should be worth roughly the same as this blog post), my personal anecdotes:
1.) Perhaps too convenient, and my data are non-random and not of great quality, but analyzing a year of my time-tracking data showed that sleeping exactly 8hrs (not more, not less) maximized my total hours worked (an imperfect but still useful metric of output). (A sketch of the kind of analysis I mean follows this list.)
2.) Multiple semi-sustained attempts of mine to regularly sleep <8hrs (including several attempts at polyphasic sleep) did not improve productivity.
3.) Sleeping <6hrs definitely gives me a feeling of “ugh I can’t do this because I’m too tired”, noticeable “brain fog”, and noticeably less willpower. (Though I’ve not tried to “adapt” long-term.)
4.) I would agree that oversleeping (>8hrs of sleep) harms my productivity though.
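(If it helps make point 1 concrete, here’s a minimal sketch in Python of the kind of analysis I mean. The CSV file and column names are hypothetical stand-ins, not my actual data or code.)

```python
# Illustrative sketch: bucket days by hours slept, then compare average output.
# "timetracking.csv" and its columns are hypothetical stand-ins for a real export.
import pandas as pd

df = pd.read_csv("timetracking.csv")  # assumed columns: date, hours_slept, hours_worked

df["sleep_bucket"] = df["hours_slept"].round()  # nearest whole hour of sleep
by_sleep = (
    df.groupby("sleep_bucket")["hours_worked"]
      .agg(["mean", "count"])  # average hours worked and sample size per bucket
      .sort_index()
)
print(by_sleep)  # in my data, the mean peaked at the 8hr bucket
```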
(Note: some of what I comment here is repeating the opinions of other people I talk to, but these people remain uncredited.)
Do you think it was a mistake to put “FTX” so prominently in the “FTX Future Fund” name? My thinking is that you likely want the goodness of EA and philanthropy to make people feel more positively about FTX, which seems fine to me, but in doing so you also run the risk that if FTX has any big scandal or other issue, it could cause blowback on EA, whether merited or not.
I understand the Future Fund has tried to distance itself from effective altruism somewhat, though I’m skeptical this has worked in practice.
To be clear, I do like FTX personally, am very grateful for what the FTX Future Fund does, and could see reasons why putting FTX in the name is also a positive.
Thanks. Is this person still active in the EA community? Does this person still have a role in “picking out promising students and funneling them towards highly coveted jobs”?
If anyone has any neartermist community building ideas, I’d be happy to evaluate them at any scale (from under $500K to $3M+). I’m on the EA Infrastructure Fund, and helping fund more neartermist ideas is one of my biggest projects for the fund. You can contact me at peter@rethinkpriorities.org to discuss further (though note that my grantmaking on the EAIF is not part of my work at Rethink Priorities).
Additionally, I’d be happy to talk with anyone who wants seed funding in global poverty, neartermist EA community building, mental health, family planning, wild animal suffering, biorisk, climate, or broad policy, and see how I can help get them started.
On one hand, it’s clear that global poverty gets the most overall EA funding right now, but it’s also clear that it’s easier for me to get my 20th-best longtermism idea funded than my 3rd-best animal idea or 3rd-best global poverty idea, and this asymmetry seems important.
Note that it may be hard to give criticism (even if anonymous) about FTX’s grantmaking because a lot of FTX’s grantmaking is (currently) not disclosed. This is definitely understandable and likely avoids certain important downsides, but it also does amplify other downsides (e.g., public misunderstanding of FTX’s goals and outputs) - I’m not sure how to navigate that trade-off, but it is important to acknowledge that it exists!
Will—of course I have some lingering reservations but I do want to acknowledge how much you’ve changed and improved my life.
You definitely changed my life by co-creating the Centre for Effective Altruism, which played a large role in organizations like Giving What We Can and 80,000 Hours, the organizations that drew me into EA. I was also very inspired by “Doing Good Better”.
To get more personal—you also changed my life when you told me in 2013 pretty frankly that my original plan to pursue a Political Science PhD wasn’t very impactful and that I should consider 80,000 Hours career coaching instead, which I did.
You also changed my life by being open about taking antidepressants, which is ~90% of the reason why I decided to also consider taking antidepressants even though I didn’t feel “depressed enough” (I definitely was). I felt like if you were taking them, and you seemed normal / fine / not clearly and obviously depressed all the time, yet benefited from them, then maybe I would also benefit from them (I did). It really shattered a stereotype for me.
You’re now an inspiration for me in terms of resilience. An impact journey isn’t always up and up and up all the time. 2022 and 2023 were hard for me. I imagine they were much harder for you—but you persevere, smile, and continue to show your face. I like that and want to be like that too.
I am happy to see that Nick and Will have resigned from the EV Board. I still respect them as individuals but I think this was a really good call for the EV Board, given their conflicts of interests arising from the FTX situation. I am excited to see what happens next with the Board as well as governance for EV as a whole. Thanks to all those who have worked hard on this.
I think it’s especially confusing when longtermists working on AI risk think there is a non-negligible chance total doom may befall us in 15 years or less, whereas so-called neartermists working on deworming or charter cities are seeking payoffs that only get realized on a 20-50 year time horizon.
I also found this incredibly alarming and would be very keen to hear more about this.
Thanks Habryka for raising the bar on the amount of detail given in grant explanations.
I really wish we (as an EA community) didn’t work so hard to accidentally make earning to give so uncool. It’s a path well within the reach of almost anyone, especially if you don’t have unrealistic expectations of how much money you need to make and donate to feel good about your contributions. It’s also a very flexible career path and can build good career capital along the way.
Sure, talent gaps are pressing, but many EA orgs also need more money. We also need more people looking to donate, as the current pool of EA funding is over-concentrated in the hands of too few decision-makers.
I also wish we didn’t accidentally make donating to AMF or GiveDirectly so uncool. Those orgs could continually absorb the money of everyone in EA and do great, life-saving work.
(Also, not to mention all the career paths that aren’t earning to give or “work in an EA org”...)
So my understanding is as follows.
Imagine that we had these five projects (and only these projects) in the EA portfolio:
- Alpha: Spend $100,000 to produce 1000 units of impact (after which Alpha will be exhausted and will produce no more units of impact; you can’t buy it twice)
- Beta: Spend $100,000,000 to produce 200,000 units of impact (likewise exhausted after one purchase)
- Gamma: Spend $1,000,000,000 to produce 300,000 units of impact (likewise exhausted after one purchase)
- GiveDeltaly: Spend any amount of money to produce one unit of impact for each $2000 spent (GiveDeltaly cannot be exhausted; you can buy it as many times as you want)
- Research: Spend $200,000 to create a new opportunity with the same “spend X for Y” profile as Alpha, Beta, Gamma, or GiveDeltaly
Early EA (say ~2013), with relatively fewer resources (we didn’t have $100M to spend), would’ve been ecstatic about Alpha because it only costs $100 to buy one unit of impact, which is much better than Beta’s $500 per unit, GiveDeltaly’s $2000 per unit, or Gamma’s $3333.33 per unit.
But “modern” EA, with lots of money and a shortage of opportunities to spend it on, would still gladly buy Alpha first but would be more excited by Beta, because Beta lets us deploy far more of our portfolio at a better-than-baseline effectiveness.
(And no one would be excited by Gamma—even though it’s a huge megaproject, it doesn’t beat our baseline of GiveDeltaly.)
~
Now let’s think in terms of allocating an EA bank account, and bring in Research. What should we use Research for? Early EA would want us to focus our research efforts on finding another opportunity like Alpha, since it is very cost-effective! But modern EA would rather we look for opportunities like Beta—even though Beta is less effective than Alpha, it can absorb 1000x more funds!
Say we have an EA bank account with $2,000,000,000. If we followed modern EA advice and bought Alpha, bought Beta, bought Research and used it to find another Beta, bought the second Beta, and then put the remainder into GiveDeltaly, we’d have 1,300,850 units of impact.
But if we followed Early EA advice and bought Alpha, bought Beta, bought Research and used it to find another Alpha, and bought the second Alpha, and then put the remainder into GiveDeltaly, we’d have 1,151,800 units of impact. Lower total impact even though we used research to find a more cost-effective intervention!
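(For anyone who wants to check the arithmetic, here’s a minimal sketch in Python. Everything in it comes straight from the example above; nothing else is assumed.)

```python
# One-shot projects as (cost in $, units of impact); leftover budget goes
# to GiveDeltaly at $2000 per unit.
ALPHA = (100_000, 1_000)
BETA = (100_000_000, 200_000)
GAMMA = (1_000_000_000, 300_000)
RESEARCH = (200_000, 0)  # unlocks a second copy of a chosen project; no direct impact
GIVEDELTALY_COST_PER_UNIT = 2_000
BUDGET = 2_000_000_000

def total_impact(projects, budget=BUDGET):
    """Buy each listed one-shot project once, then spend the rest on GiveDeltaly."""
    spent = sum(cost for cost, _ in projects)
    impact = sum(units for _, units in projects)
    return impact + (budget - spent) // GIVEDELTALY_COST_PER_UNIT

# Cost per unit of impact: Alpha $100, Beta $500, Gamma ~$3333.33
for name, (cost, units) in [("Alpha", ALPHA), ("Beta", BETA), ("Gamma", GAMMA)]:
    print(name, cost / units)

# "Modern EA": use Research to unlock a second Beta  -> 1,300,850 units
print(total_impact([ALPHA, BETA, RESEARCH, BETA]))
# "Early EA": use Research to unlock a second Alpha -> 1,151,800 units
print(total_impact([ALPHA, BETA, RESEARCH, ALPHA]))
```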
This implies that the scalability of the projects we identify can matter just as much as, if not more than, their cost-effectiveness! I think this scalability mindset is often missed by people who focus mainly on cost-effectiveness, and it’s the main reason IMO to think more about megaprojects.
But this does also imply that scalability isn’t the only thing that matters—no one wants to spend a dollar on Gamma even though it is very scalable.
Hi! I listened to your entire video. It was very brave and commendable. I really hope you’ve started something that will help get EA and the Bay Area rationalist scene into a much healthier and more impactful place. I think your analysis of the problem is very sharp. Thank you for coming forward and doing what you did.
I worry this is very overconfident speculation about the very far future. I’m inclined to agree with you, but I feel hard-pressed to put more than, say, 80% odds on it. I think the kind of s-risk nonhuman animal dystopia that Rowe mentions (and that Brian Tomasik has previously written about) seems possible enough to merit significant concern.
(To be clear, I don’t know how much I actually agree with this piece, agree with your counterpoint, or how much weight I’d put on other scenarios, or what those scenarios even are.)
Rethink Priorities is pretty close to this! We’ve done message testing now for many orgs across cause areas… the Centre for Effective Altruism, Will MacAskill, Open Phil, the Centre for the Study of Existential Risk, the Humane Society of the United States, The Humane League, Mercy for Animals, and various EA-aligned lobbyists. We have a lot of skills and resources to do this well, and we already have a well-built pipeline for producing this kind of work.
We’d be happy to consider doing more work for other people in EA and the EA movement as a whole!
I think a lot of the points in this post are valid, and they concern me. I hope they will be taken seriously.
My understanding is that this has indeed been an unfortunate vacuum, but as of a few months ago plans are underway to fix it. So I can say that at least some “people who might be able to fund this or otherwise make it happen” are working on it, though I’m not part of these plans, I don’t have much detail, and I won’t claim that the plans will actually work (or that they won’t—I don’t know).
I do think if anyone else decides to work on this it would be great if they would coordinate. I think it would be bad for us to have multiple non-coordinating media strategies targeted at “effective altruism” specifically.
I’d personally love to see one flagship-level online EAG with the level of resources it was given in 2020, in addition to the multiple in-person conferences. I think a virtual conference is a great supplement that increases the accessibility of the movement and I was really surprised by how much I enjoyed the virtual conferences in 2020.
I don’t think it’s witchhunty at all. The fact is we really have very little knowledge about how Will and Nick were involved with FTX. I really don’t think they committed or condoned any fraud, and I do genuinely feel bad for them, and I want to hope for the best when it comes to their character. I’m substantially unsure whether Will/Nick/others made any ex ante mistakes, but they definitely made severe ex post mistakes and lost a lot of trust in the community as a result.
I think this means three things:
1.) I think Nathan is right about the prior. If we’re unsure about whether they made severe ex ante mistakes, we should remove them; I’d only keep them if I were sure they did not make severe ex ante mistakes. This applies more forcefully the more severe the mistake was, and the situation with FTX makes me suspect that any mistakes could’ve been about as severe as you can get.
2.) I think it’s a mandatory job requirement for anyone on EVF’s board to maintain the trust of the community, so removing people over this makes sense.
3.) I think a traditional/”normie” board would’ve 100% removed Will and Nick back in November. Though I don’t think we should always do what a traditional board would do, it strikes me that EA in general is lacking in governance best practices and would benefit from moving in the traditional direction on at least some axes (though which axes, and how much, I’m still unsure).
Obviously I’m speaking very much only for myself here, purely personally, and definitely not on behalf of Rethink Priorities in any way.