I’m a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.
PeterMcCluskey
The Future of Earning to Give
Speaking for why I haven’t donated, this is close to the key question:
>Then the question is (roughly) whether, given £60,000, it makes more sense to fund 1 researcher who’s cleared the EA hiring bar, or 10 who haven’t (and are in D).
My intuition has been that if those 10 are chosen at random, then I’m moderately confident that it’s better to fund the 1 well-vetted researcher.
EA is talent-constrained in the sense that it needs more people like Nick Bostrom or Eric Drexler, but much less in the sense of needing more people who are average EAs to do direct EA work.
I’ve done some angel investing in startups. I initially took an approach of trying to fund anyone who had a good idea. But that worked poorly, and I’ve shifted, as good VCs advise, to looking for signs of unusual competence in founders. (Alas, I still don’t have much reason to think I’m good at angel investing). And evaluating founders’ competence feels harder than evaluating a business idea, so I’m not willing to do it very often.
I use a similar approach with donating to early-stage charities, expecting to see many teams with decent ideas, but expecting the top 5% to be more than 10 times as valuable as the average. And I’m reluctant to evaluate more pre-track-record projects than I’m already doing.
With the hotel, I see a bunch of little hints that it’s not worth my time to attempt an in-depth evaluation of the hotel’s leaders. E.g. the focus on low rent, which seems like a popular meme among average and below-average EAs in the Bay Area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.
I can imagine that the hotel attracts better than random EAs, but it’s also easy to imagine that it selects mainly for people who aren’t good enough to belong at a top EA organization.
Halffull has produced a better argument for the EA Hotel, but I find it somewhat odd that he starts with arguments that seem weak to me, and only gets around in the middle to claims that are relevant to whether the hotel is better than a random group of EAs.
Also, if donors fund any charity that has a good idea, I’m a bit concerned that this will attract a larger number of low-quality projects, much like the quality of startups declined near the peak of the dot-com bubble, when investors threw money at startups without much regard for competence.
War is more likely when the population has a higher fraction of young men (e.g. see Angry Young Men Are Making the World Less Stable). That doesn’t quite say that young men vote more for war, but it’s suggestive.
More war could easily overwhelm any benefits from weighted voting.
It seems strange to call populism anti-democratic.
My understanding is that populists usually want more direct voter control over policy. The populist positions on immigration and international trade seem like stereotypical examples of conflicts where populists side with the average voter more than do the technocrats whom they oppose.
Please don’t equate anti-democratic with bad. It seems mostly good to have democratic control over the goals of public policy, but let’s aim for less democratic control over factual claims.
The ESG Alignment Problem
>To the best of my knowledge, internal CEAs rarely if ever turn up negative.
Here’s one example of an EA org analyzing the effectiveness of their work, and concluding the impact sucked:
CFAR in 2012 focused on teaching EAs to be fluent in Bayesian reasoning, and more generally to follow the advice from the Sequences. CFAR observed that this had little impact, and after much trial and error abandoned large parts of that curriculum.
This wasn’t a quantitative cost-effectiveness analysis. It was more a subjective impression of “we’re not getting good enough results to save the world, we can do better”. CFAR did do an RCT which showed disappointing results, but I doubt this was CFAR’s main reason for change.
These lessons percolated out to LessWrong blogging, which now focuses less on Bayes’ theorem and the Sequences, but without calling much attention to the change.
I expect that most EAs who learned about CFAR after about 2014 underestimate the extent to which CFAR’s initial strategies were wrong, and therefore underestimate the evidence that initial approaches to EA work are mistaken.
I’m unimpressed by the arguments for random funding of research proposals. The problems with research funding are mostly due to poor incentives, rather than people being unable to do much better than random guessing. EA organizations don’t have ideal incentives, and may be on the path to unreasonable risk-aversion, but they still have a fairly sophisticated set of donors setting their incentives, and don’t yet appear to be particularly risk-averse or credential-oriented.
Unless something has changed in the last few years, there are still plenty of startups with plausible ideas that don’t get funded by Y Combinator or anything similar. Y Combinator clearly evaluates a lot more startups than I’m willing or able to evaluate, but it’s not obvious that they’re being less selective than I am about which ones they fund.
I mentioned Nick Bostrom and Eric Drexler because they’re widely recognized as competent. I didn’t mean to imply that we should focus more funding on people who are that well known—they do not seem to be funding-constrained now.
Let me add some examples of funding I’ve done that better characterize what I’m aiming for in charitable donations (at the cost of being harder for many people to evaluate):
My largest donations so far have been to CFAR, starting in early 2013, when their track record was rather weak, and almost unknown outside of people who had attended their workshops. That was based largely on impressions of Anna Salamon that I got by interacting with her (for reasons that were only marginally related to EA goals).
Another example is Aubrey de Grey. I donated to the Methuselah Mouse Prize for several years starting in 2003, when Aubrey had approximately no relevant credentials beyond having given a good speech at the Foresight Institute and a similar paper on his little-known website.
Also, I respected Nick Bostrom and Eric Drexler fairly early in their careers. Not enough to donate to their charitable organizations at their very beginning (I wasn’t actively looking for effective charities before I heard of GiveWell). But enough that I bought and read their first books, primarily because I expected them to be thoughtful writers.
I agree that there’s a lot of hindsight bias here, but I don’t think that tweet tells us much.
My question for Dony is: what questions could we have asked FTX that would have helped? I’m pretty sure I wouldn’t have detected any problems by grilling FTX. Maybe I’d have gotten some suspicions by grilling people who’d previously worked with SBF, but I can’t think of what would have prompted me to do that.
You’re mostly right. But I have some important caveats.
The Fed acted for several decades as if it was subject to political pressure to reduce inflation. Economists mostly agree that the optimal inflation rate is around 2%. Yet from 2008 to about 2019 the Fed acted as if that were an upper bound, not a target.
But that doesn’t mean that we always need more political pressure for inflation. In the 1960s and 1970s, there was a fair amount of political pressure to increase monetary stimulus by whatever it took to reduce unemployment. That worked well when inflation was creeping up around 2 or 3%, but as it got higher it reduced economic stability without doing much for unemployment. So I don’t want EAs to support unconditional increases in inflation. To the extent that we can do something valuable, it should be to focus more attention on achieving a goal such as 2% inflation or 4% NGDP growth.
I don’t see signs that the pressure to keep inflation below 2% came from the rich. Rich people and companies mostly know how to do well in an inflationary environment. The pressure seems to be coming from fairly average voters who are focused on the prices of gas and meat, and from people who live on fixed pensions.
Economic theory doesn’t lend much support to the idea that it’s risky to have unusually large increases in the money supply. Most of the concern seems to come from people who assume the velocity of money is pretty stable. That assumption has often worked okay, but it was pretty far off in 2008 and 2020.
It’s not clear why there would be much risk, as long as the Fed adjusts the money supply to maintain an inflation or NGDP target. You’re correct to worry that the inflation of 2021 provides some reasons for concern about whether the Fed will do that. My impression is that the main problem was that the Fed committed in 2020 to a particular path of interest rates over the next few years, when its commitments ought to be focused on a target such as inflation or NGDP. This is an area where economists still have some important disagreements.
It’s pretty clear that both unusually high and unusually low inflation cause important damage. Yet too many people worry about only one of these risks.
For more on this subject, read Sumner’s book The Money Illusion (which I reviewed here).
OAK intends to train people who are likely to have important impacts on AI, to help them be kinder or something like that. So I see a good deal of overlap with the reasons why CFAR is valuable.
I attended a 2-day OAK retreat. It was run in a professional manner that suggests they’ll provide a good deal of benefit to the people they train. But my intuition is that the impact will be mainly to make those people happier, and I expect OAK to have less effect on people’s behavior than CFAR has.
I considered donating to OAK as an EA charity, but have decided it isn’t quite effective enough for me to treat it that way.
I believe that the person who promoted that grant at SFF has more experience with OAK than I do.
I’m surprised that SFF gave more to OAK than to ALLFED.
I suspect that principal–agent problems are the biggest single obstacle to alignment. That leads me to suspect it’s less tractable than you indicate.
I’m interested in what happened with Netflix. Ten years ago their recommendation system seemed focused almost exclusively on maximizing user ratings of movies. That dramatically improved my ability to find good movies.
Yet I didn’t notice many people paying attention to those benefits. Netflix has since then shifted toward less aligned metrics. I’m less satisfied with Netflix now, but I’m unclear what other users think of the changes.
I agree with most of your comment.
>Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.
That seems like almost the opposite of what the 80k post says. It says the people who get hired are not very replaceable. But it also appears to say that people who get evaluated as average by EA orgs are 2 or more standard deviations less productive, which seems to imply that they’re pretty replaceable.
Hanson reports estimates that under our current system, elites have about 16 times as much influence as the median person.
My guess is that under futarchy, the wealthy would have somewhere between 2 and 10 times as much influence on outcomes that are determined via trading.
You seem to disagree with at least one of those estimates. Can you clarify where you disagree?
>For anyone who’s had some experience with depression or anxiety, as well as with “some problems walking about,” it should be obvious that moderate depression or anxiety are (much) worse than moderate mobility problems, pound for pound.
That’s obvious for rich people, but not at all obvious for someone who risks hunger as a result of mobility problems.
I assume that by “cash-flow positive”, you mean supported by fees from workshop participants?
I don’t consider that to be a desirable goal for CFAR.
Habryka’s analysis focuses on CFAR’s track record. But CFAR’s expected value comes mainly from possible results that aren’t measured by that track record.
My main reason for donating to CFAR is the potential for improving the rationality of people who might influence x-risks. That includes mainstream AI researchers who aren’t interested in the EA and rationality communities. The ability to offer them free workshops seems important to attracting the most influential people.
It’s risky to connect AI safety to one side of an ideological conflict.
Can you give any examples of AI safety organizations that became less able to get funding due to lack of results?
Worrying about the percent of spending misses the main problems, e.g. donors who notice the increasing grift become less willing to trust the claims of new organizations, thereby missing some of the best opportunities.
I’ve contributed small amounts of money to MAPS, but I haven’t been thinking of those as EA donations.
My doubts overlap a fair amount with those of Scott Alexander, but I’ll focus on somewhat different reasoning which led me there.
It sounds like MAPS has been getting impressive results, and MAPS would likely qualify as an EA charity if FDA approval were the main obstacle to extending those results to the typical person who seeks help with PTSD. However, I suspect there are other important obstacles.
I know a couple of people, who I think consider themselves EAs, who have been trying to promote an NLP-based approach to treating PTSD, which reportedly has a higher success rate than MAPS has reported. The basic idea behind it has been around for years, without spreading very widely, and without much interest from mainstream science.
Maybe the reports I hear involve an improved version of the basic technique, and it will take off as soon as the studies based on the new version are published.
Or maybe the glowing reports are based on studies that attracted both therapists and patients who were unusually well suited for NLP, and don’t generalize to random therapists and random PTSD patients. And maybe the MAPS study has similar problems.
Whatever the case is there, the ease with which I was able to stumble across an alternative to psychedelics that sounds about equally promising is some sort of evidence against the hypothesis that there’s a shortage of promising techniques to treat PTSD.
I suspect there are important institutional problems in getting mental health professionals to adopt techniques that provide quick fixes. I doubt it’s a complete coincidence that the number of visits required for successful therapy happens to resemble a number that maximizes revenue per patient.
If that were simply a conspiracy of medical professionals, and patients were eager to work around them, I’d be vaguely hopeful of finding a way to do so. But I’m under the impression that patients have a weak tendency to contribute to the problem, by being more likely to recommend to their friends a therapist whom they see for a long time than one whom they stop seeing after a month because they were cured that fast. And I don’t see lots of demand for alternative routes to finding therapists with good track records.
None of these reasons for doubt is quite sufficient by itself to decide that MAPS isn’t an EA charity, but together they capture at least half of my reasons for feeling somewhat pessimistic about this cause area.
I agree very much with your guess that SBF’s main mistake was pride.
I still have some unpleasant memories from the 1984 tech stock bubble, of being reluctant to admit that my successes during the bull market didn’t mean that I knew how to handle all market conditions.
I still feel some urges to tell the market that it’s wrong, and to correct the market by pushing up prices of fallen stocks to where I think they ought to be. Those urges lead to destructive delusions. If my successes had gotten the kind of publicity that SBF got, I expect that I would have made mistakes that left me broke.