I’m a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.
The original approach was rather erratic about finding high-value choices, and was weak at identifying the root causes of the biggest mistakes.
So participants would become more rational about flossing regularly, but rarely noticed that they weren’t accomplishing much when they argued at length with people who were wrong on the internet. The latter often required asking embarrassing questions about their motives, and sometimes realizing that they were less virtuous than assumed. People will, by default, tend to keep their attention away from questions like that.
The original approach reflected trends in academia to prioritize attention on behaviors that were most provably irrational, rather than on what caused the most harm. Part of the reason that CFAR hasn’t documented their successes well is that they’ve prioritized hard-to-measure changes.
To the best of my knowledge, internal cost-effectiveness analyses rarely if ever turn up negative.
Here’s one example of an EA org analyzing the effectiveness of their work, and concluding the impact sucked:
CFAR in 2012 focused on teaching EAs to be fluent in Bayesian reasoning, and more generally to follow the advice from the Sequences. CFAR observed that this had little impact, and after much trial and error abandoned large parts of that curriculum.
This wasn’t a quantitative cost-effectiveness analysis. It was more a subjective impression of “we’re not getting good enough results to save the world, we can do better”. CFAR did do an RCT which showed disappointing results, but I doubt this was CFAR’s main reason for change.
These lessons percolated out to LessWrong blogging, which now focuses less on Bayes’ theorem and the Sequences, but without calling a lot of attention to the lessons behind that shift.
I expect that most EAs who learned about CFAR after about 2014 underestimate the extent to which CFAR’s initial strategies were wrong, and therefore underestimate the evidence that initial approaches to EA work are mistaken.
It seems strange to call populism anti-democratic.
My understanding is that populists usually want more direct voter control over policy. The populist positions on immigration and international trade seem like stereotypical examples of conflicts where populists side with the average voter more than do the technocrats who they oppose.
Please don’t equate anti-democratic with bad. It seems mostly good to have democratic control over the goals of public policy, but let’s aim for less democratic control over factual claims.
I doubt that that study was able to tell whether the dietary changes improved nutrition. They don’t appear to have looked at many nutrients, or figured out which nutrients the subjects were most deficient in. Even if they had quantified all important nutrients in the diet, nutrients in seeds are less bioavailable than nutrients in animal products (and that varies depending on how the seeds are prepared).
There’s lots of somewhat relevant research, but it’s hard to tell which of it is important, and maybe hard for the poor to figure out whether they ought to trust the information that comes from foreigners who claim to be trying to help.
I’ll guess that more sweet potatoes ought to be high on any list of cheap improvements, and also suggest that small increases in fruit and seafood are usually valuable. But there will be lots of local variation in what’s best.
Could much of the problem be due to the difficulty of starting treatment soon enough after infection?
I see some important promise in this idea, but it looks really hard to convert the broad principles into something that’s both useful and clear enough that a lawyer could decide whether their employer was obeying it.
10 years’ worth of cash sounds pretty unusual, at least for an EA charity.
But part of my point is that when stocks are low, the charity won’t have enough of a cushion to do any investing, so it won’t achieve the kind of returns that you’d expect from buying stocks at a no-worse-than-random time. E.g. I’d expect that a charity that tries to buy stocks would have bought around 2000 when the S&P was around 1400, sold some of that in 2003 when the S&P was around 1100 to make up for a shortfall in donations, bought again in 2007 at 1450, then sold again in 2009 at 1100. With patterns like that, it’s easy to get negative returns.
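To make the arithmetic concrete, here’s a toy Python calculation of those hypothetical trades (the years and S&P levels above are my rough illustrations, not historical data):

```python
# A toy calculation of the hypothetical buy-high / sell-low pattern above.
trades = [
    (2000, 1400, 2003, 1100),  # bought high, forced to sell in a bear market
    (2007, 1450, 2009, 1100),  # the same pattern the next cycle
]
for buy_year, buy, sell_year, sell in trades:
    gain = (sell / buy - 1) * 100
    print(f"bought {buy_year} at {buy}, sold {sell_year} at {sell}: {gain:+.0f}%")
# bought 2000 at 1400, sold 2003 at 1100: -21%
# bought 2007 at 1450, sold 2009 at 1100: -24%
```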
Individual investors often underperform markets for the same reason. They can avoid that by investing only what they’re saving for retirement. However, charities generally shouldn’t have anything equivalent to saving for retirement.
Cash sitting in a charity bank account costs money, so if you have lots of it, invest some;
But the obvious ways to invest (i.e. stocks) work poorly when combined with countercyclical spending. Charities are normally risk-averse about investments because they have plenty of money to invest when stocks are high, but need to draw down reserves when stocks are low.
When I tell people that prisons and immigration should use a similar mechanism, they sometimes give me a look of concern. This concern is based on a misconception
I’ll suggest that some people’s concerns are due to an accurate intuition that your proposal will make it harder to hide the resemblance between prisons and immigration restrictions. Preventing people from immigrating looks to me fairly similar to imprisoning them in their current country.
It would be much easier to make a single, more generic policy statement. Something like:
When in doubt, assume that most EAs agree with whatever opinions are popular in London, Berkeley, and San Francisco.
Or maybe:
When in doubt, assume that most EAs agree with the views expressed by the most prestigious academics.
Reaffirming this individually for every controversy would redirect attention (of whichever EAs are involved in the decision) away from core EA priorities.
Another risk is that increased distrust impairs the ability of authorities to do test and trace in low-income neighborhoods, which seem to now be key areas where the pandemic is hardest to control.
EA is in danger of making itself a niche cause by loudly focusing on topics like x-risk
EA has been a niche cause, and changing that seems harder than solving climate change. Increased popularity would be useful, but shouldn’t become a goal in and of itself.
If EAs should focus on climate change, my guess is that it should be a niche area within climate change. Maybe altering the albedo of buildings?
How about having many locations that are open only to people who are running a tracking app?
I’m imagining that places such as restaurants, gyms, and airplanes could require that people use tracking apps in order to enter. Maybe the law should require that as a default for many locations, with the owners able to opt out if they post a conspicuous warning?
How hard would this be to enforce?
Hmm. Maybe you’re right. I guess I was thinking there was an important difference between “constant leverage” and infrequent rebalancing. But I guess that’s a more complicated subject.
See Colby Davis on the problems with leveraged ETFs.
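To illustrate the difference I had in mind, here’s a minimal Python sketch using a made-up oscillating price path (not real market data, and ignoring borrowing costs and fees) that compares daily rebalancing to a leveraged position that’s never rebalanced:

```python
# "Constant leverage" (daily rebalancing, as in a leveraged ETF) vs.
# borrowing once and never rebalancing. The price path is made up:
# it oscillates and ends exactly where it started.
up, down = 0.05, 1 - 1 / 1.05          # chosen so each up/down pair is flat
daily_returns = [up, -down] * 100      # 200 trading days, net price change: 0

lev = 3.0

# Constant leverage: exposure is reset to 3x equity every day.
constant = 1.0
for r in daily_returns:
    constant *= 1 + lev * r

# No rebalancing: buy 3 units with 1 of equity plus a fixed loan of 2,
# then hold, repaying the loan at the end.
underlying = 1.0
for r in daily_returns:
    underlying *= 1 + r
hold = lev * underlying - (lev - 1)

print(f"daily rebalancing: {constant:.3f}")  # ~0.24: large volatility decay
print(f"no rebalancing:    {hold:.3f}")      # 1.000: flat, like the underlying
```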
I like this post a good deal.
However, I think you overstate the benefits.
I like the idea of shorting the S&P and buying global ex-US stocks, but beware that past correlations between markets only provide a rough guess about future correlations.
I’m skeptical that managed futures will continue to do as well as backtesting suggests. Futures markets are new enough that institutional investors have likely done a moderate amount of learning over the past couple of decades, so those markets are likely more efficient now than their history suggests. Returns also depend on recognizing good managers, which tends to be harder than most people expect.
Startups might be good for some people, but it’s generally hard to tell. Are you able to find startups before they apply to Y Combinator? Or do startups only come to you if they’ve been rejected by Y Combinator? Those are likely to have large effects on your expected returns. I’ve invested in about 10 early-stage startups over a period of 20 years, and I still have little idea of what returns to expect from my future startup investments.
I’m skeptical that momentum funds work well. Momentum strategies work if implemented really well, but a fund that tries to automate the strategy via simple rules is likely to lose the benefits to transaction costs and to other traders who anticipate the fund’s trades. Or if it avoids simple rules, most investors won’t be able to tell whether it’s a good fund. And if the strategy becomes too popular, that can easily cause returns to become significantly negative (whereas with value strategies, popularity will more likely drive returns to approximately the same as the overall market).
Nearly all of CFAR’s activity is motivated by its expected effects on people who are likely to impact AI. As a donor, I don’t distinguish much between the various types of workshops.
There are many ways that people can impact AI, and I presume the different types of workshop are slightly optimized for different strategies and different skills, and differ a bit in how strongly they’re selecting for people who have a high probability of doing AI-relevant things. CFAR likely doesn’t have a good prediction in advance about whether any individual person will prioritize AI, and we shouldn’t expect them to try to admit only those with high probabilities of working on AI-related tasks.
OAK intends to train people who are likely to have important impacts on AI, to help them be kinder or something like that. So I see a good deal of overlap with the reasons why CFAR is valuable.
I attended a 2-day OAK retreat. It was run in a professional manner that suggests they’ll provide a good deal of benefit to the people they train. But my intuition is that the impact will mainly be to make those people happier, and I expect OAK to have less effect on people’s behavior than CFAR has.
I considered donating to OAK as an EA charity, but have decided it isn’t quite effective enough for me to treat it that way.
I believe that the person who promoted that grant at SFF has more experience with OAK than I do.
I’m surprised that SFF gave more to OAK than to ALLFED.
With almost all of those proposed intermediate goals, it’s substantially harder to evaluate whether the goal will produce much value. In most cases, it will be tempting to define the intermediate goal in a way that is easy to measure, even when doing so weakens the connection between the goal and health.
E.g. good biomarkers of aging would be very valuable if they measure what we hope they measure. But your XPrize link suggests that people will be tempted to use expert acceptance in place of hard data. The benefits of biomarkers have been frequently overstated.
It’s clear that most donors want prizes to have a high likelihood of being awarded fairly soon. But I see that desire as generally unrelated to a desire for maximizing health benefits. I’m guessing it indicates that donors prefer quick results over high-value results, and/or that they overestimate their knowledge of which intermediate steps are valuable.
A $10 million aging prize from an unknown charity might have serious credibility problems, but I expect that a $5 billion prize from the Gates Foundation or OpenPhil would be fairly credible—they wouldn’t actually offer the prize without first getting some competent researchers to support it, and they’d likely first try out some smaller prizes in easier domains.
Hanson reports estimates that under our current system, elites have about 16 times as much influence as the median person.
My guess is that under futarchy, the wealthy would have somewhere between 2 and 10 times as much influence on outcomes that are determined via trading.
You seem to disagree with at least one of those estimates. Can you clarify where you disagree?