>For anyone who’s had some experience with depression or anxiety, as well as with “some problems walking about,” it should be obvious that moderate depression or anxiety are (much) worse than moderate mobility problems, pound for pound.
That’s obvious for rich people, but not at all obvious for someone who risks hunger as a result of mobility problems.
I assume that by “cash-flow positive”, you mean supported by fees from workshop participants? I don’t consider that to be a desirable goal for CFAR.

Habryka’s analysis focuses on CFAR’s track record. But CFAR’s expected value comes mainly from possible results that aren’t measured by that track record.

My main reason for donating to CFAR is the potential for improving the rationality of people who might influence x-risks. That includes mainstream AI researchers who aren’t interested in the EA and rationality communities. The ability to offer them free workshops seems important to attracting the most influential people.
>which means that what everyone else is doing doesn’t matter all that much

Earning to give still matters a moderate amount. That’s mostly what I’m doing. I’m saying that the average EA should start with the outside view that they can’t do better than earning to give, and then attempt some more difficult analysis to figure out how they compare to the average.

And it’s presumably possible to matter more than the average earning-to-give EA, by devoting above-average thought to vetting new charities.
I’m unimpressed by the arguments for random funding of research proposals. The problems with research funding are mostly due to poor incentives, rather than to people being unable to do much better than random guessing. EA organizations don’t have ideal incentives, and may be on the path to unreasonable risk-aversion, but they still have a fairly sophisticated set of donors setting their incentives, and don’t yet appear to be particularly risk-averse or credential-oriented.

Unless something has changed in the last few years, there are still plenty of startups with plausible ideas that don’t get funded by Y Combinator or anything similar. Y Combinator clearly evaluates a lot more startups than I’m willing or able to evaluate, but it’s not obvious that they’re being less selective than I am about which ones they fund.

I mentioned Nick Bostrom and Eric Drexler because they’re widely recognized as competent. I didn’t mean to imply that we should focus more funding on people who are that well known—they do not seem to be funding-constrained now.

Let me add some examples of funding I’ve done that better characterize what I’m aiming for in charitable donations (at the cost of being harder for many people to evaluate):

My largest donations so far have been to CFAR, starting in early 2013, when their track record was rather weak, and they were almost unknown outside of people who had attended their workshops. That was based largely on impressions of Anna Salamon that I got by interacting with her (for reasons that were only marginally related to EA goals).

Another example is Aubrey de Grey. I donated to the Methuselah Mouse Prize for several years starting in 2003, when Aubrey had approximately no relevant credentials beyond having given a good speech at the Foresight Institute and having a similar paper on his little-known website.

Also, I respected Nick Bostrom and Eric Drexler fairly early in their careers. Not enough to donate to their charitable organizations at their very beginning (I wasn’t actively looking for effective charities before I heard of GiveWell), but enough that I bought and read their first books, primarily because I expected them to be thoughtful writers.
Speaking for why I haven’t donated, this is close to the key question:

>Then the question is (roughly) whether, given £60,000, it makes more sense to fund 1 researcher who’s cleared the EA hiring bar, or 10 who haven’t (and are in D).

My intuition has been that if those 10 are chosen at random, then I’m moderately confident that it’s better to fund the 1 well-vetted researcher.

EA is talent-constrained in the sense that it needs more people like Nick Bostrom or Eric Drexler, but much less so in the sense of needing more average EAs to do direct EA work.

I’ve done some angel investing in startups. I initially took an approach of trying to fund anyone who had a good idea. But that worked poorly, and I’ve shifted, as good VCs advise, to looking for signs of unusual competence in founders. (Alas, I still don’t have much reason to think I’m good at angel investing.) And evaluating founders’ competence feels harder than evaluating a business idea, so I’m not willing to do it very often.

I use a similar approach with donating to early-stage charities, expecting to see many teams with decent ideas, but expecting the top 5% to be more than 10 times as valuable as the average. And I’m reluctant to evaluate more pre-track-record projects than I’m already doing.

With the hotel, I see a bunch of little hints that it’s not worth my time to attempt an in-depth evaluation of the hotel’s leaders. E.g. the focus on low rent, which seems like a popular meme among average and below-average EAs in the Bay Area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.

I can imagine that the hotel attracts better-than-random EAs, but it’s also easy to imagine that it selects mainly for people who aren’t good enough to belong at a top EA organization.

Halffull has produced a better argument for the EA Hotel, but I find it somewhat odd that he starts with arguments that seem weak to me, and only in the middle does he get around to claims that are relevant to whether the hotel is better than a random group of EAs.

Also, if donors fund any charity that has a good idea, I’m a bit concerned that that will attract a larger number of low-quality projects, much like the quality of startups declined near the peak of the dot-com bubble, when investors threw money at startups without much regard for competence.
Here are a few examples of strategies that look (or looked) equally plausible, from the usually thoughtful blog of my fellow LessWronger Colby Davis.

This blog post recommends:
- emerging markets, which overlaps a fair amount with my advice
- put-writing, which sounds reasonable to me, but he managed to pick a bad time to advocate it
- preferred stock, which looks appropriate today for more risk-averse investors, but which looked overpriced when I wrote my post.

This post describes one of his failures. Buying XIV was almost a great idea. It was a lot like shorting VXX, and shorting VXX is in fact a good idea for experts who are cautious enough not to short too much (alas, the right amount of caution is harder to know than most people expect). I expect the rewards in this area to go only to those who accept hard-to-evaluate risks.

This post has some strategies that require more frequent trading. I suspect they’re good, but I haven’t given them enough thought to be confident.
Hi, I’m Bayesian Investor.

I doubt that following my advice would be riskier than the S&P 500 - the low-volatility funds reduce the risk in important ways (mainly by moving less in bear markets) that roughly offset the features which increase risk.

It’s rational for most people to ignore my advice, because there’s lots of other (somewhat conflicting) advice out there that sounds equally plausible to most people.

I’ve got lots of evidence about my abilities (I started investing as a hobby in 1980, and it’s been my main source of income for 20 years). But I don’t have an easy way to provide much evidence of my abilities in a single blog post.
I’m a little confused by this reply. Did you think I was complaining that you over-estimated the costs of weight loss? Let me emphasize that I was complaining about the actual resources devoted to weight loss, not your estimates of them. I’ll guess that you under-estimated those costs, by focusing on money spent, rather than trying to evaluate the psychological costs.
My main point is that we should focus more on getting people to switch from typical weight loss approaches to ones that are easier and more effective.
I’m unsure what to infer from your weight satisfaction evidence. It might mean that some people notice that obesity is harming them (via sleep apnea? romantic problems?) and that’s what causes them to worry. Or it might mean they’re just more responsive to peer pressure, and it’s the peer pressure, not the obesity, that’s harmful.
I suspect you underestimate the cost of obesity.
But there’s something seriously wrong with the cost of the typical weight loss approach, and your ROI estimate might be close to the right answer for that.
I believe it’s possible to adopt a much better than average approach to weight loss, by focusing more on switching to healthier foods (based on the Satiety Index, or on high fiber content), and/or some form of intermittent fasting.
I expect that good software engineers are more likely to figure out for themselves how to be more efficient than they are to figure out how to increase their work quality. So it’s not obvious what to infer from “it’s harder for an employer to train people to work faster”—does it just mean that the employer has less need to train the slow, high-quality worker?
Regulations shouldn’t be much of a problem for subsidized prediction markets. The regulations are designed to protect people from losing their investments. You can avoid that by not taking investments—i.e. give every trader a free account. Just make sure any one trader can’t create many accounts.
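One concrete way to structure such a market is Hanson’s logarithmic market scoring rule, where the sponsor fixes their worst-case loss in advance (that loss is the subsidy) and traders spend free play-money grants rather than investments. Here’s a minimal sketch in Python; the class, the one-account-per-identity check, and all parameter values are my own inventions for illustration, not any particular platform’s API:

```python
import math

class SubsidizedBinaryMarket:
    """Toy two-outcome market maker using Hanson's logarithmic
    market scoring rule (LMSR). The sponsor's worst-case loss,
    and hence the subsidy required, is bounded by b * ln(2)."""

    def __init__(self, b=100.0):
        self.b = b                # liquidity parameter: bigger b = deeper market
        self.q = [0.0, 0.0]       # outstanding shares of YES and NO

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Current implied probability of `outcome` (0 = YES, 1 = NO)."""
        total = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        """Charge the trader the cost-function difference, in play money."""
        new_q = list(self.q)
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

# Every verified identity gets one free account with a fixed grant,
# so nobody risks an investment and nobody can trade under two names.
accounts = {}
def open_account(identity, grant=1000.0):
    if identity in accounts:
        raise ValueError("one account per trader")
    accounts[identity] = grant

market = SubsidizedBinaryMarket(b=100.0)
open_account("alice")
accounts["alice"] -= market.buy(0, 50)   # Alice buys 50 YES shares
print(round(market.price(0), 3))         # price moves above 0.5
```

The sponsor only ever pays out the bounded worst-case loss (about 69 points per question here), and since traders only ever risk their free grant, nobody is making the kind of investment the regulations were written to protect.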
Alas, it’s quite hard to predict how much it will cost to generate good predictions, regardless of what approach you take.
Drexler would disagree with some of Richard’s phrasing, but he seems to agree that most (possibly all) of (somewhat modified versions of) those 6 reasons should cause us to be somewhat worried. In particular, he’s pretty clear that powerful utility maximisers are possible and would be dangerous.
I think it’s more appropriate to use Bostrom’s Moral Parliament to deal with conflicting moral theories.

Your approach might be right if the theories you’re comparing used the same concept of utility, and merely disagreed about what people would experience.

But I expect that the concept of utility which best matches human interests will say that “infinite utility” doesn’t make sense. Therefore I treat the word utility as referring to different phenomena in different theories, and I object to combining them as if they were the same.

Similarly, I use a dealist approach to morality. If you show me an argument that there’s an objective morality which requires me to increase the probability of infinite utility, I’ll still ask what would motivate me to obey that morality, and I expect any resolution of that will involve something more like Bostrom’s parliament than like your approach.
>For all actions have a non-zero chance of resulting in infinite positive utility.
Human utility functions seem clearly inconsistent with infinite utility. See Alex Mennen’s Against the Linear Utility Hypothesis and the Leverage Penalty for arguments.
I don’t identify 100% with future versions of myself, and I’m somewhat selfish, so I discount experiences that will happen in the distant future. I don’t expect any set of possible experiences to add up to something I’d evaluate as infinite utility.
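A minimal sketch of why that combination rules out infinite totals, in my own notation (not anything from Mennen’s post): if each moment’s utility is bounded by some constant $B$, and I discount future experiences at a geometric rate $\gamma < 1$, then

$$\left|\sum_{t=0}^{\infty} \gamma^{t} u_t\right| \;\le\; \sum_{t=0}^{\infty} \gamma^{t} B \;=\; \frac{B}{1-\gamma} \;<\; \infty.$$

So no sequence of experiences sums to infinite utility unless I abandon either the bound or the discounting.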
I disagree with your analysis of “are we that ignorant?”.
For things like nuclear war or financial meltdown, we’ve got lots of relevant data, and not too much reason to expect new risks. For advanced nanotechnology, I think we are ignorant enough that a 10% chance sounds right (I’m guessing it will take something like $1 billion in focused funding).
With AGI, ML researchers can be influenced to change their forecast by 75 years by subtle changes in how the question is worded. That suggests unusual uncertainty.
We can see from Moore’s law and from ML progress that we’re on track for something at least as unusual as the industrial revolution.
The stock and bond markets do provide some evidence of predictability, but I’m unsure how good they are at evaluating events that happen much less than once per century.
I’m a little unclear on what you are asking.
How strictly do you mean when you say “provably safe”? That seems like an area where all AI safety researchers are hesitant to say how high they’re aiming.
And by “have it implemented”, do you mean fully develop it on their own, or do you include scenarios where they convey key insights to Google, and thereby cause Google to do something safer?
I don’t trust the author (Lomborg), based on the exaggerations I found in his book Cool It.
I reviewed that book here.
I suggest starting with MAPS.
I think markets that have at least 20 people trading on any given question will on average be at least as good as any alternative.
Your comments about superforecasters suggest that you think what matters is hiring the right people. What I think matters is the incentives the people are given. Most organizations produce bad forecasts because they have goals which distract people from the truth. The biggest gains from prediction markets are due to replacing bad incentives with incentives that are closely connected with accurate predictions.
There are multiple ways to produce good incentives, and for internal office predictions, there’s usually something simpler than prediction markets that works well enough.
I object to the idea that early stage Alzheimer’s is incurable. See the book The End of Alzheimer’s.