I’m a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.
PeterMcCluskey
Convincing a venue to implement it well (or rewarding one that has already done that) will have benefits that last more than three days.
I agree about the difficulty of developing major new technologies in secret. But you seem to be mostly overstating the problems with accelerating science. E.g.:
These passages seem to imply that the rate of scientific progress is primarily limited by the number and intelligence level of those working on scientific research. Here it sounds like you’re imagining that the AI would only speed up the job functions that get classified as “science”, whereas people are suggesting the AI would speed up a wide variety of tasks including gathering evidence, building tools, etc.
My understanding of Henrich’s model says that reducing cousin marriage is a necessary but hardly sufficient condition to replicate WEIRD affluence.
European culture likely had other features which enabled cooperation on larger-than-kin-network scales. Without those features, a society that stops cousin marriage could easily end up with only cooperation within smaller kin networks. We shouldn’t be confident that we understand what the most important features are, much less that we can cause LMICs to have them.
Successful societies ought to be risk-averse about this kind of change. If this cause area is worth pursuing, it should focus on the least successful societies. But those are also the societies that are least willing to listen to WEIRD ideas.
Also, the idea that reduced cousin marriage was due to some random church edict seems to be the most suspicious part of Henrich’s book. See The Explanation of Ideology for some claims that the nuclear family was normal in northwest Europe well before Christianity.
Resilience seems to matter for human safety mainly via food supply risks. I’m not too concerned about that, because the world is producing a good deal more food than is needed to support our current population. See my more detailed analysis here.
It’s harder to evaluate the effects on other species. I expect a significant chance that technological changes will make current biodiversity efforts irrelevant. So to the limited extent I’m worried about wild animals, I’m focused more on ensuring that technological change develops so as to keep as many options open as possible.
Why has this depended on NIH? Why aren’t some for-profit companies eager to pursue this?
This seems to nudge people in a generally good direction.
But the emphasis on slack seems somewhat overdone.
My impression is that people who accomplish the most typically have had small to moderate amounts of slack. They made good use of their time by prioritizing their exploration of neglected questions well. That might create the impression of much slack, but I don’t see slack as a good description of the cause.
One of my earliest memories of Eliezer is of him writing something to the effect that he didn’t have time to be a teenager (probably on the Extropians list, but I haven’t found it).
I don’t like the way you classify your approach as an alternative to direct work. I prefer to think of it as a typical way to get into direct work.
I’ve heard a couple of people mention recently that AI safety is constrained by the shortage of mentors for PhD theses. That seems wrong. I hope people don’t treat a PhD as a standard path to direct work.
I also endorse Anna’s related comments here.
This seems mostly right, but it still doesn’t seem like the main reason that we ought to talk about global health.
There are lots of investors visibly trying to do things that we ought to expect will make the stock market more efficient. There are still big differences between companies in returns on R&D or returns on capital expenditures. Those returns go mainly to people who can found a Moderna or Tesla, not to ordinary investors.
There are not (yet?) many philanthropists who try to make the altruistic market more efficient. But even if there were, there’d be big differences in who can accomplish what kinds of philanthropy.
Introductory EA materials ought to reflect that: instead of one strategy being optimal for everyone who wants to be an EA, the average person ought to focus on easy-to-evaluate philanthropy such as global health. A much smaller fraction of the population with unusual skills ought to focus on existential risks, much as a small fraction of the population ought to focus on founding companies like Moderna and Tesla.
The ESG Alignment Problem
Can you give any examples of AI safety organizations that became less able to get funding due to lack of results?
Worrying about the percent of spending misses the main problems, e.g. donors who notice the increasing grift become less willing to trust the claims of new organizations, thereby missing some of the best opportunities.
I have some relevant knowledge. I was involved in a relevant startup 20 years ago, but haven’t paid much attention to this area recently.
My guess is that Drexlerian nanotech could probably be achieved in less than 10 years, but would need on the order of a billion dollars spent on an organization that’s at least as competent as the Apollo program. As long as research is being done by a few labs that have just a couple of researchers, progress will likely continue to be too slow to need much attention.
It’s unclear what would trigger that kind of spending and that kind of collection of experts.
Profit motives aren’t doing much here, due to a combination of the long time to profitability and a low probability that whoever produces the first usable assembler will also produce one that’s good enough for a large market share. I expect that the first usable assembler will be fairly hard to use, and that anyone who can get a copy will use it to produce better versions. That means any company that sells assemblers will have many customers who experiment with ways to compete. It seems hard for such a company to keep a durable advantage.
Maybe some of the new crypto or Tesla billionaires will be willing to put up with those risks, or maybe they’ll be deterred by the risks of nanotech causing a catastrophe.
Could a new cold war cause militaries to accelerate development? This seems like a medium-sized reason for concern.
What kind of nanotech safety efforts are needed?
I’m guessing the main need is for better think-tanks to advise politicians on military and political issues. That requires rather different skills than I or most EAs have.
There may be some need for technical knowledge on how to enforce arms control treaties.
There’s some need for more research into grey goo risks. I don’t think much has happened there since the ecophagy paper. Here’s some old discussion about that paper: Hal Finney, Eliezer, me, Hal Finney
“Acting without information on the relative effectiveness of the vaccine candidates was not a feasible strategy for mitigating the pandemic.”
I’m pretty sure that with a sufficiently bad virus, it’s safer to vaccinate before effectiveness is known. We ought to plan ahead for how to make such a decision.
“This was the fastest vaccine rollout ever”
Huh? 40 million doses of the 1957 flu vaccine were delivered within about 6 months of getting a virus sample to the US. Does that not count due to its similarity to existing vaccines?
Here are some of my reasons for disliking high inflation, which I think are similar to the reasons of most economists:
Inflation makes long-term agreements harder, since they become less useful unless indexed for inflation.
Inflation imposes costs on holding wealth in safe, liquid forms such as bank accounts or dollar bills. That leads people to hold more wealth in inflation-proof forms such as real estate, and less in bank accounts, reducing their ability to handle emergencies.
Inflation creates a wide variety of transaction costs: stores need to change their price displays more often, consumers need to observe prices more frequently, people use ATMs more frequently, etc.
Inflation transfers wealth from people who stay in one job for a long time, to people who frequently switch jobs.
When inflation is close to zero, these costs are offset by the effects of inflation on unemployment. Those employment effects are only important when wage increases are near zero, whereas the costs of inflation increase in proportion to the inflation rate.
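To make the proportionality point concrete, here’s a minimal illustrative sketch (my own numbers, purely hypothetical, not from the original comment) of how the cost of holding cash grows with the inflation rate:

```python
# Illustrative arithmetic only: purchasing power lost by holding cash for one
# year at various inflation rates. The loss grows roughly in proportion to the
# rate, unlike the employment benefits, which matter mainly near zero.
holdings = 10_000  # dollars held in cash or a zero-interest account

for rate in (0.00, 0.02, 0.05, 0.10):
    real_value = holdings / (1 + rate)  # what that cash buys a year later
    loss = holdings - real_value
    print(f"{rate:5.0%} inflation: about ${loss:,.0f} of purchasing power lost")
```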
I don’t see high-value ways to donate money for this. The history of cryonics suggests that it’s pretty hard to get more people to sign up. Cryonics seems to grow mainly from peer pressure, not research or marketing.
I expect speed limits to hinder the adoption of robocars, without improving any robocar-related safety.
There’s a simple way to make robocars err in the direction of excessive caution: hold the software company responsible for any crash its cars are involved in, unless it can prove someone else was unusually reckless. I expect some rule resembling that will be used.
Having speed limits on top of that will cause problems: robocars will have to drive slower than humans do in practice (annoying both passengers and other drivers), even when it would sometimes be safe for them to drive faster than humans. I’m unsure how important this effect will be.
Ideally, robocars will be programmed to have more complex rules about maximum speed than current laws are designed to handle.
How much of this will become irrelevant when robocars replace human drivers? I suspect the most important impact of safety rules will be how they affect the timing of that transition. Additional rules might slow that down a bit.
CFTC regulations have been at least as much of an obstacle as gambling laws. It’s not obvious whether the CFTC would allow this strategy.
You’re mostly right. But I have some important caveats.
The Fed acted for several decades as if it were subject to political pressure to reduce inflation. Economists mostly agree that the optimal inflation rate is around 2%. Yet from 2008 to about 2019 the Fed acted as if that were an upper bound, not a target.
But that doesn’t mean that we always need more political pressure for inflation. In the 1960s and 1970s, there was a fair amount of political pressure to increase monetary stimulus by whatever it took to reduce unemployment. That worked well when inflation was creeping up around 2 or 3%, but as it got higher it reduced economic stability without doing much for unemployment. So I don’t want EAs to support unconditional increases in inflation. To the extent that we can do something valuable, it should be to focus more attention on achieving a goal such as 2% inflation or 4% NGDP growth.
I don’t see signs that the pressure to keep inflation below 2% came from the rich. Rich people and companies mostly know how to do well in an inflationary environment. The pressure seems to be coming from fairly average voters who are focused on the prices of gas and meat, and from people who live on fixed pensions.
Economic theory doesn’t lend much support to the idea that it’s risky to have unusually large increases in the money supply. Most of the concern seems to come from people who assume the velocity of money is pretty stable. That assumption has often worked okay, but was pretty far off in 2008 and 2020.
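For readers who want the identity behind that assumption spelled out (my gloss, not part of the original comment), the equation of exchange is the usual starting point:

```latex
% Equation of exchange: M = money supply, V = velocity of money,
% P = price level, Y = real output (so PY is nominal GDP).
M \cdot V = P \cdot Y
```

If V is treated as roughly constant, a large increase in M has to show up in P or Y, which is where the inflation worry comes from; when V fell sharply, as in 2008 and 2020, that inference broke down.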
It’s not clear why there would be much risk, as long as the Fed adjusts the money supply to maintain an inflation or NGDP target. You’re correct that the inflation of 2021 provides some reason for concern about whether the Fed will do that. My impression is that the main problem was that the Fed committed in 2020 to a particular path of interest rates over the next few years, when its commitments ought to be focused on a target such as inflation or NGDP. This is an area where economists still have some important disagreements.
It’s pretty clear that both unusually high and unusually low inflation cause important damage. Yet too many people worry about only one of these risks.
For more on this subject, read Sumner’s book The Money Illusion (which I reviewed here).
It’s risky to connect AI safety to one side of an ideological conflict.