I’m a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.
Nanotech progress has been a good deal slower than was expected by people who were scared of it.
I participated in XPT, and have a post on LessWrong about it.
I have alexithymia.
Greater awareness seems desirable, but I doubt that alexithymia “severely affects” 1 in 10 people. My impression is that when it’s correlated with severe problems, those problems are mostly caused by something like trauma, and the alexithymia is more a symptom than a cause.
It’s not obvious that unions or workers will care as much about safety as management does. See this post for some historical evidence.
6 months sounds like a guess as to how long the leading companies might be willing to comply.
The timing of the letter could be a function of when they were able to get a few big names to sign.
I don’t think they got enough big names to have much effect. I hope to see a better version of this letter before too long.
Something important seems missing from this approach.
I see many hints that much of this loneliness results from trade-offs made by modern Western culture, which neglects (or represses) tightly-knit local community ties in order to achieve other valuable goals.
My sources for these hints are these books:
One point from WEIRDest People is summarized here:
Neolocal residence occurs when a newly married couple establishes their home independent of both sets of relatives. While only about 5% of the world’s societies follow this pattern, it is popular and common in urban North America today largely because it suits the cultural emphasis on independence.
Can Western culture give lower priority to independence while retaining most of the benefits of WEIRD culture?
Should we expect to do much about loneliness without something along those lines?
AI seems likely to have some impact on loneliness. Can we predict and speed up the good impacts?
Most Westerners underestimate the importance of avoiding loneliness. But I’m confused about how we should act on that.
I doubt most claims about sodium causing health problems. High sodium consumption seems strongly correlated with dietary choices that have other problems, which makes studying sodium’s effects hard.
See Robin Hanson’s comments.
I expect most experts are scared of the political difficulties. Also, many people have been slow to update on the declining costs of solar. I think there’s still significant aversion to big energy-intensive projects. Still, it does seem quite possible that experts are rejecting it for good reasons, and that it’s just hard to find descriptions of their analysis.
I agree very much with your guess that SBF’s main mistake was pride.
I still have some unpleasant memories from the 1984 tech stock bubble: I was reluctant to admit that my successes during the bull market didn’t mean that I knew how to handle all market conditions.
I still feel some urges to tell the market that it’s wrong, and to correct the market by pushing up prices of fallen stocks to where I think they ought to be. Those urges lead to destructive delusions. If my successes had gotten the kind of publicity that SBF got, I expect that I would have made mistakes that left me broke.
I haven’t expected EAs to have any unusual skill at spotting risks.
EAs have been unusually good at distinguishing risks by their magnitude. The risks from FTX didn’t look much like the risk of human extinction.
I agree that there’s a lot of hindsight bias here, but I don’t think that tweet tells us much.
My question for Dony is: what questions could we have asked FTX that would have helped? I’m pretty sure I wouldn’t have detected any problems by grilling FTX. Maybe I’d have gotten some suspicions by grilling people who’d previously worked with SBF, but I can’t think of what would have prompted me to do that.
Nitpick: I suspect EAs lean more toward Objective Bayesianism than Subjective Bayesianism. I’m unsure whether it’s valuable to distinguish between them here.
It’s risky to connect AI safety to one side of an ideological conflict.
Convincing a venue to implement it well (or rewarding one that has already done that) will have benefits that last more than three days.
I agree about the difficulty of developing major new technologies in secret. But you seem to be mostly overstating the problems with accelerating science. E.g.:
These passages seem to imply that the rate of scientific progress is primarily limited by the number and intelligence level of those working on scientific research. Here it sounds like you’re imagining that the AI would only speed up the job functions that get classified as “science”, whereas people are suggesting the AI would speed up a wide variety of tasks including gathering evidence, building tools, etc.
My understanding of Henrich’s model says that reducing cousin marriage is a necessary but hardly sufficient condition to replicate WEIRD affluence.
European culture likely had other features which enabled cooperation on larger-than-kin-network scales. Without those features, a society that stops cousin marriage could easily end up with cooperation only within smaller kin networks. We shouldn’t be confident that we understand what the most important features are, much less that we can cause LMICs to have them.
Successful societies ought to be risk-averse about this kind of change. If this cause area is worth pursuing, it should focus on the least successful societies. But those are also the societies that are least willing to listen to WEIRD ideas.
Also, the idea that reduced cousin marriage was due to some random church edict seems to be the most suspicious part of Henrich’s book. See The Explanation of Ideology for some claims that the nuclear family was normal in northwest Europe well before Christianity.
Resilience seems to matter for human safety mainly via food supply risks. I’m not too concerned about that, because the world is producing a good deal more food than is needed to support our current population. See my more detailed analysis here.
It’s harder to evaluate the effects on other species. I see a significant chance that technological changes will make current biodiversity efforts irrelevant. So, to the limited extent that I’m worried about wild animals, I’m focused more on ensuring that technological change develops so as to keep as many options open as possible.
Why has this depended on NIH? Why aren’t some for-profit companies eager to pursue this?
This seems to nudge people in a generally good direction.
But the emphasis on slack seems somewhat overdone.
My impression is that people who accomplish the most have typically had small to moderate amounts of slack. They made good use of their time by prioritizing their exploration of neglected questions. That might create the impression of much slack, but I don’t see slack as a good description of the cause.
One of my earliest memories of Eliezer is him writing something to the effect that he didn’t have time to be a teenager (probably on the Extropians list, but I haven’t found it).
I don’t like the way you classify your approach as an alternative to direct work. I prefer to think of it as a typical way to get into direct work.
I’ve heard a couple of people mention recently that AI safety is constrained by the shortage of mentors for PhD theses. That claim seems wrong. I hope people don’t treat a PhD as a standard path to direct work.
I also endorse Anna’s related comments here.
We shouldn’t focus too heavily on what is politically feasible this year. A fair amount of our attention should go to preparing for a scenario in which there’s more of an expert consensus a couple of years from now.