I’m a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.
I’ve been buying Alexandre’s eggs. Should I switch to the Berkeley Bowl brand pasture-raised eggs? Do you have any other recommendations for eggs?
Split and Commit seems like the standard post on this topic.
I want to emphasize that this just sets a lower bound on the importance.
E.g. there’s a theory that fungal infections are the primary cause of cancer.
How much of chronic fatigue is due to undiagnosed fungal infections? Nobody knows. I know someone with chronic fatigue who can’t tell whether it’s due in part to a fungal infection. He’s got elevated mycotoxins in his urine, but that might be due to past exposure to a moldy environment. He’s trying antifungals, but so far the side effects have prevented him from taking more than a few doses of the two that he has tried.
It feels like we need something more novel than slightly better versions of existing approaches to fungal infections. Maybe something as radical as nanomedicine, but that’s not very tractable yet.
The typical time for vaccine development was decades, and the fastest ever was 10 years.
Huh? It was about 6 months for the 1957 pandemic.
We shouldn’t be focused too heavily on what is politically feasible this year. A fair amount of our attention should be on what to prepare in order to handle a scenario in which there’s more of an expert consensus a couple of years from now.
Nanotech progress has been a good deal slower than was expected by people who were scared of it.
I participated in XPT, and have a post on LessWrong about it.
I have alexithymia.
Greater awareness seems desirable. But I doubt it “severely affects” 1 in 10 people. My impression is that when it’s correlated with severe problems, the problems are mostly caused by something like trauma, and alexithymia is more a symptom than a cause of the severe problems.
It’s not obvious that unions or workers will care as much about safety as management. See this post for some historical evidence.
Six months sounds like a guess as to how long the leading companies might be willing to comply.
The timing of the letter could be a function of when they were able to get a few big names to sign.
I don’t think they got enough big names to have much effect. I hope to see a better version of this letter before too long.
Something important seems missing from this approach.
I see many hints that much of this loneliness results from trade-offs made by modern Western culture, which neglects (or represses) tightly-knit local community ties in order to achieve other valuable goals.
My sources for these hints are these books:
One point from WEIRDest People is summarized here:
Neolocal residence occurs when a newly married couple establishes their home independent of both sets of relatives. While only about 5% of the world’s societies follow this pattern, it is popular and common in urban North America today largely because it suits the cultural emphasis on independence.
Can Western culture give lower priority to independence while retaining most of the benefits of WEIRD culture?
Should we expect to do much about loneliness without something along those lines?
AI seems likely to have some impact on loneliness. Can we predict and speed up the good impacts?
Most Westerners underestimate the importance of avoiding loneliness. But I'm unsure what we should do about that.
I doubt most claims about sodium causing health problems. High sodium consumption seems quite correlated with dietary choices that have other problems, which makes studying this hard.
See Robin Hanson’s comments.
I expect most experts are scared of the political difficulties. Also, many people have been slow to update on the declining costs of solar. I think there’s still significant aversion to big energy-intensive projects. Still, it does seem quite possible that experts are rejecting it for good reasons, and it’s just hard to find descriptions of their analysis.
I agree very much with your guess that SBF’s main mistake was pride.
I still have some unpleasant memories from the 1984 tech stock bubble, of being reluctant to admit that my successes during the bull market didn’t mean that I knew how to handle all market conditions.
I still feel some urges to tell the market that it’s wrong, and to correct the market by pushing up prices of fallen stocks to where I think they ought to be. Those urges lead to destructive delusions. If my successes had gotten the kind of publicity that SBF got, I expect that I would have made mistakes that left me broke.
I haven’t expected EAs to have any unusual skill at spotting risks.
EAs have been unusual in distinguishing risks based on their magnitude. The risks from FTX didn't look much like the risk of human extinction.
I agree that there’s a lot of hindsight bias here, but I don’t think that tweet tells us much.
My question for Dony is: what questions could we have asked FTX that would have helped? I’m pretty sure I wouldn’t have detected any problems by grilling FTX. Maybe I’d have gotten some suspicions by grilling people who’d previously worked with SBF, but I can’t think of what would have prompted me to do that.
Nitpick: I suspect EAs lean more toward Objective Bayesianism than Subjective Bayesianism. I’m unclear whether it’s valuable to distinguish between them.
It’s risky to connect AI safety to one side of an ideological conflict.
Convincing a venue to implement it well (or rewarding one that has already done that) will have benefits that last more than three days.
There are many ways to slow AI development, but I’m concerned that it’s misleading to label any of them as pauses. I doubt that the best policies will be able to delay superhuman AI by more than a couple of years.
A strictly enforced compute threshold seems like it would slow AI development by something like 2x or 4x. AI capability progress would continue via distributed training, and by increasing implementation efficiency.
Slowing AI development is likely good if the rules can be enforced well enough. My biggest concern is that laws will be carelessly written, with the result that the most responsible AI labs obey their spirit, while the least responsible labs find loopholes to exploit.
That means proposals should focus carefully on trying to imagine ways that AI labs could evade compliance with the regulations.