The impact for me was pretty terrible. There were two main components to the devastating part of my timeline changes, which probably had a similar amount of effect on me:
-my median estimate year moved up significantly, with the time remaining cut down by more than half
-the probability mass I put on AGI arriving significantly sooner than even that bulked up
The latter gives me a nearish-term estimated prognosis of death somewhere between being diagnosed with prostate cancer and colorectal cancer: something probably survivable, but hardly ignorable. Also, everyone else in the world has it. Also, it's hard to get almost anyone else to take you seriously if you tell them the diagnosis.
The former change puts my best-guess arrival date for very advanced AI well within my life expectancy, indeed while I'm middle-aged. I've seen people argue that it is actually in one's self-interest to hope that AGI arrives during one's lifetime, but as I've written a bit about before, this doesn't really comfort me at all. The overwhelming driver of my reaction is more that, if things go poorly and everything and everyone I ever loved is entirely erased, I will be there to see it (well, see it in a metaphorical sense at least).
There were a few months, between around April and July of this year, when this caused me some serious mental health problems; in particular, it worsened my insomnia and some other things I was already dealing with. At this point I am doing a bit better, and I can sort of put the idea back in the abstract box AI risk used to occupy for me, where it feels like it can't hurt me. Sometimes I still get flashes of dread, but mostly I think I'm past the worst of it for now.
In terms of donation plans, I donated to AI-specific work for the first time this year (MIRI and Epoch; the process of deciding which places to pick was long, frustrating, and convoluted, but probably the biggest filter was ruling out anyone doing significant capabilities work). More broadly, I became much more interested than I was before in governance work, and in work to slow down AI development generally.
I'm not planning to change career paths, mostly because I don't think there is anything very useful I can do, but if something related to AI governance comes up that I think I would be a fit for, I'm more open to it than I was before.