Personal AI Planning
LLMs are getting much more capable, and progress is rapid. I use them in my daily work, and there are many tasks where they’re usefully some combination of faster and more capable than I am. I don’t see signs of these capability increases stopping or slowing down, and if they do continue I expect the impact on society to start accelerating as they exceed what an increasing fraction of humans can do. I think we could see serious changes in the next 2-5 years.
In my professional life, working on pathogen detection, I take this pretty seriously. Advances in AI make it easier for adversaries to design and create pathogens, and so it’s important to get a comprehensive detection system in place quickly. Similarly, more powerful AIs are likely to speed up our work in some areas (computational detection) more than others (partnerships) and increase the value of historical data, and I think about this in my planning at work.
In other parts of my life, though, I’ve basically been ignoring that I think this is likely coming. In deciding to get more solar panels and not get a heat pump, I looked at historical returns and utility prices. I book dance gigs a year or more out. I save for retirement. I’m raising my kids in what is essentially preparation for the world of the recent past.
From one direction this doesn’t make any sense: why wouldn’t I plan for the future I see coming? But from another it’s more reasonable: most scenarios where AI becomes extremely capable look either very good or very bad. Outside of my work, I think my choices don’t have much impact here: if we all become rich, or dead, my having saved, spent, invested, or parented more presciently won’t do much. Instead, in my personal life my decisions have the largest effects in worlds where AI ends up being not that big a deal, perhaps only as transformative as the internet has been.
Still, there are probably areas in our personal lives where it’s worth doing something differently? For example:
Think hard about career choice: if our kids were a bit older I’d want to be able to give good advice here. How is AI likely to impact the fields they’re most interested in? How quickly might this go? What regulatory barriers are there? How might the portions they especially enjoy change as a fraction of the overall work?
Maybe either hold off on having kids or have them earlier than otherwise. If we were trying to decide whether to have (another) kid I’d want to think about how much of wanting to have a kid was due to very long-term effects (seeing them grow into adulthood, increasing the chance of grandchildren, pride in their accomplishments), how I’d feel if children conceived a few years from now had some advantages (embryo selection) or a lot (genome editing), how financial constraints might change, what if I never got to be a parent, etc.
Postponing medical treatment that trades short-term discomfort for long-term improvement: I’m a bit more willing to tolerate and work around the issues with my wrists and other joints than I would be in a world where I thought medicine was likely to stay on its recent trajectory.
Investing money in ways that anticipate this change: I’m generally a pretty strong efficient-markets proponent, but I think it’s likely that markets are under-responding here outside of the most direct ways (NVDA) to invest in the boom. But I haven’t actually done anything here: figuring out which companies I expect to be winners and losers in ways that are not yet priced in is difficult.
Avoiding investing money in ways that lock it up even if the ROI is good: I think it’s plausible that our installing solar was a mistake and keeping the money invested to retain option value would have been better. I might prefer renting to owning if we didn’t already own.
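The option-value tradeoff in that last point can be made concrete with a toy calculation: compare the compounded value of a lump sum left invested against the stream of utility savings from a solar install. All the numbers below are hypothetical placeholders, not our actual costs or returns:

```python
# Toy comparison: lock money into solar vs. keep it invested.
# Every figure here is a made-up placeholder for illustration.

def future_value(principal, annual_return, years):
    """Value of an invested lump sum after compounding annually."""
    return principal * (1 + annual_return) ** years

def solar_value(annual_savings, annual_return, years):
    """Value of installing solar: each year's utility savings is
    reinvested at the market rate for the remaining years."""
    total = 0.0
    for year in range(years):
        # Savings arrive at the end of year (year + 1), then compound.
        total += annual_savings * (1 + annual_return) ** (years - year - 1)
    return total

COST = 15_000    # hypothetical install cost
SAVINGS = 1_200  # hypothetical yearly utility savings
RETURN = 0.05    # hypothetical real market return

for horizon in (5, 10, 25):
    invested = future_value(COST, RETURN, horizon)
    solar = solar_value(SAVINGS, RETURN, horizon)
    print(f"{horizon:2d}y: invested ${invested:,.0f} vs solar ${solar:,.0f}")
```

Under these placeholder numbers solar comes out ahead over a 25-year horizon but well behind over 5 years, which is the point: locking money up only pays if the world stays on its current trajectory long enough for the savings to compound.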
What are other places where people should be weighing the potential impact of near-term transformative AI heavily in their decisions today? Are there places where most of us should be doing the same different thing?