Thanks Lizka!
Some misc personal reflections:
Working at Forethought has been my favourite job ever, by a decent margin
I spent a couple of years doing AI governance research independently and collaborating with others in an ad hoc way before joining Forethought. I think the quality of my work has been way higher since joining (because I’ve been working on more important questions than I was able to make headway on solo), and it’s also been a huge win in terms of productivity and attention (the costs of tracking my time, hustling for new projects, managing competing projects, etc. were pretty high for me and made it really hard to do proper thinking)
One minor addition from me on why (or why not) to work at Forethought: I think the people working at Forethought care pretty seriously about things going well, and are really trying to make a contribution.
I think this is both a really special strength, and something that has pitfalls:
It’s a privilege to work with people who care in this way, and it cuts a lot of the crap that you’d get in organisations more oriented towards short-term outcomes, status, etc.
On the other hand, I sometimes worry about Forethought leaning a bit too heavily on EA-style ‘do what’s most impactful’ vibes. I think this can kill curiosity, and also easily degrades into ‘trying to try’, or people trying to meet their own psychological need to make an impact instead of really staring the reality we seem to be living in in the face.
Other people at Forethought think that we’re not leaning into this enough though: most work on AI futures stuff is low quality and won’t matter at all, and it’s very easy to fill all your time with interesting and pointless stuff. I agree on those failure modes, but disagree about where the right place on the spectrum is.
And then a few notes on the sorts of people I’d be really excited to have apply:
People who are thinking for themselves and building their own models of what’s going on. I think this is rare and sorely needed. Some particular sub-groups I want to call out:
Really smart independent thinkers who want to work on AI macrostrategy stuff but haven’t yet had a lot of surface area with the topic or done a lot of research. I think Forethought could be a great place for someone to soak up a lot of the existing thinking on these topics, en route to developing their own agenda.
Researchers with deep world models on the AI stuff, who think that Forethought is kind of wrong/a lot less good than it could be. The high-level aspiration for Forethought is something like, get the world to sensibly navigate the transition to superintelligence. We are currently 6 researchers, with fairly correlated views: of course we are totally failing to achieve this aspiration right now. But it’s a good aspiration, and to the extent that someone has views on how to better address it, I’d love for them to apply.
If I got to choose one type of researcher to hire, it would be this one.
My hope would be that for many people in this category, Forethought would be able to ‘get out of the way’: give the person free rein, not entangle them in organisational stuff where they don’t want that, and engage with them intellectually to the extent that it’s mutually productive.
I agree with Lizka that people who think Forethought sucks probably won’t want to apply/get hired/enjoy working at Forethought.
People who are working on this stuff already, but hamstrung by not having [a salary/colleagues/an institutional home/enough freedom for research at their current place of work/a manager to support them/etc]. I’d hope that Forethought could be a big win for people in this position, and allow them to unlock a bunch more of their potential.
Thanks for your post AJ, and especially this comment, which I found clarifying.
I’ve only skimmed your post, and haven’t read what Owen and I wrote in several years, but my quick take is:
We’re saying ‘within a particular longtermist frame, it’s notable that it’s still rational to allocate resources to neartermist ends, for instrumental reasons’
I think you agree with this
Since writing that essay, I’m now more worried about AI making humans instrumentally obsolete, in a way that would weaken this dynamic a lot (I’m thinking of stuff like the intelligence curse). So I don’t actually feel confident this is true any more.
I think you are saying ‘but that is not a good frame, and in fact normatively we should care about some of those things intrinsically’
I agree, at least partially. I don’t think we intended to endorse that particular longtermist frame; we just wanted to make the argument that even if you hold it, you should still care about neartermist stuff. (And actually, caring intrinsically about neartermist stuff is part of what motivated making the argument, iirc.)
I vibed with some of your writing on this, e.g. “The Tuesday-morning maintenance network isn’t preparation for a future we’re aiming toward; it is the future, continuously instantiated.”
I’m not a straight-out yes: I think Wednesday in a million years might matter much more than this Tuesday morning, and I’m pretty convinced of some aspects of longtermism. But I agree with you in putting intrinsic value on the present moment and on people’s experiences in it.
So my guess is, you have a fundamental disagreement with some version of longtermism, but less disagreement with me than you thought.