Just a bundle of subroutines sans free will, aka flawed monkey #~100,000,000,000.
Will Aldred
8: Intergalactic spreading of intelligent life and sharpening the Fermi paradox (Armstrong & Sandberg, 2012)
2: The aestivation hypothesis for resolving Fermi’s paradox (Sandberg, Armstrong & Cirkovic, 2017)
9: The Future of Human Evolution (Bostrom, 2004)
3: The Edges of Our Universe (Ord, 2021)
8: Meditations on Moloch (Alexander, 2014)
10: What is a Singleton? (Bostrom, 2005)
Nuclear risk research ideas: Summary & introduction
See also ‘Research Debt’ (Olah & Carter, 2017)
(especially my Berkeley peeps—Go Bears!)
(context: I did my undergrad at Berkeley, and remain subscribed to the tribalism of college sports teams; Lightcone is also kinda neat)
In case my tone above is unclear, I love y'all across the pond really <3
Where are the rest of the upvotes for this post?
My guess is that the wordplay here flew over the heads of our US friends, for whom the franchise is not Where’s Wally, but Where’s Waldo.
And while we’re on this topic, here’s a (non-exhaustive) list of some other silly-sounding things Americans say.
Miscellaneous & Meta X-Risk Overview: CERI Summer Research Fellowship
Many thanks for this comment, especially the part below, which I embarrassingly overlooked (I did know about this database and the nuclear view—I literally showed it to someone the other day #facepalm) and which I've now incorporated into the main text of my post.
Regarding “Who else is working on the problem?”, people might also find useful the “nuclear risk” “view” of my Database of orgs relevant to longtermist/x-risk work
“To give examples of our target audience: [...] 3. Aspiring generalist researchers at any stage in their career.”
I agree that writing up forecasting reasoning is one way for aspiring generalist researchers to build generalist-type research skills, but I also want to highlight some other options:
Summarize/Collect previous posts/articles/papers (I think this is probably the best skill-building activity for an aspiring generalist researcher)
Read, then write book reviews (see posts tagged under ‘books,’ and also suggestions from Michael Aird and from Buck Shlegeris; also related is Holden Karnofsky’s ‘Reading books vs. engaging with them’)
Build inside views (see Holden Karnofsky’s ‘Learning by writing’ and Neel Nanda’s ‘How I formed my own views about AI safety’)
From Linch Zhang’s shortform: “Deep dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially writings by either a) other respected EAs or b) other stuff that we otherwise think of as especially important.”
Apply for jobs/internships/research training programs (and view the process of writing your application responses as skill-building)
Possibly other things suggested in Aird’s ‘Notes on EA-related research, writing, testing fit, learning, and the Forum’
Nuclear Risk Overview: CERI Summer Research Fellowship
If interested, here’s some further evidence that it’s just really hard to map: ‘Learning from connectomics on the fly’ (ScienceDirect)
Reading the “It’s rarely worth your time to give detailed feedback” section helped me update—I appreciate how clearly written it is.
I share the feeling of sadness that giving feedback is probably not worth it, but in the spirit of not flinching away from the truth, I think this is an important thing for grantmakers/aspiring grantmakers (as well as grant applicants) to understand.