@Ryan Greenblatt and I are going to record another podcast together (see the previous one here). We’d love to hear topics that you’d like us to discuss. (The questions people proposed last time are here, for reference.) We’re most likely to discuss issues related to AI, but a broad set of subjects other than “preventing AI takeover” are on topic. E.g. last time we talked about the cost to the far future of humans making bad decisions about what to do with AI, and the risk of galactic-scale wild animal suffering.
I’d be interested in seeing you guys elaborate on the comments you made here in response to Rob’s question about whether some control methods, such as AI boxing, may be “a bit of a dick move”.
Much of the stuff that catches your interest among the 80,000 Hours website’s problem profiles is something I’d like to watch you do a podcast on; it’s costly for me if I end up getting it from people whose work I’m less familiar with. Also, neurology, cogpsych/evopsych/epistasis (e.g. this 80k podcast with Randy Nesse, this 80k podcast with Athena Aktipis), and especially more quantitative modelling approaches to culture change/trends (e.g. the 80k podcast with Cass Sunstein, the 80k podcast with Tom Moynihan, the 80k podcasts with David Duvenaud and Karnofsky). For a lot of the intermediate-yet-upstream-type stuff about the AI situation, even deepfakes etc., it’s hard to hear takes from people who haven’t really established that they do serious thinking.
In the last episode you talked about how you were considering shutting down Redwood and joining labs. Why were you initially considering it, and why did you eventually decide against it?
I would love to hear any updated takes on this post from Ryan.
What have you learnt about management and running organisations from running Redwood?