“Though I think AI is critically important, it is not something I get a real kick out of thinking and hearing about.”
-> Personally, I find a whole lot of non-technical AI content to be highly repetitive. It seems like a lot of the same questions are being discussed again and again with fairly little progress.
For 80k, I think I’d really encourage the team to focus a lot on figuring out new subtopics that are interesting and important. I’m sure there are many great stories out there, but I think it’s very easy to get trapped into talking about the routine updates or controversies of the week, with little big-picture understanding.
My suggestion along these lines would be to try to get guests on who come with a different perspective on transformative AI or AGI than most of the 80,000 Hours Podcast’s past guests or most people in EA. Toby Ord’s episode was excellent in this respect; he’s as central to EA as it gets, yet he was dumping cold water on the scaling trends many people in EA take for granted.
Some obvious big names that might be hard to get: François Chollet, Richard Sutton, and Yann LeCun (the links go to representative podcast clips for each one of them).
A semi-big name who will probably be easier to get: Jeff Hawkins of Numenta.
A less famous person who might be a good stand-in for Richard Sutton’s perspective on AI is Edan Meyer, an academic AI researcher.
With some research and asking around, you could probably generate more ideas for guests along these lines.
I think one good way to get more clarity on the big picture and stimulate more creative thinking is to bring people into the conversation who have more diverse viewpoints. Even if you were to come at it from the perspective of being 95% certain that LLMs will scale to AGI within 10 years (which AFAIK is a big exaggeration of the 80,000 Hours team’s real views), one really useful part of having guests like these would be prompting the hosts and the audience to think about why, exactly, these guests are wrong in their LLM skepticism.
I think even in cases where you are 95% sure you’re right, talking to brilliant, eloquent experts who disagree can only serve to sharpen your thinking and put you in a better position to think about and articulate your case. Conversely, I think when you’re only talking to people who agree with you, you don’t develop an ability to make a persuasive case to people who don’t already agree. You take for granted things other people don’t take for granted, and you’re maybe not even aware of other people’s objections, qualms, and concerns. Maybe the most important part of persuasion is showing people you know what they have to say and that you have an answer to it.
A lot of the stated goals in the Google Doc come down to persuasion, so this seems in line with your goals.
Thanks for the nudge. I agree it seems crucial to try to find things that are actually different to cover—both for the sake of being interesting and more importantly to actually have an impact. I’d love to hear any particular suggestions you have about things that seem underexplored and important to you!
Thanks for all the suggestions!