Thanks, Vasco! I like stampi.ai, but it doesn’t really get at what I want. I want to see key AI safety ideas and claims being stress tested (e.g., via discussion) by very smart people with different intuitions. Ideally these exchanges would be curated and summarised and well promoted so that people can easily find and read them (e.g., in an FAQ).
Stampi is good, but it seems to serve a different purpose: introducing the ideas and opportunities within AIS. IMHO it mainly answers relatively simple questions, and in those answers it generally reports what different people (e.g., Eliezer) claim.
Isolated, unsynthesised claims by smart people aren’t very compelling or helpful for me right now. I am already confident that some people smarter than me are really worried about AI for what appear to be good reasons. But I am also confident that other people who are smarter than I am are less worried than Eliezer, or not worried at all, for what also appear to be good reasons.
I don’t have easy ways to resolve these uncertainties (short of doing some sort of literature review), which makes it hard for me to determine how worried I should be about AI and what I should therefore do.
This is why I want to see Eliezer engage with many more smart and technical people who disagree with his views, to see what criticisms and cruxes emerge and what comes of them. I feel that this would reduce my uncertainty, or at least take me closer to understanding the intuitions and assumptions that underpin the differences in expert predictions.
I’ll post some of my questions about artificial general intelligence safety in the future. Thanks for offering to help.