> Then the EA movement formed, and while it was originally focused on causes like global poverty, over time it did a bunch of investigative work which led many EAs to become convinced that AI safety matters, and to start working on it directly or indirectly.
Is this a statement that you’re endorsing, or is it part of what you’re questioning? Are you aware of any surveys or any other evidence supporting this? (I’d accept “most people in AI safety that I know started working in it because EA investigative work convinced them that AI safety matters” or something of that nature.)
> b) how much should I update when I learn that a given belief is a consensus in EA?
Why are you trying to answer this, instead of “How should I update, given the results of all available investigations into AI safety as a cause area?” In other words, what is the point of dividing such investigations into “EA” and “not EA”, if in the end you just want to update on all of them to arrive at a posterior? Oh, is it because if a non-EA concludes that AI safety is not a worthwhile cause, it might just be because they don’t care much about the far future, so EA investigations are more relevant? But if so, why only “partially count” Nick?
> Here EAs who started off not being inclined towards transhumanism or rationalism at all count the most, and Nick counts very little.
For this question, then, it seems that Paul Christiano also needs to be discounted (and possibly others as well, but I’m not as familiar with them).
> Are you aware of any surveys or any other evidence supporting this? (I’d accept “most people in AI safety that I know started working in it because EA investigative work convinced them that AI safety matters” or something of that nature.)
I’m endorsing this, and I’m confused about which part you’re skeptical about. Is it the “many EAs” bit? Obviously the word “many” is pretty fuzzy, and I don’t intend it to be a strong claim. Mentally, the numbers I’m thinking of are something like >50 people, or >25% of committed (or “core”, whatever that means) EAs. I don’t have a survey to back that up, though. Oh, I guess I’m also including people currently studying ML with the intention of doing safety work; I’ll edit to add that.
> Why are you trying to answer this, instead of “How should I update, given the results of all available investigations into AI safety as a cause area?”
There are other questions that I would like answers to, not related to AI safety, and if I trusted EA consensus, then that would make the process much easier.
> For this question, then, it seems that Paul Christiano also needs to be discounted (and possibly others as well, but I’m not as familiar with them).

Indeed, I agree.
> I’m endorsing this, and I’m confused about which part you’re skeptical about.
I think I interpreted your statement as saying something like “most people in AI safety are EAs”, because you started with “One very brief way you could describe the development of AI safety”. That made me think that maybe you consider this to be the main story of AI safety so far, or that you thought other people considered it to be the main story and you wanted to push against that perception. Sorry for reading too much / the wrong thing into it.
> There are other questions that I would like answers to, not related to AI safety, and if I trusted EA consensus, then that would make the process much easier.
Ok, I see. But there may not be that much correlation between the trustworthiness of EA consensus across different topics. It could easily be the case that EA has done a lot of good investigations on AI safety but very few or poor-quality investigations on other topics. It seems like it wouldn’t be that hard to just look at the actual investigations for each topic, rather than rely on some sense of whether EA consensus is overall trustworthy.