As someone who works on AGI safety and cares a lot about it, my main conclusion from reading this is: it would be ideal for you to work on something other than AGI safety! There are plenty of other things to work on that are important, both within and outside EA, and a satisfactory resolution to “Is AI risk real?” doesn’t seem essential for usefully pursuing other options.
Nor do I think this needs to block you from comfortably acting as an EA organizer or role model: it seems fine to say “I’ve thought about X a fair amount but haven’t reached a satisfactory conclusion”, and give people the option of looking into it themselves or not. If you like, you could even say “a senior AGI safety person has given me permission to not have a view and not feel embarrassed about it.”
Thanks for giving me permission! I guess I can use this if I ever need the opinion of “the EA community” ;)
However, I don’t think I’m ready to give up on trying to figure out my stance on AI risk just yet, since I still estimate it is my best shot at forming a more detailed understanding of any x-risk, and understanding x-risks better would be useful for developing better opinions on other cause prioritization issues.
That is also very reasonable! I think the important part is to not feel too bad about the possibility of never having a view (there is a vast sea of things I don’t have a view on), not least because I think accepting that possibility actually increases the chance that further effort gets you to the right view.
(I would offer to chat directly, as I’m very much part of the subset of safety close to more normal ML, but am sadly over capacity at the moment.)
“it would be ideal for you to work on something other than AGI safety!”
I disagree. Here is my reasoning:
Many people who have extensive ML knowledge are not working on safety, either because they are not convinced of its importance or because they haven’t fully wrestled with the issue.
In this post, Ada-Maaria articulated the path to her current beliefs and how current AI safety communication has affected her.
She has done a much more rigorous job of evaluating the persuasiveness of these arguments than anyone else I’ve read.
If she continues down this path, she could discover either the unstated assumptions the AI safety community has failed to communicate or actual flaws in the AI safety argument.
This will either make it easier for AI Safety folks to express their opinions or uncover assumptions that need to be verified.
Either would be valuable!
On the one hand, I agree that this is very likely the most prudent action for OP to take from her perspective, and probably the best action for the world as well. On the other, I feel a bit sad to miss some element of… combativeness(?)… in my perhaps overly-nostalgic memories of the earlier EA culture, where people used to be much more aggressive about disagreements with cause and intervention prioritizations.
It feels to me that people are less aggressive about disagreeing with established consensus or strong viewpoints that other EAs have, and are somewhat more “live and let live” about both uses of money and human capital. I sort of agree with this being the natural evolution of our movement’s emphases (longtermism is harder to crisply argue about than global health, money is more liquid/fungible than human capital). But I think I feel some sadness re: the decrease in general combativeness and willingness to viciously argue about causes.
This is related to an earlier post about the EA community becoming a “big tent,” which at the time I didn’t agree with but am now warming up to.
I think the key here is that they’ve already spent quite a lot of time investigating the question. I would have a different reaction without that. And it seems like you agree my proposal is best both for the OP and the world, so perhaps the real sadness is about the empirical difficulty of getting people to consensus?
At a minimum I would claim that there should exist some level of effort past which you should not be sad about not arguing, and then the remaining question is where that threshold is.
(I’m happy to die on the hill that that threshold exists, if you want a vicious argument. :))
edit: I don’t have a sense of humor
“a senior AGI safety person has given me permission to not have a view and not feel embarrassed about it.”
For lack of a better word, this sounds cultish to me: why would one need permission “from someone senior” to think or feel anything? If someone said this to me, it would be a red flag about the group/community.
I think your first suggestion (“I’ve thought about X a fair amount but haven’t reached a satisfactory conclusion”) sounds much more reasonable, if OP feels like that reflects their opinion. But I also think that something like “I don’t personally feel convinced by the AGI risk arguments, but many others disagree; I think you should read up on it more and reach your own conclusions” is much more reasonable than your second suggestion. I think we should welcome different opinions: as long as someone agrees with the main EA principles, they are an EA; it should not be about agreeing completely with causes A, B, and C.
Sorry if I am over-interpreting your suggestion as implying much more than you meant; I am just giving my personal reaction.
Disclaimer: long-time lurker, first-time poster.
Yep, that’s very fair. What I was trying to say was that if, in response to the first suggestion, someone said “Why aren’t you deferring to others?” you could use that as a joke backup, but agreed that it reads badly.
Makes a lot of sense :D I just didn’t get the joke, which in hindsight I probably should have… :P