I appreciate posts that provide concise comparative overviews of complex concepts. I have some questions that may seem basic to some, but I’d love to receive answers nonetheless.
OpenAI, DeepMind, and Meta are leading labs in AI development, both empirically (e.g., ChatGPT) and financially (in terms of resources). China is known more for replicating existing research than for originating it. Given the concerns about AI and AGI development, particularly the risk of extinction, why do these American and British labs continue their AI work without pausing? Is there external pressure from governments or from potentially hostile nations? I’m trying to understand whether there are motivations beyond simply capitalizing on AI’s current momentum — similar to some scientists during the development of the A-bomb, who pursued it for personal fame and scientific curiosity while disregarding the risks.
Additionally, although this may not relate directly to your post, have we considered that the emphasis on AI safety, while creating more jobs in that field, might actually stimulate AI growth and thereby increase extinction risk? There’s a shared sentiment in the Effective Altruism (EA) community that more people are joining out of interest in AI (safety or otherwise), since it serves as a hub for AI-related discussion and funding. These newcomers might face a dilemma: are they willing to work for the greater good even if it means pausing AI development and potentially affecting their livelihoods? How committed are they to their values when upholding them would mean fewer job opportunities and less growth in the field they’re passionate about? I apologize if this isn’t the ideal platform for these discussions, but they are rarely addressed on the forum, and I thought they might relate to the topic of talent in AI.
Edit: all I’m doing is asking genuine questions, and I’m being downvoted to hell. If you disagree with the usefulness of the questions, tick the ‘I disagree’ box (and even then, why would you care whether my questions get answered?), but downvoting me just screams ‘I refuse criticism on this topic, and such questions shouldn’t be answered.’ That is neither honest nor rational, and I’m quite sure that those who downvoted me pride themselves a great deal on being highly rational.