Fair points. I was thinking more in broad terms of supporting something that will most likely turn out hugely negative. I think it’s already pretty clear that Anthropic is massively negative expected value for the future of humanity. And we’ve already got the precedent of OpenAI and how that’s gone (and Anthropic seems to be heading the same way in broad terms—i.e. not caring about endangering 8 billion people’s lives with reckless AGI/ASI development).