Hi!
I’m currently (Aug 2023) a Software Developer at Giving What We Can, helping make giving significantly and effectively a social norm.
I’m also a forum mod, which, shamelessly stealing from Edo, “mostly means that I care about this forum and about you! So let me know if there’s anything I can do to help.”
Please have a very low bar for reaching out!
I won the 2022 donor lottery; happy to chat about that as well.
You know much more than I do, but I’m surprised by this take. My sense is that Anthropic is giving a lot back:
- My understanding is that all early investors in Anthropic made a ton of money; it’s plausible that Moskovitz made as much by investing in Anthropic as by founding Asana. (Of course, this is all paper money for now, but I think they could sell their stakes for billions.)
- As mentioned in this post, the co-founders also pledged to donate 80% of their equity, which seems to imply they’ll give back much more funding than they received. (Of course, in EV it could still go to zero.)
- I don’t see why hiring people counts more as “taking” than “giving”, especially if the hires get to work on things they believe are better for the world than anything else they could be doing.
- My sense is that, even ignoring the funding mentioned above, they are giving a ton back in terms of research on alignment, interpretability, model welfare, and AI safety in general.
To be clear, I don’t know whether Anthropic is net-positive for the world, but it seems to me that its trades with EA institutions have been largely mutually beneficial. You could argue that Anthropic could “give back” even more to EA, but I’m skeptical that this would be the most cost-effective use of its resources (including time and brand value).