Edit: the comment above has been edited; the below was a reply to a previous version, so it makes less sense now. Leaving it for posterity.
You know much more than I do, but I’m surprised by this take. My sense is that Anthropic is giving a lot back:
“funding”
My understanding is that all early investors in Anthropic made a ton of money; it’s plausible that Moskovitz made as much by investing in Anthropic as by founding Asana. (Of course, this is all paper money for now, but I think they could sell it for billions.)
As mentioned in this post, the co-founders also pledged to donate 80% of their equity, which seems to imply they’ll give much more funding than they got. (Of course, in EV it could still go to zero.)
“staff”
I don’t see why hiring people is more “taking” than “giving”, especially if the hires get to work on things they believe are better for the world than any other role available to them.
“and doesn’t contribute anything back”
My sense is that (even ignoring the funding mentioned above) they are giving a ton back in terms of research on alignment, interpretability, model welfare, and general AI safety work.
To be clear, I don’t know whether Anthropic is net-positive for the world, but it seems to me that its trades with EA institutions have been largely mutually beneficial. You could argue that Anthropic could be “giving back” even more to EA, but I’m skeptical that this would be the most cost-effective use of its resources (including time and brand value).
Great points. I don’t want to imply that they contribute nothing back; I’ll think about how to reword my comment.
I do think that 1) community goods are undersupplied relative to some optimum, 2) this is partly because people aren’t aware of how useful those goods are to orgs like Anthropic, and 3) that in turn is partially downstream of messaging like what OP is critiquing.