I’d like to write something about my skepticism of for-profit models of doing alignment research. I think this is a significant part of why I trust Redwood more than Anthropic or Conjecture.
(This could apply to non-alignment fields as well, but I’m less worried about the downsides of product-focused approaches to (say) animal welfare.)
That said, I would want to search for existing discussion of this before I wade into it.