Doesn’t the $67 billion figure cited for capabilities include a substantial amount of work on reliability, security, censorship, monitoring, data protection, interpretability, oversight, clean dataset development, and alignment method refinement? At least anecdotally, the AI work I see at my non-alignment-related job mostly falls into these categories.
You are completely right. My main point is that the field of AI safety is under-utilizing commercial markets, whereas commercial AI already prioritizes reliability and security to a healthy degree.