Thank you, this is super helpful! I appreciate it.
Yes, good point: if inner misalignment emerged in an ML system, then any data source used for training would be ignored by the system anyway.
Depends on whether you think alignment is a problem for the humanities or for engineering.