If you were a researcher at Zeusfyi, here’s what our chief scientist would advise:
1. The ratio of capabilities researchers to AGI safety researchers is roughly 300:1. The scalable alignment team at OpenAI has all of ~7 people.
Even a team of 7 is more than sufficient to solve scalable alignment; the problem stems from a lack of belief in yourself, your own cause, and your ability to solve it, driven by the false belief that it’s a resource issue. In general, solving unknowns takes a wider perspective plus creative IQ, which isn’t taught in any school; honestly, likely the opposite. It takes systems-level thinkers who can relate field X to field Y and create solutions to unknowns from subknowns. Most people are afraid to “step on toes” outside whatever subdivision they live in; if you want to do great research, you need to be more selfish in that way.
2. You can’t solve alignment if you can’t even define and measure the generality of intelligence; you can’t solve what you don’t understand.
3. There’s only one reason intelligence exists: if we lived in a universe whose physics could “lie” to you and make up energy or rules, then nothing would be predictable or periodic, and nothing could be generalized.
You now have the tools to solve it.