Great post for thinking critically about MI.
MI research is still in its early stages, and many questions about the inner workings of models remain unanswered. As a result, I suspect that, to keep the research tractable for now, interactions with humans and the broader environment are being set aside.
Ultimately, however, these factors are all very important. The adjacent field of explainable AI (XAI) has increasingly emphasized the significance of human-AI interaction and has proposed human-centered approaches. I'm waiting to see how MI expands its scope to cover this issue. I also wonder how existing transparency research will adapt to a rapidly evolving target if neural networks (or, more specifically, the transformers that have been the focus of MI research) turn out not to be the final form of AGI.
Have read about the basic idea of 'troll for good'.
Feel skeptical about how it will work in reality: while giant tech companies are effectively invincible, trolls might only kill upcoming startups and innovation.