Potential Test Case for AGI
Attempt to simulate an artificial general intelligence using Ouijably
Low odds it works, but I thought that if you could put enough people on a spirit board it might exhibit behaviour similar to an oracle-type AGI. This implementation (https://github.com/ably-labs/ouija) means it wouldn't take much organising to attempt. Maybe tweak it so participants predict the direction the planchette will move rather than relying on the ideomotor effect. I thought the idea would fall outside the rationalist's window of consideration as something with a spiritualist bent. If it did work, it would probably be the safest form of AGI, since something made of humans should have the best chance of being human-friendly.
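The prediction tweak could be sketched roughly like this: each participant submits a direction they predict the planchette will move, and the board aggregates the predictions into one movement vector via a circular mean. All names here (`aggregate_predictions`, `planchette_step`) are hypothetical, not part of the linked Ouijably implementation.

```python
import math

def aggregate_predictions(angles_deg):
    """Circular mean of predicted directions: average the unit vectors.

    Full agreement gives a vector of length 1; uniform disagreement
    gives a vector near length 0, so the planchette barely moves.
    (Hypothetical sketch, not the ably-labs/ouija API.)
    """
    if not angles_deg:
        return (0.0, 0.0)
    x = sum(math.cos(math.radians(a)) for a in angles_deg) / len(angles_deg)
    y = sum(math.sin(math.radians(a)) for a in angles_deg) / len(angles_deg)
    return (x, y)

def planchette_step(angles_deg, speed=10.0):
    """Scale the aggregate direction into a pixel offset for one tick."""
    x, y = aggregate_predictions(angles_deg)
    return (x * speed, y * speed)
```

With this scheme, unanimous predictions move the planchette at full speed, while evenly opposed predictions cancel out, which is one crude way a crowd's aggregate "answer" could emerge from individual guesses.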