Sorry for the inefficient description, which may sound confusing. The idea is that it lets the user deliver commands to the computer faster, since she doesn’t need to use her hands. Moreover, it also aims to make human interaction with the computer more intuitive. :)
I know there are:
eye-movement cursors
tongue controllers
reflective forehead beads that a camera tracks to move the cursor on-screen
foot pedals for the mouse buttons
voice control of the keyboard and mouse
body tracking with a camera
head control via head movements and face gestures
and apparently some EEG controllers suitable for a project like the one you describe
I have thought about the different options, and some of my concerns are:
having to port around peripherals when working in different spots
being unable to set up a peripheral in a particular spot
being locked into using a set-up at a specific location
being forced to make smaller body movements when I prefer larger
accidentally triggering the device when I don’t intend to use it
fatigue of whatever’s being used (voice, tongue, neck)
inefficient use for specific purposes (for example, drawing)
If you streamline the controls required for a specific application, for example web browsing, then any of the peripheral options becomes more viable.
People often use the mouse when they would be better off with keyboard shortcuts, an add-in, or even a different program.
With the right software or configuration, any of these solutions becomes more useful and attractive.
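To make the streamlining idea concrete, here is a minimal sketch (all command names and actions are made up for illustration, not from any real tool): a small lookup table mapping a handful of recognized phrases to browser-level actions. Keeping the vocabulary tiny reduces fatigue and accidental triggers, whichever peripheral (voice, head switch, pedal) produces the command string.

```python
# Hypothetical streamlined command set for hands-free web browsing.
# Any input device that can emit one of these short strings can drive
# the browser; unrecognized input is silently dropped.

ACTIONS = {
    "back":      lambda: print("browser: go back"),
    "forward":   lambda: print("browser: go forward"),
    "next link": lambda: print("browser: focus next link"),
    "open":      lambda: print("browser: activate focused link"),
    "scroll":    lambda: print("browser: scroll down one page"),
}

def dispatch(command: str) -> bool:
    """Run the action for a recognized command; ignore everything else."""
    action = ACTIONS.get(command.strip().lower())
    if action is None:
        return False  # unrecognized input is simply dropped
    action()
    return True

dispatch("Back")        # recognized, normalized, executed
dispatch("flip table")  # not in the vocabulary, ignored
```

The point of the sketch is that the smaller and more forgiving the command vocabulary, the less it matters which peripheral produces it.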
Thank you. It’s very helpful to read this thread.