CraigD Posted February 11, 2017

Non-intrusive brain-computer interfaces like the one described in this post rely on EEG – sensing brain activity via small electric voltages on the scalp. Much simpler systems like those described in this article have used EMG – sensing muscle activity via small voltages in the skin over the muscle – to control prosthetic hands and arms. I wonder if a frankly crude EMG-using system could be used to control a video game avatar to create a deeply immersive sense of virtual reality? Here’s what I have in mind:

- Stick EMG electrodes to the big muscles of your arms and legs, with a few on the little muscles of your hands. Connect the electrodes to an appropriately programmed computer
- Strap yourself down securely, as is done to restrain an agitated patient or dangerous person, in a chair or bed
- Don a VR headset, also connected to the computer

You should now be able to attempt to move your arms and legs normally. The EMG sensors will tell the computer that you are attempting to move, while the chair or bed restraint will prevent you from actually moving. The program, however, can move your in-game avatar realistically, which you’ll be able to see via your VR headset.

Of course, your proprioception system will detect that your actual limbs aren’t moving, but presented with the false visual evidence from the VR headset, what would you actually experience? Will you feel like you are playing a VR game while strapped down, or will that sensation be overridden by the sensation that you are actually moving as your avatar appears to be?
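If anyone wants to tinker with the idea, here’s a toy sketch of the very first thing the computer would have to do: decide, from a noisy raw EMG sample stream, whether a muscle is trying to move. The window size, threshold, and signal units below are made-up placeholders, not values from any real EMG system.

```python
# Toy sketch: turn a raw EMG sample stream into a "this muscle is trying
# to move" signal. Raw EMG swings positive and negative around zero, so
# we rectify (take absolute values), smooth with a moving average to get
# an amplitude envelope, and compare against a per-channel threshold.
# All numbers here are placeholder assumptions.

def emg_envelope(samples, window=50):
    """Rectified moving-average amplitude envelope of a raw EMG stream."""
    rectified = [abs(s) for s in samples]
    envelope = []
    for i in range(len(rectified)):
        start = max(0, i - window + 1)
        chunk = rectified[start:i + 1]
        envelope.append(sum(chunk) / len(chunk))
    return envelope

def movement_intent(samples, threshold=0.2, window=50):
    """True wherever the smoothed amplitude says the muscle is firing."""
    return [e > threshold for e in emg_envelope(samples, window)]

# A quiet stretch followed by a burst of muscle activity:
signal = [0.01, -0.01] * 50 + [0.9, -0.9] * 50
intent = movement_intent(signal)
# intent stays False during the quiet stretch and turns True in the burst
```

A real system would band-pass filter each channel first and run one of these per electrode, but even this crude rectify-smooth-threshold scheme conveys the idea.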
BrainJackers Posted February 12, 2017

That is an interesting concept. I think it would need to be calibrated to the person (different muscle sizes), and you would lose energy a lot faster (with the resistance, of course, you’re supposedly using the same energy). Now, you obviously wouldn’t be feeling stuff, but you might get imprints where you’re held down, which wouldn’t be pleasant. Due to it being hard to set up, both per use and for the first-use calibration (with possible later calibrations if your muscles increase), and the issue with being bound, I don’t think it’s usable for the masses. It would still be an interesting science experiment (which is what you suggested), just to see if you could. My question is "Could you turn around/look behind you?"

--@Kayaba
nullspaceM Posted February 12, 2017 (edited)

This is an interesting idea. I would think you don’t even need EMG points; you could just allow a small range of movement against resistive sensors (though now that I think about it, EMG might actually be easier lol). There would be a disconnect between your eyes and vestibular feedback, so VR sickness would still be a thing, though I don’t think it would be any worse than watching a movie in 3D. I think trying to move your restrained body but seeing it move in the VR system would be weird. Perhaps along the lines of the so-called "body transfer illusion".

"My question is "Could you turn around/look behind you?"" (idk how to quote single lines)

I don’t see why not, considering that it’s just a compound muscle movement (turning the hips/neck and moving the legs) that could be captured by the electrodes.

Edited February 12, 2017 by nullspaceM
CraigD Posted February 13, 2017 (Author)

"That is an interesting concept. I think it would need to be calibrated to the person (different muscle sizes) and you would lose energy a lot faster ... Due to it being hard to set up, both per use and for the first-use calibration (with possible later calibrations if your muscles increase), and the issue with being bound, I don’t think it’s usable for the masses"

I don’t imagine the setup I’m describing – the LTFBNMEMG, for lack of a better name – would require much or very complicated calibration, or possibly any at all, because, like nearly every video game control system to date, it wouldn’t adapt to its user, but require its user to adapt to it – something we, with our amazingly flexible nervous systems, are adept at. This facility for immersing ourselves in relationships with environments different from our bodies’ relationship with the ordinary world is what allows us to do things like walk on stilts, ride bikes, drive cars, and pilot aircraft, as well as experience present-day video games with interfaces consisting of just a few finger-and-thumb-controlled buttons and joysticks, eyes, and ears. Given even the clunkiest interfaces, our brains and extended nervous systems adapt to find a way to allow us to control real physical vehicles and virtual avatars, experiencing them as if they were our own bodies.

So, starting with the assumption that present-day video game controllers are actually very good at what they do, and provide an imperfect but enjoyable level of virtual reality immersion – which, for folk focused on the challenge of really building the fictional NerveGear brain-computer interface, may seem counterintuitive – I’m looking with the LTFBNMEMG merely to make a better controller, counting on the users’ nervous systems to do the difficult interfacing work, as they do with present-day controllers.
I’d better expand on what I mean by "better", starting with some discussion of the essential limitation of a standard (eg: PlayStation, Xbox, and various lookalikes for PCs) video game controller. This essential, and obvious, limitation is that they take input only from our fingers and thumbs. As such, they’re excellent for allowing us to control an avatar that is essentially a tank – a vehicle that can move along a surface in any direction, pointing and firing a gun in a direction independent of its motion – giving rise to the popularity of the first person shooter game genre, arguably the only genre ("shooter" optional) suitable for deeply immersive VR. Even when the "vehicle" you’re driving in a 1PS is rendered as a human body, it’s still effectively a tank, facing a single direction while moving in a single direction, with a few un-tank-like additions such as the ability to duck, jump, and climb.

Standard controllers fail at allowing you to control the movement of individual human body parts. Actions that are literally child’s play in the real world, like assembling Lego blocks by hand, crawling, grabbing, punching, and kicking, can’t be done realistically in a virtual one using a standard controller. This is because standard controllers only get input from our fingers and thumbs, not our arms and legs. The LTFBNMEMG seeks to remedy this by getting input from these body parts essentially ignored by present-day controllers.

"... (different muscle sizes) and you would lose energy a lot faster (with the resistance, of course, you’re supposedly using the same energy)."
"Now, you obviously wouldn’t be feeling stuff, but you might get imprints where you’re held down, which wouldn’t be pleasant."

I’m counting on our nervous systems adapting so that we don’t tire or injure ourselves straining against the restraint, unconsciously learning to use only as much muscle force as required to move the game avatar as desired, just as we quickly learn that pushing the buttons on a standard controller very hard doesn’t make the in-game action they control more forceful. Though I described the LTFBNMEMG’s restraints as like those used "to restrain an agitated patient or dangerous person", I imagine them being lighter and not difficult to escape from, so the user would have to learn not to get over-excited and use too much force.

"My question is "Could you turn around/look behind you?""

Since I’m describing an imagined device, I confidently answer "yes". :) Seriously, the "can you turn around" challenge for a LTFBNMEMG system has 2 main parts: first an EMG problem, then a kinesiology problem. Let’s assume the "turning around" action occurs while the game avatar is standing or walking. First, the system must detect, via EMG from the user’s leg and pelvic muscles, their intention to lift a foot and set it down in a different place and orientation. The voltages at the various EMG electrodes on the strapped-down LTFBNMEMG user should be similar to those at electrodes attached to a person actually performing the step-and-turn action. Next, these voltages need to drive analogs of the muscles they indicate activity from in a kinesiology model, and this model must compute the effect of this motion on the entire body. If the system works right, the modeled avatar body will perform the desired step-and-turn action, and the user’s VR display will show them an image from a direction rotated by the angle computed by the modeling program. Note how this differs from the way you turn and look behind you in a present-day video game.
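To make the two-part pipeline concrete, here’s a toy sketch of the kinesiology half: given smoothed activation levels (0.0 to 1.0) for a few leg and pelvis muscle groups, it returns how far the avatar should rotate this simulation step. The muscle names, the gating rule, and the linear gain are all placeholder assumptions on my part – nothing like real biomechanics.

```python
# Toy kinesiology step for "turning around". Inputs are already-smoothed
# EMG activation levels per muscle group; output is a yaw change for the
# avatar this frame. Names, gain, and gate threshold are all made up.

TURN_GAIN_DEG = 15.0  # assumed maximum yaw change per step, in degrees

def turn_angle(activations):
    """Net yaw change (degrees) from opposing hip-rotator activations."""
    left = activations.get("left_hip_rotators", 0.0)
    right = activations.get("right_hip_rotators", 0.0)
    # Crude gate: only count a turn if the supporting-leg muscles show
    # some activity, i.e. the user really is "stepping", not twitching.
    if activations.get("quadriceps", 0.0) < 0.1:
        return 0.0
    return (right - left) * TURN_GAIN_DEG
```

So an activation snapshot like {"left_hip_rotators": 0.1, "right_hip_rotators": 0.7, "quadriceps": 0.5} would rotate the avatar about 9 degrees to the right that step, while a stray hip twitch with no supporting-leg activity would be ignored. A real kinesiology model would of course track the whole skeleton, not one angle.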
In most present-day games, you turn "like a tank", either by moving the right joystick of a standard controller or by pointing a pointing device (eg: a PlayStation Move controller) at the edges of the screen. You are essentially using your hands to move a control that functions like the turret-aiming mechanism of a tank. In a LTFBNMEMG system, you are using your leg muscles to signal a kinesiology model to move your virtual body – a more natural scheme. Earlier, I noted that a key limitation of a standard controller is that it takes input only from your fingers and thumbs (or, in the case of the Move controller, your entire hand). The key advantage of a LTFBNMEMG controller is that it takes input from as many of your body parts as it attaches electrodes to. The muscle groups in our hands are really good – I’d say the best – ones, but naturally limited to controlling the motion of our fingers and thumbs. The LTFBNMEMG seeks to bring the other muscle groups into play.

"This is an interesting idea. I would think you don’t even need EMG points; you could just allow a small range of movement against resistive sensors (though now that I think about it, EMG might actually be easier lol)"

That EMG would be easier – the "Low Tech" part of the title – than pressure sensing was my thought, also. EMG electrodes and systems are old, proven, cheap technology, while pressure sensors are less so. Also, the rigid mechanical setup needed to sense the action of the various flexor, extensor, and torsional muscles in our limbs with pressure sensors would have to be elaborate and adjusted to precisely fit the user’s body, while EMG electrodes are simple stick-on affairs.

At the risk of further lengthening this already long thread, the stick-on nature of electrodes brings up a practical issue. The best EMG electrodes are "wet", meaning they have a bit of conductive gel and an adhesive patch edge used to stick them securely to your skin.
Commercial consumer electrode-using products have used either dry electrodes (eg: the Mattel MindFlex) or felt-covered ones you wet with saline solution before use (eg: the Emotiv EPOC). While the EPOC’s saline electrodes improve much on dry ones, they’re still not as good as gel-and-adhesive electrodes (see this BioMed Central article). So while it’s tempting to make a LTFBNMEMG using dry or saline electrodes attached to the inside of a stretchy garment – you could quickly take it on and off, and reuse it for a long time – this would give up a lot of sensitivity and accuracy.

"I think trying to move your restrained body but seeing it move in the VR system would be weird. Perhaps along the lines of the so-called "body transfer illusion"."

I agree – I think it would, at first, be deeply weird, but not weirder than how it first feels to move and aim your point of view in a 1PS while sitting still and controlling everything with your thumbs. My biggest worry with the LTFBNMEMG is that some of the arm-related muscle groups work naturally only when the arms are in particular positions – for example, overhead vs forward vs to the sides – so the system might not be able to model some body positions as well as others, or at all. As I said above, I’m very much counting on the nervous system’s ability to adapt to fix the limitations of the system. I don’t see any way to test whether this is right other than to actually build and test the system.

""My question is "Could you turn around/look behind you?"" (idk how to quote single lines) I don’t see why not, considering that it’s just a compound muscle movement (turning the hips/neck and moving the legs) that could be captured by the electrodes."

I was thinking, with an eye to the short-term practical, that the head wouldn’t be restrained, and the system would use a COTS headset like the HTC Vive. Since inaccuracy in eye-direction tracking is a leading cause of VR sickness, we’d want the most responsive and accurate possible system for it.
It’s hard to get much more accurate than allowing the head to move and physically tracking it.
BrainJackers Posted February 13, 2017

This becomes a lot simpler when humans adapt to it, not the other way around. If you were using it and then switching to normal walking, that could be very disorienting at first – learning to walk with slight muscle movements and then having to actually walk. You couldn’t make the bonds too weak (what if a person got excited, the bonds ripped off, the person fell, the headset fell on the ground and broke, and worst of all... the player’s rank went down for losing the match?!). I feel turning on the spot would be awkward. Lift leg, turn foot, do the same with the other leg is roughly a 90-degree turn (it could be used for 180 if you really wanted to), but if your legs are held down... I guess it does depend on how well the avatar shows your actions and how strong the illusion that the avatar’s body is yours turns out to be.

--@Kayaba
formad Posted March 19, 2017

Can I ask for the scientific definition of EEG?
CraigD Posted March 19, 2017 (Author)

Welcome to hypography, formad! :) Please feel free to start a topic in the introductions forum to tell us something about yourself.

"Can I ask for the scientific definition of EEG?"

EEG is an abbreviation of ElectroEncephaloGraphy. "Electro" means "using electricity", "Encephalo" means "brain" (literally, "within the head"), and "graphy" means "to make a picture". So EEG means "making a picture of the brain using electricity". EMG is an abbreviation of ElectroMyoGraphy. "Myo" means "muscle", so EMG means "making a picture of muscles using electricity". EMG can be used to measure the activity of practically any muscle, though the closer to the skin the muscle is, the more accurately EMG can measure it. Hyperlinks are your best friend on the World Wide Web :) – click on them for lots more information on EEG and EMG.

In this thread, I suggest that rather than attempting to build a deeply immersive VR system by reading the brain with EEG in order to determine which muscles the brain is commanding to move, it would be much easier to read the muscles themselves with EMG. To follow this argument, it’s key to understand the differences and similarities between EEG and EMG.
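One difference worth a quick illustration: both EEG and EMG measure small voltages at the skin, but the muscle signal is far bigger than the brain signal that reaches the scalp – scalp EEG is roughly 10 to 100 microvolts, while surface EMG over an active muscle commonly reaches hundreds of microvolts to a few millivolts. Here’s a toy Python sketch of that contrast; the 100 µV cutoff is a rough textbook figure, not a calibrated value.

```python
import math

# Toy illustration of EEG vs EMG signal scale. Scalp EEG is tiny
# (roughly 10-100 microvolts); surface EMG over an active muscle is
# commonly hundreds of microvolts to a few millivolts. The 100 uV
# cutoff below is a rough textbook figure, not a calibrated value.

def rms_uv(samples_uv):
    """Root-mean-square amplitude of a signal, in microvolts."""
    return math.sqrt(sum(s * s for s in samples_uv) / len(samples_uv))

def likely_source(samples_uv):
    """Very rough guess at which kind of biosignal an amplitude suggests."""
    if rms_uv(samples_uv) < 100:
        return "EEG-scale"
    return "EMG-scale"
```

The practical upshot is the argument above: a signal that big is much easier to pick up cleanly with cheap electrodes, which is a large part of why I propose reading the muscles rather than the brain.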