What is lacking in VR input

Playing with VR tools, in particular the Leap Motion, makes some limitations clear very quickly.

The biggest and almost universal shortcoming in apps that use the Leap Motion is their lack of physics. This is most apparent, and most troublesome, when your virtual hand comes into contact with a virtual object.

We spend all our lives learning (and sometimes relearning) the physics of the real world. When I push my hand against this wall, it does not go through it. When I grasp this tool, my fingers do not go through it. Then we start using VR demos and the like, and magically our hand and fingers will pass through anything, in total defiance of the real world. It is bad enough not having tactile feedback, but when your brain cannot believe what it is seeing, it is simply unreal.

For example, when playing Leap Motion’s otherwise delightful Chess demo (an excellent way of getting the hang of using Leap Motion), you can pass your hand straight through the chess pieces. This is neat in some ways – you do not keep knocking the pieces over with your clumsiness – but distractingly unreal, and it makes a piece far harder to grasp when you do want to pick one up.

Underlying this is a limitation of OpenGL for this type of VR: the letter G. OpenGL is a graphics library, which uses the GPU (that letter again) to render graphics. It performs hidden-surface removal swiftly on the GPU, but it offers no collision detection at all: the app has to do all that geometric work itself, typically on the CPU, to determine when your finger comes into contact with the surface of a solid.
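To make the point concrete, here is a minimal sketch of the kind of test such an app has to write for itself – a fingertip modelled as a small sphere against a chess piece modelled as an axis-aligned bounding box. All the names and dimensions are illustrative assumptions, not part of any Leap Motion API:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    # Axis-aligned bounding box: min and max corners as (x, y, z) tuples, in metres.
    lo: tuple
    hi: tuple

def fingertip_touches(center, radius, box):
    """True if a sphere at `center` with `radius` intersects `box`.

    Clamp the sphere centre onto the box, then compare the squared
    distance to the squared radius (avoiding a square root).
    """
    dist_sq = 0.0
    for c, lo, hi in zip(center, box.lo, box.hi):
        nearest = max(lo, min(c, hi))  # closest point on the box along this axis
        dist_sq += (c - nearest) ** 2
    return dist_sq <= radius * radius

# A pawn-sized box, and a fingertip just grazing its side versus one well clear:
pawn = AABB(lo=(0.0, 0.0, 0.0), hi=(0.03, 0.06, 0.03))
print(fingertip_touches((0.035, 0.03, 0.015), 0.008, pawn))  # True
print(fingertip_touches((0.10, 0.03, 0.015), 0.008, pawn))   # False
```

Simple enough for one pawn – but a real app must run tests like this every frame, for every fingertip against every object, which is exactly the work the GPU is never asked to do.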

We really need an OpenVRL if we are to make VR commonplace.