Virtual Reality is a sensory input system that transports the user into a virtual world. Current technology uses head-mounted displays as the primary means of sensory input. These displays use position and orientation tracking of various levels of sophistication to immerse the user in the virtual world and provide a sense of physical presence. Hand tracking can amplify that sense of presence.
Since everyone has a natural, physical sense of where their own hands are in space, it is important to align the physical and virtual worlds. You can achieve this alignment by placing the LeapHandController at the virtual location corresponding to its real-world location. In VR, the common frame of reference between the physical and the virtual world is the HMD.
As part of its VR Support, Unity controls the scene cameras to match the movement of the HMD within its tracking area. The LMHeadMountedRig prefab uses the camera location provided by Unity to place the LeapHandController at the correct position in the virtual world. The coordinates in the tracking data are then transformed from Leap space to Unity space relative to the position and orientation of the LeapHandController game object.
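Conceptually, this transformation can be sketched as follows. This is an illustrative approximation, not the SDK's actual code: `handController` stands in for the LeapHandController's Transform, and the axis and unit conventions in the comments are assumptions about the mapping between the two spaces.

```csharp
using UnityEngine;

// Hypothetical sketch of the Leap-to-Unity transform performed by the
// LeapHandController (names and conventions are illustrative, not the real API).
public class LeapToUnityExample : MonoBehaviour
{
    public Transform handController; // the LeapHandController game object

    public Vector3 LeapToUnity(Vector3 leapPositionMillimeters)
    {
        // Leap space is right-handed and measured in millimeters;
        // Unity space is left-handed and measured in meters,
        // so one axis is mirrored and the units are scaled.
        Vector3 local = new Vector3(
            leapPositionMillimeters.x,
            leapPositionMillimeters.y,
            -leapPositionMillimeters.z) * 0.001f;

        // Re-express the point relative to the position and orientation
        // of the LeapHandController game object in the scene.
        return handController.TransformPoint(local);
    }
}
```

Because the tracking data is interpreted relative to the LeapHandController's transform, moving that game object with the camera is what keeps the virtual hands aligned with the user's real hands.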
If you use an HMD SDK directly, rather than the built-in Unity VR Support, you may need to add the Leap Motion components to an existing camera rig. This is described in Building a Custom VR Rig.
The primary VR headsets available at this time include the desktop HMDs, the Oculus Rift and the HTC Vive, as well as Android devices in a Cardboard, Gear VR, or similar mount.
Among the desktop systems, the differences important to Leap Motion development are small to nonexistent. If you mount the peripheral at the center of the HMD, looking directly forward, then the standard Unity prefabs should work fine for both the Rift and the Vive.
The Android SDK is still in development. The primary difference between mobile and desktop development is the lower computing power available on mobile platforms. This makes frame rate and latency issues more important, since there is simply less headroom in every part of an application’s CPU and GPU budgets.
One small difference in behavior among headsets is how they determine the height of the camera point of view when a scene is loaded. The Oculus Rift, for example, sets the height based on the position of the HMD within its head-tracking volume when the scene starts. The HTC Vive, on the other hand, sets the camera height to the HMD’s actual position above the floor. To create a scene that works for both, you must compensate for the possible difference in starting height. This is not strictly a Leap Motion topic, but we do include a utility script, VRHeightOffset.cs, that adjusts the height of the LMHeadMountedRig when the scene starts. You can set heights per HMD type.
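The height compensation described above can be sketched as a simple startup script. This is a minimal illustration of the idea, not the actual source of VRHeightOffset.cs; the per-device structure and the way device names are matched are assumptions.

```csharp
using UnityEngine;
using UnityEngine.VR; // VRSettings in Unity 5.x

// Illustrative sketch: raise the rig at startup by a per-HMD offset,
// so headsets that report height relative to the floor and headsets
// that report height relative to a tracking origin start at a similar eye level.
public class HeightOffsetSketch : MonoBehaviour
{
    [System.Serializable]
    public struct DeviceHeightPair
    {
        public string deviceName;  // substring of the loaded VR device name, e.g. "Oculus"
        public float heightOffset; // meters to add when that device is active
    }

    public DeviceHeightPair[] deviceOffsets;

    void Start()
    {
        string loaded = VRSettings.loadedDeviceName;
        foreach (var pair in deviceOffsets)
        {
            if (!string.IsNullOrEmpty(loaded) && loaded.Contains(pair.deviceName))
            {
                // Move the whole rig (and therefore the camera and
                // LeapHandController) up by the configured amount.
                transform.position += Vector3.up * pair.heightOffset;
                break;
            }
        }
    }
}
```

Attaching such a script to the root of the camera rig keeps the camera and the LeapHandController aligned, since both move together with the rig.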
If you are familiar with developing for the Leap Motion device in a desktop context, consider the following differences in the VR context: