New Features in LPVR Version 4.8

Introduction

Our LPVR series is the primary solution on the market for users who want to expand the scope of their virtual reality or mixed reality headsets by using external tracking systems such as ART, OptiTrack or Vicon. Use cases are varied, ranging from entertainment (location-based VR) and engineering (ergonomic studies in AR) to helicopters and virtual cars that actually drive on Japan's public roads. At LP-Research, we have continuously developed the LPVR series of solutions over the past years. We have expanded its scope, added support for new headsets, and included new functions.

The image below shows an LPVR installation based on design content created by automotive prototyping company Phiaro Inc. in Tokyo, Japan.

The latest release is version 4.8.0, which we released in June 2023. As usual, it comes in two flavors:

  • LPVR-CAD which supports stationary use-cases, and
  • LPVR-DUO which is our variant for moving platforms, be they cars or simulators.

We support all the major tethered headsets (SteamVR headsets, Pimax, Varjo). We also support Meta Quest series headsets and the Vive Focus 3 with our LPVR-Air series of products. If you have a current support contract, you are eligible for an update.

A brief overview of LPVR-CAD and LPVR-DUO

It's perhaps best to summarize some of the capabilities that our products add to the various commercial headsets. For more details, feel free to visit the product pages for LPVR-CAD and LPVR-DUO, respectively:

  • Cover arbitrarily large areas and have VR scenes take place in them
  • Have an arbitrary number of users interact in such a space
  • Do VR/AR inside a car or any other moving platform
  • Track your user with sub-millimeter precision, together with any number of props, with no perceptible latency
  • Use SteamVR controllers without the Lighthouses

We can do this because our proprietary sensor fusion algorithms allow us to combine the measurements of high-precision motion tracking camera systems with the measurements of the headset's Inertial Measurement Unit (IMU). For the case of a moving platform, we can additionally incorporate data from an IMU installed on the platform to maintain responsive, accurate performance in those circumstances as well.
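The details of our fusion algorithms are proprietary, but the underlying idea can be pictured with a minimal complementary-filter sketch in Python. Everything below (the class, the gain, the restriction to a single yaw axis) is an invented simplification for illustration, not our actual implementation:

    import numpy as np

    class ComplementaryFusion:
        """Toy opto-inertial fusion for a single yaw axis: high-rate gyro
        integration corrected toward low-rate, drift-free optical data."""

        def __init__(self, blend=0.02):
            self.blend = blend  # strength of the optical correction
            self.yaw = 0.0      # estimated yaw angle in radians

        def on_imu(self, gyro_z, dt):
            # Integrate the angular rate: responsive, but drifts over time.
            self.yaw += gyro_z * dt

        def on_optical(self, yaw_measured):
            # Nudge the integrated estimate toward the optical measurement,
            # taking the shortest way around the circle.
            error = (yaw_measured - self.yaw + np.pi) % (2 * np.pi) - np.pi
            self.yaw += self.blend * error

The actual algorithms of course fuse full six-degree-of-freedom poses, and on a moving platform the platform IMU enters the fusion as well, as described above.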

New Features

For a short overview of the changes in each version, please refer to our Release Notes. Here we will give some highlights and dig into some details. LPVR 4.8.0 is the result of continuous development in the half year or so since our previous releases.

New GUI organization and completely graphical LPVR-DUO configuration

The most obvious change for users is the reorganized GUI, which streamlines the setup, completely does away with the need to enter any JSON code, and presents a more cleanly organized interface. Especially for our LPVR-DUO users this means a vast simplification of the system. We have kept the old configuration interface as an option to guarantee compatibility with existing workflows, but we don't expect that users will have to resort to it; please let us know if your experience is different. If your headset tracking body is already calibrated, you should now be able to set up LPVR-DUO with some five mouse clicks.

When you load up the configuration, it will look something like this. Note that you are no longer led to a JSON editor where you have to enter the configuration manually. Instead you are greeted by a friendly, informative GUI.

At the bottom of the page, you will see links to the Documentation, a Calibration screen, and an Expert Mode, which is basically the old JSON editor. The Calibration screen is used for the setup of the Platform IMU and, in the usual case, simplifies it down to a few mouse clicks. No more hunting for quaternion values in log files! Please check out the corresponding documentation.

Varjo headset eye point adjustments

Together with Varjo and with the cooperation of several of our customers, we were able to identify and correct some imprecisions in the handling of the headset's position. These would show up as small coordinate mismatches between the optical tracking coordinates and the coordinates reported to applications such as VRED or Unity. Additionally, they would lead to some unnatural motion of AR overlays, especially when turning the head.

Optimal performance requires updating both Varjo Base to at least version 3.10 and LPVR to at least version 4.8.0. Updating Varjo Base fixes the underlying issue; updating LPVR corrects the interfacing. If you cannot update Varjo Base, you can still update LPVR-CAD-Varjo to version 4.8.0 and enable a workaround. To do so, open the Varjo Base configuration GUI on the System tab, add patchPositionBug=true in the field labeled Additional Settings, and click the “Submit” button. Note that while this works around the issue in Varjo Base versions before 3.10, the option is not recommended with updated versions of Varjo Base.

Varjo configuration refinements

Different environments call for different setups. Some of our users work from administrator accounts; others have multiple users but want them all to use the same configuration. We have updated the way we organize on-disk storage of the configuration to address these possibilities. In particular, you can now establish a system-wide configuration default and override it per-user.

In the case of LPVR-CAD, the configuration is additionally entered inside Varjo Base by default, but to allow users greater flexibility it has always been possible to use our web interface or files on disk to perform the configuration. While these are not the preferred choice, it was previously not possible to see from Varjo Base whether the on-disk configuration is in use. We have added a prominent status indicator that points to the configuration, as in the screenshot below. In the case of LPVR-DUO the configuration is always loaded from disk, as the added flexibility of our configuration page is required, but in LPVR-CAD the user will have to opt in. We describe the process briefly below.

The user can set up a global, system-wide default configuration in %ProgramData%/Varjo/VarjoTracking/Plugins/LP-Research/LPVR-CAD-Varjo/configurationsettings.json. Changes on the configuration page will not change this configuration; instead they will be written to the per-user configuration %LocalAppData%/LP-Research/LPVR-CAD-Varjo/settings.json. If either file is present, the configuration inside Varjo Base will be ignored. For LPVR-DUO there is no configuration interface inside Varjo Base; instead the user always points their web browser to http://localhost:7119. This configuration relies on the same files, but with the subdirectory LPVR-CAD replaced by LPVR-DUO.
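To make the precedence explicit, here is a small Python sketch that resolves which configuration source wins. The paths are the ones given above; the helper function itself is hypothetical and only mirrors the lookup order described in this section:

    import os

    # The two configuration files described above.
    SYSTEM_CONFIG = os.path.expandvars(
        r"%ProgramData%\Varjo\VarjoTracking\Plugins\LP-Research"
        r"\LPVR-CAD-Varjo\configurationsettings.json")
    USER_CONFIG = os.path.expandvars(
        r"%LocalAppData%\LP-Research\LPVR-CAD-Varjo\settings.json")

    def active_config_source():
        """The per-user file overrides the system-wide default; if either
        exists, the configuration inside Varjo Base is ignored."""
        if os.path.isfile(USER_CONFIG):
            return USER_CONFIG
        if os.path.isfile(SYSTEM_CONFIG):
            return SYSTEM_CONFIG
        return "configuration entered inside Varjo Base"

    print(active_config_source())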

LPVR-DUO demonstration

In order to familiarize you with the neighborhood of our office and, more importantly, to show what can be done with LPVR-DUO, here is an in-car mixed reality demonstration. The video screens on the glove box may look almost real, but they are an overlay superimposed on the see-through camera image of a Varjo XR-3 using an out-of-the-box LPVR-DUO setup. Notice how the screens stay firmly in place during turns of the user's head as well as turns of the car itself, even when diving into some of the steeper roads of the Motoazabu area in central Tokyo.

Large-scale VR Application Case: the Holodeck Control Center

The AUDI Holodeck

LPVR interaction

Our large-scale VR solution allows any SteamVR-based Virtual Reality software (e.g. Unity, Unreal, VRED) to seamlessly use the HTC VIVE headset together with most large-room tracking systems available on the market (OptiTrack, Vicon, ART). It enables easy configuration and fits into the SteamVR framework, minimizing the effort needed to port applications to large rooms.

One of our first users, Lightshape, has recently released a video showing what they built with our technology. They call it the Holodeck Control Center, an application that creates multi-user collaborative VR spaces. In it, users can communicate and see the same scene whether they are in the same real room or in different locations. The installation showcased in the video is used by German car maker Audi to study cars that haven't been built yet.

Our technology is essential to getting the best possible VR experience on the 15 m × 15 m main VR surface, combining optical tracking data and IMU measurements to provide precise and responsive positioning of the headsets. Please have a look at Lightshape's video below.

Ready for the HTC Vive Pro

In the near future, this installation will be updated to the HTC Vive Pro, which our software already supports. The increased pixel density of this successor to the HTC Vive will make the scenes look even more realistic. The resolution is high enough to actually read the various panels once you are in the driver's seat! Beyond that, we are also busy studying applications of the front-facing cameras of the Vive Pro to improve multi-user interaction.

Optical-Inertial Sensor Fusion

Optical position tracking and inertial orientation tracking are well-established measurement methods. Each of these methods has its specific advantages and disadvantages. In this post we show an opto-inertial sensor fusion algorithm that joins the strengths of both to create a capable system for position and orientation tracking.

How It Works

The reliability of position and orientation data provided by an optical tracking system (outside-in or inside-out) can, for some applications, be compromised by occlusions and slow system reaction times. In such cases it makes sense to combine optical tracking data with information from an inertial measurement unit located on the device. Our optical-inertial sensor fusion algorithm implements this functionality, either for integration with an existing tracking system or for the development of a novel system for a specific application case.

The graphs below show two examples of how the signal from an optical positioning system can be improved using inertial measurements. Slow camera frame rates or occasional drop-outs are compensated by information from the integrated inertial measurement unit, improving the overall tracking performance.
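As a rough illustration, the Python sketch below bridges slow or missing optical samples by dead-reckoning with IMU acceleration and pulls the estimate back once an optical fix arrives. It is a one-dimensional toy with an invented correction gain, not the algorithm that ships in our products:

    class PositionPredictor:
        """Toy 1-D bridge over slow or missing optical position samples."""

        def __init__(self):
            self.pos = 0.0  # estimated position (m)
            self.vel = 0.0  # estimated velocity (m/s)

        def on_imu(self, accel, dt):
            # Between optical frames, integrate acceleration twice.
            # Responsive, but the error grows until the next optical fix.
            self.pos += self.vel * dt + 0.5 * accel * dt * dt
            self.vel += accel * dt

        def on_optical(self, pos_measured, gain=0.5):
            # Pull the dead-reckoned estimate back toward the drift-free
            # optical measurement; a Kalman filter would weight this
            # correction by the relative uncertainty of the two sources.
            self.pos += gain * (pos_measured - self.pos)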

Combination of Several Optical Trackers

For a demonstration, we combined three NEXONAR IR trackers and an LPMS-B2 IMU, mounted together as a hand controller. The system allows position and orientation tracking of the controller with high reliability and accuracy. It combines the strong aspects of outside-in IR tracking with inertial tracking, improving the system’s reaction time and robustness against occlusions.

Optical-Inertial Tracking in VR

The tracking of virtual reality (VR) headsets is one important area of application for this method. To keep the user immersed in a virtual environment, high quality head tracking is essential. Using opto-inertial tracking technology, outside-in tracking as well as inside-out camera-only tracking can be significantly improved.

Machine Learning for Context Analysis

Deterministic Analysis vs. Machine Learning for Context Analysis

Machine learning for context analysis and artificial intelligence (AI) are important methods that allow computers to classify information about their environment. Today's smart devices integrate an array of sensors that constantly measure and save data. At first thought one would imagine that the more data is available, the easier it is to draw conclusions from it. In fact, however, larger amounts of data become harder to analyze using deterministic methods (e.g. thresholding). While such methods can by themselves work efficiently, it is difficult to decide which analysis parameters to apply to which parts of the data.

Using machine learning techniques, on the other hand, this procedure of finding the right parameters can be greatly simplified. By teaching an algorithm which information corresponds to a certain outcome using training and verification data, analysis parameters can be determined automatically, or at least semi-automatically. A wide range of machine learning algorithms exists, including the currently very popular convolutional neural networks.
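As a toy illustration of the difference, the following Python sketch lets a decision tree learn the split points that a deterministic approach would require an engineer to hand-tune. The features and labels are random placeholders and scikit-learn is assumed to be available; neither reflects our production pipeline:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Placeholder feature windows from an IMU; each row could hold e.g.
    # [mean |accel|, var |accel|, dominant frequency] over a few seconds.
    X_train = np.random.rand(1000, 3)
    y_train = np.random.randint(0, 5, 1000)  # 0=rest ... 4=on train

    # Deterministic approach: hand-pick thresholds such as
    # "var |accel| > 0.4 means running" and wire them into a state machine.
    # Learned approach: the tree finds equivalent split points on its own.
    clf = DecisionTreeClassifier(max_depth=5)
    clf.fit(X_train, y_train)

    print(clf.predict(X_train[:5]))  # predicted activity classes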


Figure 1 – Overview of the complete analysis system with its various data sources

Context Analysis

Many health care applications rely on the correct classification of a user's daily activities, as these strongly reflect their lifestyle and possible associated health risks. One way of detecting human activity is to monitor body motion using motion sensors such as our LPMS inertial measurement unit series. In the application described here we monitor a person's mode of transportation, specifically:

  1. Rest
  2. Walking
  3. Running
  4. In car
  5. On train

To compare deterministic analysis with a machine learning approach to context analysis, we first implemented a state machine based on deterministic analysis parameters. An overview of the components of this system is shown in Figure 1.


Figure 2 – Deterministic approach

The result (Figure 2) is a relatively complicated state machine that needs to be tuned very carefully. It might have been down to our lack of patience, but in spite of our best efforts we were not able to reach detection accuracies of more than around 60%. Rather than spending a lot more time on manually tuning this algorithm, we switched to a machine learning approach.
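To give a flavor of why such a state machine becomes brittle, a fragment of it might look like the following Python sketch. All states, inputs and thresholds here are invented for illustration; they are not the values we actually used:

    def next_state(state, accel_var, speed):
        """One hand-tuned transition table; every constant below is a
        parameter someone has to pick and re-tune for new data."""
        if state == "rest":
            if accel_var > 0.05:
                return "walking"
        elif state == "walking":
            if accel_var > 0.40:
                return "running"
            if accel_var < 0.02 and speed > 2.0:
                return "in_car"  # smooth motion, yet moving fast
            if accel_var < 0.02:
                return "rest"
        elif state == "running":
            if accel_var < 0.40:
                return "walking"
        # ... many more transitions, each with its own tuned threshold ...
        return state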


Figure 3 – Machine learning approach

The eventual system structure shown in Figure 3 looks noticeably simpler than the deterministic state machine. Besides standard feature extraction, a central part of the algorithm is the data logging and training module. We collected over one million training samples to generate the parameters for our detection network. As a result, even though we used a relatively simple machine learning algorithm, we were able to reach a detection accuracy of more than 90%. A comparison between ground truth data and classification results from raw data is displayed in Figure 4.
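The feature extraction step mentioned above can, in its simplest form, be pictured as the windowing sketch below. The window length and the particular features are assumptions for illustration only:

    import numpy as np

    def extract_features(accel_mag, window=200):
        """Slice a NumPy array of accelerometer magnitudes into fixed-size
        windows and reduce each window to a small feature vector."""
        n_windows = len(accel_mag) // window
        features = []
        for i in range(n_windows):
            w = accel_mag[i * window:(i + 1) * window]
            features.append([
                w.mean(),                   # posture / gravity component
                w.std(),                    # overall motion intensity
                np.abs(np.diff(w)).mean(),  # jerkiness of the motion
            ])
        return np.asarray(features)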


Figure 4 – Result graphs comparing ground truth and analysis output for ~1M data points

Conclusion

We strongly believe in the use of machine learning / AI techniques for sensor data classification. In combination with LP-RESEARCH sensor fusion algorithms, these methods add a further layer of insight for our data analysis customers.

If this topic sounds familiar to you and you are looking for a solution to a related problem, contact us for further discussion.

Sensor Fusion for Virtual Reality Headset Tracking

In order to test the functionality of our sensor fusion algorithm for head-mounted display pose estimation, we connected one of our IMUs (LPMS-CURS2), a Nexonar infrared (IR) beacon and an LCD display to a Baofeng headset. The high stability of the IR tracking and the orientation information from the IMU as input to the sensor fusion algorithm result in accurate, robust and responsive head tracking. See the figure below for details of the test setup. The video shows the resulting performance of the system.
