About Tobias Schlüter

Before I came to LP-Research, I used to do particle physics research. In my former career I studied hadrons (i.e. particles composed of quarks) at various particle accelerator experiments; the results can be found here. At LP-Research, I program FPGAs, GPUs, MCUs and CPUs to process data coming from sources as varied as IMUs, video cameras and heart-rate monitors. I can assure you that particle physics is a perfectly valid qualification for this kind of work. Trust me, I'm a scientist.

New Features in LPVR Version 4.8

Introduction

Our LPVR series is the primary solution on the market for users who want to expand the scope of their virtual reality or mixed reality headsets by using external tracking systems such as ART, OptiTrack or Vicon. Use cases are varied, ranging from entertainment (location-based VR) and engineering (ergonomic studies in AR) to helicopters and virtual cars actually driving on Japan’s public roads. At LP-Research, we have continuously developed the LPVR series of solutions over the past years. We have expanded its scope, added support for new headsets, and included new functions.

The image below shows an LPVR installation based on design content created by automotive prototyping company Phiaro Inc. in Tokyo, Japan.

The latest release is version 4.8.0, which we released in June of 2023.  As usual, it comes in two flavors:

  • LPVR-CAD which supports stationary use-cases, and
  • LPVR-DUO which is our variant for moving platforms, be they cars or simulators.

We support all the major tethered headsets (SteamVR headsets, Pimax, Varjo).  We also support Meta Quest series headsets and the Vive Focus 3 with our LPVR-Air series of products. If you have a current support contract, you are eligible for an update.

A brief overview of LPVR-CAD and LPVR-DUO

It’s perhaps best to summarize some of the capabilities that our products add to the various commercial headsets.  For more details, feel free to visit the product pages for LPVR-CAD and LPVR-DUO, respectively:

  • Cover arbitrarily large areas and have VR scenes take place in them
  • Have an arbitrary number of users interact in such a space
  • Do VR/AR inside a car or any other moving platform
  • Track your user with sub-millimeter precision, together with any number of props, with no perceptible latency
  • Use SteamVR controllers without the Lighthouses

We can do this because our proprietary sensor fusion algorithms allow us to combine the measurements of high-precision motion tracking camera systems with the measurements of the headset’s Inertial Measurement Unit (IMU). For the case of a moving platform, we can additionally incorporate data from an IMU installed on the platform to provide responsive, accurate performance in those circumstances as well.

New Features

For a short overview of the changes in each version, please refer to our Release Notes. Here we will give some highlights and dig into some details. LPVR 4.8.0 is the result of continuous development in the half year or so since our previous releases.

New GUI organization and completely graphical LPVR-DUO configuration

The most obvious change for users is the reorganized GUI, which streamlines the setup, completely does away with the need to enter any JSON code, and presents a more cleanly organized interface. Especially for our LPVR-DUO users this means a vast simplification of the system.  We have kept the old configuration interface as an option to guarantee compatibility with existing workflows, but we don’t think users will have to resort to it. Please let us know if your experience is different. If your headset tracking body is already calibrated, you should now be able to set up LPVR-DUO with some five mouse clicks.

When you load up the configuration, it will look something like this. Note that you are no longer led to a JSON editor where you have to enter the configuration manually. Instead you are greeted by a friendly, informative GUI.

At the bottom of the page, you will see links to the Documentation, a Calibration screen, and an Expert Mode, which is basically the old JSON editor. The Calibration screen is used to set up the Platform IMU and, in the usual case, simplifies the process to a few mouse clicks. No more looking for quaternion values in log files! Please check out the corresponding documentation.

Varjo headset eye point adjustments

Together with Varjo and with the cooperation of several of our customers, we were able to identify and correct some imprecisions in the handling of the headset’s position. These showed up as small coordinate mismatches between the optical tracking coordinates and the coordinates reported to VRED, Unity, etc. Additionally, they led to some unnatural motion of AR overlays, especially when turning the head.

Optimal performance requires updating both Varjo Base to at least version 3.10 and LPVR to at least version 4.8.0.  Updating Varjo Base fixes the underlying issue; updating LPVR corrects the interfacing.  If you cannot update Varjo Base, you can still update LPVR-CAD-Varjo to version 4.8.0 and enable a workaround.  To do so, open the Varjo Base configuration GUI on the System tab, add patchPositionBug=true in the field labeled Additional Settings, and click the “Submit” button. Note that while this works around the issue in Varjo Base versions before 3.10, it is not recommended to use this option with updated versions of Varjo Base.

Varjo configuration refinements

Different environments call for different setups.  Some of our users use administrator accounts; others have multiple users but want them to share the same configuration.  We have updated the way we organize on-disk storage of the configuration to address these possibilities.  In particular, you can now establish a system-wide configuration default and override it per user.  In the case of LPVR-CAD, the configuration is entered inside Varjo Base by default, but to allow users greater flexibility it has always been possible to use our web interface or files on disk to perform the configuration.  While these are not the preferred choice, it was previously not possible to see from Varjo Base whether the on-disk configuration was in use.  We have added a prominent status indicator that points to the configuration in use, as in the screenshot below.  In the case of LPVR-DUO the configuration is always loaded from disk, as the added flexibility of our configuration page is required; in LPVR-CAD the user has to opt in. We describe the process briefly below.

The user can set up a global, system-wide default configuration in %ProgramData%/Varjo/VarjoTracking/Plugins/LP-Research/LPVR-CAD-Varjo/configurationsettings.json. Changes on the configuration page will not change this file but will instead be written to the per-user configuration %LocalAppData%/LP-Research/LPVR-CAD-Varjo/settings.json. If either file is present, the configuration inside Varjo Base will be ignored. For LPVR-DUO there is no configuration interface inside Varjo Base; instead the user always points their web browser to http://localhost:7119. This configuration relies on the same files, but with the subdirectory LPVR-CAD replaced by LPVR-DUO.
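If you want to verify which file will take effect, a quick sanity check from a Windows command prompt might look like the following sketch; the precedence simply mirrors the description above:

    :: Hypothetical sketch: check which LPVR-CAD-Varjo configuration takes effect
    if exist "%LocalAppData%\LP-Research\LPVR-CAD-Varjo\settings.json" (
        echo Per-user configuration present - it overrides the system-wide default
    ) else if exist "%ProgramData%\Varjo\VarjoTracking\Plugins\LP-Research\LPVR-CAD-Varjo\configurationsettings.json" (
        echo System-wide default configuration present
    ) else (
        echo No on-disk configuration - the settings from Varjo Base apply
    )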

LPVR-DUO demonstration

In order to familiarize you with the neighborhood of our office and, more importantly, to show what can be done with LPVR-DUO, here is an in-car mixed reality demonstration. The video screens on the glove box may look almost real, but they are an overlay superimposed on the see-through camera image of a Varjo XR-3 using an out-of-the-box LPVR-DUO set. Notice how the screens firmly remain in place during turns of the user’s head as well as turns of the car itself, even when diving into some of the steeper roads of the Motoazabu area in central Tokyo.

Large-scale VR Application Case: the Holodeck Control Center

The AUDI Holodeck

LPVR interaction

Our large-scale VR solution allows any SteamVR-based (e.g. Unity, Unreal, VRED) Virtual Reality software to seamlessly use the HTC VIVE headset together with most large-room tracking systems available on the market (OptiTrack, Vicon, ART). It enables easy configuration and fits into the SteamVR framework, minimizing the effort needed to port applications to large rooms.

One of our first users, Lightshape, has recently released a video showing what they built with our technology.  They call it the Holodeck Control Center, an application that creates multi-user collaborative VR spaces. In it, users can communicate and see the same scene whether they are in the same real room or in different locations. The installation showcased in the video is used by German car maker Audi to study cars that haven’t been built yet.

Our technology is essential to getting the best possible VR experience on the 15m × 15m main VR surface, combining optical tracking data and IMU measurements to provide precise and responsive positioning of the headsets.  Please have a look at Lightshape’s video below.

Ready for the HTC Vive Pro

In the near future, this installation will be updated to the HTC Vive Pro, which our software already supports. The increased pixel density of this successor of the HTC Vive will make the scenes look even more realistic. The resolution is high enough to actually read the various panels once you are in the driver’s seat! Besides that, we are also busy studying applications of the front-facing cameras of the Vive Pro in order to improve multi-user interaction.

Location-based VR Tracking for All SteamVR Applications

LPVR Pipeline Overview

UPDATE 1 – LPVR now offers VIVE Pro support!

UPDATE 2 – LPVR can now talk to all optical tracking systems that support VRPN (VICON, ART etc.).

UPDATE 3 – Of special interest to automotive customers may be that this also supports Autodesk VRED.

Current VR products cannot serve several important markets due to limitations of their tracking systems.  Out of the box, both the HTC Vive and the Oculus Rift are limited to tracking areas smaller than 5m × 5m, which is too small for most multi-player applications. We have previously presented our solution that combines our motion sensing IMU technology, OptiTrack camera-based tracking and the HTC Vive to allow responsive multiplayer VR experiences over larger areas. Because of the necessary interfacing, however, applications still needed to be prepared specifically for that solution.

We have now further improved our software stack such that we can provide a SteamVR driver for our solution. On the one hand, this means that any existing SteamVR application automatically supports the arbitrarily large tracking areas covered by the OptiTrack system. On the other hand, it means that no additional plugins for Unity, Unreal or your development platform of choice are needed — support is automatic. Responsive behavior is guaranteed by using LP-RESEARCH’s IMU technology in combination with standard low-latency VR technologies like asynchronous time warping, late latching, etc. An overview of the functionality of the system is shown in the image above.

LPMS-NAV Navigation Sensors for Mobile Robots and Automated Guided Vehicles (AGV)

We are proud to announce the release of a new series of high-precision sensors for applications in autonomous vehicle navigation. The sensors are based on quartz-vibration gyroscopes with low-noise, low-drift characteristics. They have excellent capabilities for measurement of slow to medium speed rotations.

We offer the new sensors in various versions with different communication interfaces and housing options: LPMS-NAV2, LPMS-NAV2-RS232 and LPMS-NAV2-RS422. Please find more detailed information on our products page.

The following video shows a use-case of one of our customers in China. The company is using the LPMS-NAV2-RS232 sensor for mobile robot navigation. Automated guided vehicles (AGVs) and cleaning robots are two of the principal application areas of the LPMS-NAV series.

If you have any interest in this product, please contact us for further information.

Robot Operating System and LP-Research IMUs? Simple!

NOTE: We have released a new version of our ROS / ROS 2 driver, please refer to this post.


Introduction

Robot Operating System (ROS) is a tool commonly used in the robotics community to pass data between the various subsystems of a robot setup. We at LP-Research are also using it in various projects, and it is actually very familiar to our founders from the time of their PhDs. Inertial Measurement Units are not only a standard tool in robotics; the modern MEMS devices that we are using in our LPMS product line are actually a result of robotics research. So it seemed kind of odd that an important application case for our IMUs was not covered by our LpSensor software: namely, we didn’t provide a ROS driver.  We are very happy to tell you that such a driver exists, and we are happy that we didn’t have to write it ourselves: the Larics laboratory at the University of Zagreb are avid users of both ROS and our LPMS-U2 sensors. So, naturally, they developed a ROS driver, which they provide on their GitHub site.  Recently, I had a chance to play with it, and the purpose of this blog post is to share my experiences with you, in order to get you started with ROS and LPMS sensors on your Ubuntu Linux system.

Installing the LpSensor Library

Please check our download page for the latest version of the library; at the time of this writing it is 1.3.5. I downloaded it and then followed these steps to unpack and install it:
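The archive and package names below are assumptions based on version 1.3.5 for 64-bit Ubuntu; adjust them to match your actual download:

    # Unpack the downloaded archive (file names may differ for your version)
    tar xzf LpSensor-1.3.5-Linux-x86-64.tar.gz
    # Install the Debian package contained in the archive
    sudo dpkg -i LpSensor-1.3.5-Linux-x86-64/LpSensor-1.3.5-Linux-x86-64.deb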

I also installed libbluetooth-dev, because without Bluetooth support, my LPMS-B2 would be fairly useless.
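On Ubuntu, the Bluetooth development headers come straight from the standard package repositories:

    # Bluetooth development headers, needed for Bluetooth sensors such as the LPMS-B2
    sudo apt-get install libbluetooth-dev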

Setting up ROS and a catkin Work Space

If you don’t already have a working ROS installation, follow the ROS Installation Instructions to get started. If you already have a catkin work space, you can of course skip this step and substitute your own in what follows.  The work space is created as follows; note that you run catkin_init_workspace inside the src sub-directory of your work space.
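A minimal sketch, assuming the work space lives in ~/catkin_ws (the location is an assumption; any directory works):

    # Create the work space and initialize it from within the src sub-directory
    mkdir -p ~/catkin_ws/src
    cd ~/catkin_ws/src
    catkin_init_workspace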

Downloading and Compiling the ROS Driver for LPMS IMUs

We can now download the driver sources from github. The driver optionally makes use of an additional ROS module by the Larics laboratory which synchronizes time stamps between ROS and the IMU data stream.  Therefore, we have to clone two git repositories to obtain all prerequisites for building the driver.
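Assuming the work space from above, the two clones look something like this (the repository names are assumptions; please use the actual links from the Larics github site):

    cd ~/catkin_ws/src
    # The LPMS IMU driver itself
    git clone https://github.com/larics/lpms_imu.git
    # The optional time-stamp synchronization module
    git clone https://github.com/larics/timesync_ros.git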

That’s it. Building is as simple as running catkin_make, but you should set up the ROS environment before that.  If you haven’t, here’s how to do that:
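A sketch assuming ROS Kinetic; adjust the distribution name to your installation:

    # Set up the ROS environment (replace "kinetic" with your ROS distribution)
    source /opt/ros/kinetic/setup.bash
    # Build everything in the work space
    cd ~/catkin_ws
    catkin_make
    # Make the freshly built packages visible to ROS
    source devel/setup.bash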

This should go smoothly. Time for a test.

Not as Cool as LpmsControl, but Very Cool!

Now that we are set up, we can harness all of the power and flexibility of ROS. I’ll simply show you how to visualize the data using standard ROS tools, without any further programming.  You will need two virtual terminals.  In the first, start roscore if you don’t have it running yet.  In the second, we start rqt_plot in order to see the data from our IMU, and the lpms_imu_node which provides it.  In the box below you can see the commands I use to connect to my IMU. You will have to replace the _sensor_model and _port strings with the values corresponding to your device.  Maybe it’s worth pointing out that the second parameter is called _port because for a USB device it would correspond to its virtual serial port (typically /dev/ttyUSB0).
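Something like the following, where the package name, sensor model string and Bluetooth address are placeholders to be replaced with your device’s values:

    # Terminal 1: start the ROS master, if it isn't running yet
    roscore

    # Terminal 2: start the driver node; for my LPMS-B2, _port is its Bluetooth address
    rosrun lpms_imu lpms_imu_node _sensor_model:="DEVICE_LPMS_B2" _port:="00:11:22:33:44:55"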

Once you enter these commands, you will see the familiar startup messages of LpSensor, as in the screenshot below. As you can see, the driver connected to my LPMS-B2 IMU right away. If you cannot connect, maybe Bluetooth is turned off or you didn’t enter the information needed to connect to your IMU.  Once you have verified the parameters, you can store them in your launch file or adapt the source code accordingly.

Screenshot of starting the LPMS ROS node

The lpms_imu_node uses the standard IMU and magnetic field message types provided by ROS, and it publishes them on the imu topic.  That’s all we need to visualize the data in real time.  Below you can see how easy that is in rqt_plot. Not as cool as LpmsControl, but still fairly cool. Can you guess how I moved my IMU?
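For reference, a real-time plot like the one in the animation below can be brought up with a single command (the exact field path is an assumption; pick whichever fields of the message you want to watch):

    # Plot the three angular velocity components of the imu topic in real time
    rqt_plot /imu/angular_velocity/x:y:z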

animation of how to display LPMS sensor data in ROS

Please get in touch with us if you have any questions or if you found this useful for your own projects.

Update: Martin Günther from the German Research Center for Artificial Intelligence was kind enough to teach me how to pass ROS parameters on the command line.  I’ve updated the post accordingly.