New Features in LPVR Version 4.8

Introduction

Our LPVR series is the primary solution on the market for users who want to expand the scope of their virtual reality or mixed reality headsets by using external tracking systems such as ART, OptiTrack or Vicon. Use cases are varied, ranging from entertainment (location-based VR) and engineering (ergonomic studies in AR) to helicopters and virtual cars actually driving on Japan’s public roads. At LP-Research, we have continuously developed the LPVR series of solutions over the past years: we have expanded its scope, added support for new headsets, and included new functions.

The image below shows an LPVR installation based on design content created by automotive prototyping company Phiaro Inc. in Tokyo, Japan.

The latest release is version 4.8.0, which we released in June of 2023.  As usual, it comes in two flavors:

  • LPVR-CAD, which supports stationary use cases, and
  • LPVR-DUO, which is our variant for moving platforms, be they cars or simulators.

We support all the major tethered headsets (SteamVR headsets, Pimax, Varjo).  We also support Meta Quest series headsets and the Vive Focus 3 with our LPVR-Air series of products. If you have a current support contract, you are eligible for an update.

A Brief Overview of LPVR-CAD and LPVR-DUO

It’s perhaps best to summarize some of the capabilities that our products add to the various commercial headsets; for more details, feel free to visit the product pages for LPVR-CAD and LPVR-DUO, respectively. With our solutions you can:

  • Cover arbitrarily large areas and have VR scenes take place in them
  • Have an arbitrary number of users interact in such a space
  • Do VR/AR inside a car or any other moving platform
  • Track your user with sub-millimeter precision, together with any number of props, with no perceivable latency
  • Use SteamVR controllers without the Lighthouses

We can do this because our proprietary sensor fusion algorithms allow us to combine the measurements of high-precision motion tracking camera systems with the measurements of the headset’s Inertial Measurement Unit (IMU). For the case of a moving platform, we can additionally incorporate data from an IMU installed on the platform to deliver responsive, accurate performance in those circumstances as well.

New Features

For a short overview of the changes in each version, please refer to our Release Notes. Here we will give some highlights and dig into some details. LPVR 4.8.0 is the result of continuous development in the half year or so since our previous releases.

New GUI Organization and Visual LPVR-DUO Configuration Interface

The most obvious change for users is the reorganized GUI, which streamlines the setup, completely does away with the need to enter any JSON code, and presents a more cleanly organized interface. Especially for our LPVR-DUO users this means a vast simplification of the system. We have kept the old configuration interface as an option to guarantee compatibility with existing workflows, but we don’t think that users will have to resort to it. Please let us know if your experience is different. If your headset tracking body is already calibrated, you should now be able to set up LPVR-DUO with some five mouse clicks.

When you load the configuration, it will look something like this. Note that you are no longer led to a JSON editor where you have to enter the configuration manually. Instead you are greeted by a friendly, informative GUI.

At the bottom of the page, you will see links to the Documentation, a Calibration screen, and an Expert Mode, which is essentially the old JSON editor. The Calibration screen is used for the setup of the platform IMU and, in the usual case, simplifies it down to a few mouse clicks. No more looking for quaternion values in log files! Please check out the corresponding documentation.

Varjo Headset Eye Point Adjustments

Together with Varjo and with the cooperation of several of our customers, we were able to identify and correct some imprecisions in the handling of the headset’s position. These showed up as small coordinate mismatches between the optical tracking coordinates and the coordinates reported to VRED, Unity, etc. They also led to some unnatural motion of AR overlays, especially when turning the head.

Optimal performance requires updating both Varjo Base to at least version 3.10 and LPVR to at least version 4.8.0. Updating Varjo Base fixes the underlying issue; updating LPVR corrects the interfacing. If you cannot update Varjo Base, you can still update LPVR-CAD-Varjo to version 4.8.0 and enable a workaround: open the Varjo Base configuration GUI on the System tab, add patchPositionBug=true in the field labeled Additional Settings, and click the “Submit” button. Note that while this works around the issue in Varjo Base versions before 3.10, we do not recommend using this option with updated versions of Varjo Base.

Varjo Configuration Refinements

Different environments call for different setups. Some of our users use administrator accounts, others have multiple users who should share the same configuration. We have updated the way we organize on-disk storage of the configuration to address these possibilities. In particular, you can now establish a system-wide configuration default and override it per user. In the case of LPVR-CAD, the configuration is entered inside Varjo Base by default, but to allow users greater flexibility, it has always been possible to use our web interface or files on disk to perform the configuration. While these are not the preferred choice, it was previously not possible to see from Varjo Base whether the on-disk configuration is in use. We have added a prominent status indicator that points to the configuration in use, as in the screenshot below. In the case of LPVR-DUO the configuration is always loaded from disk, as the added flexibility of our configuration page is required; in LPVR-CAD the user has to opt in. We describe the process briefly below.

The user can prepare a global, system-wide default configuration in %ProgramData%/Varjo/VarjoTracking/Plugins/LP-Research/LPVR-CAD-Varjo/configuration/settings.json. Changes on the configuration page will not change this configuration, but will instead be written to the per-user configuration %LocalAppData%/LP-Research/LPVR-CAD-Varjo/settings.json. If either file is present, the configuration inside Varjo Base will be ignored and the user can use their web browser to configure LPVR-CAD. In this way, an administrator can prepare a configuration that works with the setup, and any user can customize it to their needs. For LPVR-DUO, there is no configuration interface inside Varjo Base, instead the user will always point their web browser to http://localhost:7119. Here, a system-wide default configuration can be placed in %ProgramData%/Varjo/VarjoTracking/Plugins/LP-Research/LPVR-DUO-Varjo/configuration/settings.json, and a per-user override can sit in %LocalAppData%/LP-Research/LPVR-DUO-Varjo/settings.json. The web interface will always update this per-user file.
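As an illustration, an administrator could promote a per-user configuration that is known to work to the system-wide default location. The following is only a sketch using standard Windows commands for LPVR-CAD-Varjo; for LPVR-DUO-Varjo, substitute the product name in the paths accordingly.

    rem Run from an elevated command prompt, since %ProgramData% is write-protected for normal users
    mkdir "%ProgramData%\Varjo\VarjoTracking\Plugins\LP-Research\LPVR-CAD-Varjo\configuration"

    rem Copy the current per-user settings there as the new system-wide default
    copy "%LocalAppData%\LP-Research\LPVR-CAD-Varjo\settings.json" "%ProgramData%\Varjo\VarjoTracking\Plugins\LP-Research\LPVR-CAD-Varjo\configuration\settings.json"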

LPVR-DUO Demonstration

In order to familiarize you with the neighborhood of our office and, more importantly, to show what can be done with LPVR-DUO, here is an in-car mixed reality demonstration. The video screens on the glove box may look almost real, but they are an overlay superimposed on the see-through camera image of a Varjo XR-3 using an out-of-the-box LPVR-DUO setup. Notice how the screens remain firmly in place during turns of the user’s head as well as turns of the car itself, even when diving into some of the steeper roads of the Motoazabu area in central Tokyo.

How to Connect an LP-Research IMU to ROS (Update)

Introduction

This article describes how to connect an LP-Research inertial measurement unit (IMU) to a Robot Operating System (ROS) node. We are happy to announce that our IMU ROS sensor driver has been accepted into the official ROS package repository. ROS is an open-source de facto standard for robotics sensing and control.

With the package openzen_sensor now provided as part of the ROS distribution Melodic Morenia, it just became a whole lot easier to use our sensors in robotic applications.

Note: This article covers our node for ROS 1. Please see further information regarding our ROS 2 node at the end of this article. This post is a follow-up to our previous ROS driver release.

Published ROS Topics

These are the ROS topics which are published by the OpenZen ROS driver:

  • /imu/data: Inertial data from the IMU. Includes calibrated acceleration, calibrated angular rates and orientation. The orientation is always a unit quaternion.
  • /imu/mag: Magnetometer reading from the sensor.
  • /imu/nav: Global position from a satellite navigation system. Only available if the IMU includes a GNSS chip.
  • /imu/is_autocalibration_active: Latched topic indicating if the gyro autocalibration feature is active.

Installation of the LPMS ROS Driver

All that’s needed is to install the package openzen_sensor via your Linux distribution’s package manager. In Ubuntu, with the ROS Melodic Morenia distribution installed, use the following command:
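    # Install the OpenZen ROS driver from the ROS package repository.
    # The binary package name below follows the usual ROS naming convention
    # for the openzen_sensor package; adjust it if your setup differs.
    sudo apt install ros-melodic-openzen-sensor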

Once the IMU ROS driver package is installed, we use the following command to start the OpenZen node:
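    # Start the OpenZen sensor node; it connects to the first IMU it finds.
    # The exact executable name may differ between driver versions, so please
    # check the driver's README if this invocation does not match your install.
    rosrun openzen_sensor openzen_sensor_node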

This will automatically connect to the first available IMU and start streaming its accelerometer, gyroscope and magnetometer data to ROS. If your sensor is equipped with a GPS unit, global positioning information will also be transferred to ROS.

Once a sensor has been connected via the motion sensor driver, the data from the sensor is exported via ROS topics which can be consumed by other ROS components such as a navigation and path planning system.

Outputting IMU sensor values on the command line can now be easily done with:
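    # Print the published inertial data stream on the console
    rostopic echo /imu/data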

and the data can be plotted with:
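    # Plot the three angular velocity components with rqt_plot
    rosrun rqt_plot rqt_plot /imu/data/angular_velocity/x /imu/data/angular_velocity/y /imu/data/angular_velocity/z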

More information on the usage of the OpenZen IMU ROS driver can be found in the repository of the driver.

The image above shows an angular velocity output graph from an LPMS-IG1 sensor in the ROS MatPlot application.

ROS 2 Release

We have recently released a ROS 2 version of our OpenZen ROS node. The node is not part of an official ROS 2 release yet, but it works well on the latest release, Foxy. For further information and source code see the OpenZenROS2 repository.

LPNAV – Outdoor Operation and 2D Map Building for Automatic Guided Vehicles (AGV)

LPNAV for Flexible, Rapid AGV Deployment

LPNAV enables automatic guided vehicles (AGV) to rapidly understand their environment and be ready for safe and efficient operation; no calibration, manual map building or similar preparation is required.

With the help of LPNAV, mobile logistics platforms can operate (localize) in both indoor and outdoor environments with the same set of sensors (Figure 1) and a unified map, e.g. when transporting an item from inside a warehouse to a truck parked in front of the warehouse. This offers a big cost-saving potential for applications in which the transition from indoor to outdoor settings has so far required specialized equipment or manual handling.

Outdoor Localization

In a previous post we showed the capability of LPNAV to operate in a small, crowded indoor environment. After further optimization of the algorithm, we are now able to show the system working well in outdoor settings. Uncontrolled outdoor environments are particularly challenging, as lighting conditions can vary strongly and perception can be disturbed by pedestrians, passing cars and the like.

In the video above we show the following capabilities of the system:

  • LPNAV is able to build a 3D map of its environment and localizes itself in real time relative to its starting position.
  • Previously acquired map data can be used for localization. The map is automatically updated to reflect changes in the environment.
  • When manually placed at an arbitrary location on the map, an LPNAV-powered robot can instantly re-localize.
  • The system is robust to camera occlusions. Sensor fusion with IMU and odometry allows temporary operation without visual features.

Figure 1 – LPNAV combines information from visual SLAM (camera), inertial measurement unit (IMU), distance sensors (lidar, IR) and wheel encoder data to calculate low-latency, high-accuracy localization results. The robot maps its environment to both a 3D point cloud map and a human-readable 2D obstacle map.

2D Real-time Map Building

The 3D point-cloud maps built by LPNAV’s visual SLAM work well for computational localization of the robot inside its environment, but they are hard for humans to interpret intuitively. Therefore we added a feature to LPNAV that builds 2D maps of the robot’s environment based on information from its IR distance sensors. To achieve 2D wall and obstacle mapping at larger distances, a 2D lidar can be used as an alternative to the IR sensors.

The 2D real-time mapping capability is demonstrated in the video below:

LPNAV-VAC – Cost-Efficient Navigation System for AGVs

Introduction

We’re proud to announce a breakthrough result in the development of our LPNAV low-cost navigation system for small-sized automatic guided vehicles (AGV).

One focus area of LPNAV is vacuum cleaning robots, which require spatial understanding of their environment to calculate an optimum cleaning strategy. As vacuum cleaning robots are mainly consumer devices, solutions for this market need to be cost-efficient while maintaining state-of-the-art performance.

Figure 1 – The LPNAV-VAC development kit contains a robot platform, a dedicated computing unit, an IMU sensor and a camera

Development Platform

LPNAV-VAC combines three different data sources in order to calculate a robot’s position inside a room: an inertial measurement unit, data from the robot’s wheel encoders and video images from a camera installed on the robot (Figure 1). A central computing unit combines the information from these data sources to simultaneously create a map of the surroundings of the robot and calculate the position of the robot inside the room.

It is essential that the sensor fusion algorithm is able to dynamically update the map it is constructing. As new sensor information arrives, the map is continuously adapted to reflect an optimized view of the robot’s environment.

While this principle of simultaneous localization and mapping (SLAM) is an established method for some robot navigation systems, these solutions tend to rely on laser scanners (LIDAR) or vision-only reconstruction. The combination of all available data sources in the robot allows LPNAV-VAC to create high definition maps of the environment while using low-cost, off-the-shelf components.

First Demonstration

In the demonstration video above, my colleague and main developer of LPNAV-VAC is steering our AGV platform through the ground floor of our Tokyo office. While one side of the screen shows the view from the robot camera and the detected visual features, the other side shows the path of the robot through the environment. As the robot progresses through the room, a 3D map is created and continuously updated.

Note that the robot doesn’t lose tracking during turns, while driving over small steps in the room, or under changing lighting. Thomas moving around in front of the camera doesn’t disturb the LPNAV algorithm either.

Using this map and the robot’s position information, a path planning algorithm can find an optimum path for the robot to efficiently clean the room.

See-through Display First Look – LPVIZ (Part 3)

Virtual Dashboard Demonstration

This is a follow-up post to the introduction of our in-vehicle AR head-mounted display LPVIZ in part 1 and part 2.

To test LPVIZ, we created a simple demo scenario of an automotive virtual dashboard: a Unity scene with graphic elements commonly found on a vehicle dashboard, animated to make the scene look more realistic.

This setup is meant for static testing at our shop. For further experiments inside a moving vehicle, we are planning to connect the animated elements directly to car data (speed, etc.) communicated over the CAN bus.

The virtual dashboard is only a very simple example to show the basic functionality of LPVIZ. As described in a previous post, far more sophisticated applications can be implemented.

The video above was taken through the right-eye optical waveguide display of LPVIZ. We recorded it with a regular smartphone camera, so the quality is limited. Nevertheless, it confirms that the display is working and correctly shows the virtual dashboard.

The user is looking at the object straight ahead. If the user rotates their head or changes position, their view of the object changes perspective accordingly. An important point to mention is the high luminosity of the display: we made this recording with the interior lighting in our shop at its normal level and without any additional shade in front of the display.
