Vision-Based Inertial Navigation
A high-precision vision-based inertial navigation software package is being developed to enhance the performance of an Inertial Navigation System (INS), or to allow navigation in GPS-denied environments. Traditionally, an INS consists of an Inertial Measurement Unit (IMU), which measures body-frame accelerations and rotation rates, and a GPS receiver, which measures position and velocity in an Earth-fixed coordinate frame. A Kalman filter combines the measurements from these sensors and maintains a time history of the inertial position and orientation of the vehicle. The GPS measurements allow the filter to estimate the time-varying biases present in the IMU data. Ultimately, the precision of the navigation solution is determined by the measurement noise of the IMU, so obtaining a high-quality estimate of inertial pose requires a very expensive, high-grade IMU. Additionally, if the GPS signal is lost, the IMU biases can no longer be estimated and the inertial navigation solution accumulates bias error until GPS is reacquired.
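As a rough illustration of the bias-estimation mechanism described above, the following is a minimal one-dimensional sketch, not the actual flight software: a Kalman filter propagates position and velocity with a biased accelerometer and corrects with GPS position fixes, which makes the accelerometer bias observable. All noise levels, the 100 Hz IMU rate, and the 1 Hz GPS rate are assumed values chosen for illustration.

    # Toy 1-D GPS/INS fusion: a Kalman filter estimates the accelerometer bias
    # because GPS position fixes contradict the drift that the bias produces.
    import numpy as np

    dt = 0.01                      # IMU sample period [s] (assumed 100 Hz)
    true_bias = 0.2                # accelerometer bias [m/s^2], for illustration
    rng = np.random.default_rng(0)

    # State x = [position, velocity, accel bias]; the filter subtracts its own
    # bias estimate during propagation (third column of F) and adds the measured
    # specific force through B.
    F = np.array([[1.0, dt, -0.5 * dt**2],
                  [0.0, 1.0, -dt],
                  [0.0, 0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt, 0.0])
    H = np.array([[1.0, 0.0, 0.0]])          # GPS observes position only
    Q = np.diag([1e-6, 1e-4, 1e-8])          # process noise (tuning assumption)
    R = np.array([[4.0]])                    # GPS position variance [m^2]

    x = np.zeros(3)                          # filter estimate
    P = np.diag([10.0, 1.0, 1.0])            # initial covariance
    true_pos = 0.0                           # vehicle held stationary in this toy

    for k in range(6000):                    # 60 s of IMU data
        accel_meas = true_bias + 0.05 * rng.standard_normal()   # biased, noisy IMU
        x = F @ x + B * accel_meas           # propagate
        P = F @ P @ F.T + Q
        if k % 100 == 0:                     # 1 Hz GPS fix
            z = true_pos + 2.0 * rng.standard_normal()
            y = z - H @ x                    # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            x = x + (K @ y).ravel()
            P = (np.eye(3) - K @ H) @ P

    # The bias estimate converges toward the true value over the GPS fixes.
    print(f"estimated accel bias: {x[2]:.3f} m/s^2 (true {true_bias})")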
Using a camera on the vehicle to observe and track features in the environment can alleviate these problems. A high-resolution camera paired with an inexpensive IMU can provide a navigation solution of the same or higher quality than a far more expensive IMU alone. Additionally, if the inertial-frame coordinates of some of the observed features are known, navigation can continue with high fidelity even without GPS.
The LASR vision-based inertial navigation software is under development with the goal of providing a high-fidelity navigation solution to enable aerial mapping with the HD6D LIDAR sensor. The software will be used in combination with a VectorNav VN-200 INS. Initial flight tests have occurred in a Cessna O-2A Skymaster owned by the Texas A&M Flight Research Lab (FRL).
Computational Vision Pipeline
To solve the Simultaneous Localization and Mapping (SLAM) problem is to calculate one’s own six-degree-of-freedom motion with respect to an unknown scene while simultaneously generating a three-dimensional map of that scene. LASR_CV is a computational vision pipeline for solving the SLAM problem in real time. A modular and extensible framework, LASR_CV is designed for rapid prototyping of estimation and computer vision algorithms and sensors. It consists of several modules operating in parallel to generate frame-rate pose estimates and geometric models. This modular architecture decouples individual research topics from the SLAM problem as a whole, enabling developers and researchers to test their software or hardware easily. Each module has “hooks” into its internal data to enable algorithmic tuning or report generation; when combined with inertial measurements, detailed error studies of individual sensors or algorithms can be performed.
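The listing below is a hypothetical sketch of this kind of modular, hook-based architecture. The class and method names (Module, register_hook, Pipeline, FeatureTracker) are illustrative only and are not the actual LASR_CV API.

    # Illustrative module/pipeline skeleton with data "hooks" (not the LASR_CV API).
    from typing import Any, Callable

    class Module:
        """A pipeline stage that exposes hooks into its internal data."""
        def __init__(self, name: str):
            self.name = name
            self._hooks: list[Callable[[str, Any], None]] = []

        def register_hook(self, fn: Callable[[str, Any], None]) -> None:
            # Hooks receive intermediate data for tuning or report generation.
            self._hooks.append(fn)

        def emit(self, tag: str, data: Any) -> None:
            for fn in self._hooks:
                fn(tag, data)

        def process(self, frame: Any) -> Any:    # overridden by each concrete module
            raise NotImplementedError

    class FeatureTracker(Module):
        def process(self, frame):
            features = [(10, 20), (42, 7)]       # placeholder feature detections
            self.emit("features", features)      # expose internals without touching the pipeline
            return features

    class Pipeline:
        def __init__(self, modules: list[Module]):
            self.modules = modules

        def run(self, frame):
            data = frame
            for m in self.modules:               # modules could equally run in parallel threads
                data = m.process(data)
            return data

    # Usage: attach a logging hook for an error study, then push one frame through.
    tracker = FeatureTracker("tracker")
    tracker.register_hook(lambda tag, d: print(f"[tracker/{tag}] {d}"))
    Pipeline([tracker]).run(frame=None)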
Object Recognition
Custom object recognition algorithms have been developed to calculate real-time estimates of the relative pose of target objects observed with vision-based sensors. This pose information can be used for proximity navigation in the vicinity of a known object. The software uses an iterative least-squares solution to estimate the relative pose of the target with respect to the sensor by comparing measured geometric point clouds with a known geometric model of the target. The software is modular, and development has been largely application-driven. Target objects can be modeled analytically as three-dimensional parametric surfaces, or empirically as a point cloud or wireframe model.
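As a generic illustration of iterative least-squares pose estimation against a known model, the sketch below runs an ICP-style loop that alternates nearest-point correspondence with a closed-form (SVD-based) least-squares alignment. The actual software can work against parametric surface models rather than a stored point cloud, so this is an analogy to the approach, not the implementation.

    # ICP-style iterative least-squares pose estimation of a measured cloud
    # against a known model cloud (a generic sketch, not the LASR software).
    import numpy as np

    def estimate_pose(measured: np.ndarray, model: np.ndarray, iters: int = 30):
        """Estimate R, t such that R @ measured_i + t ~ model (both Nx3 arrays)."""
        R, t = np.eye(3), np.zeros(3)
        for _ in range(iters):
            moved = measured @ R.T + t
            # Correspondence: nearest model point to each measured point (brute force).
            d = np.linalg.norm(moved[:, None, :] - model[None, :, :], axis=2)
            matched = model[np.argmin(d, axis=1)]
            # Closed-form least-squares alignment of the matched pairs (Kabsch/SVD).
            mu_m, mu_q = moved.mean(axis=0), matched.mean(axis=0)
            H = (moved - mu_m).T @ (matched - mu_q)
            U, _, Vt = np.linalg.svd(H)
            S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            dR = Vt.T @ S @ U.T                 # incremental rotation
            dt = mu_q - dR @ mu_m               # incremental translation
            R, t = dR @ R, dR @ t + dt          # compose with the running estimate
        return R, t

    # Toy check: apply a known transform to a random cloud and recover it (approximately).
    rng = np.random.default_rng(1)
    measured = rng.standard_normal((200, 3))
    ang = np.deg2rad(10.0)
    R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                       [np.sin(ang),  np.cos(ang), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.1, -0.2, 0.3])
    model = measured @ R_true.T + t_true
    R_est, t_est = estimate_pose(measured, model)
    print("estimated t:", np.round(t_est, 3), "vs true t:", t_true)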
A recent implementation is being used to provide five-degree-of-freedom relative pose estimates of a mock upper-stage rocket nozzle for ground-based simulation of space debris mitigation missions. The rocket nozzle is modeled as a parametric surface of revolution, allowing rapid convergence to a pose estimate.
Vision-Based Sensors
Passive Stereo
By triangulating between two cameras separated by a fixed baseline distance, or between images from a single camera displaced by motion through space, stereo correspondence matches of the same scene points yield a three-dimensional reconstruction of the scene. These 3D reconstructions may be used as input to the TAMU-CV Computer Vision Pipeline, or as standalone tools for robotic situational awareness and path planning.
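For reference, the depth recovered by triangulation for a rectified stereo pair follows the textbook relation Z = fB/d, where f is the focal length in pixels, B the baseline, and d the disparity. The snippet below is a minimal sketch of that conversion, not the LASR implementation; the focal length and baseline in the example are arbitrary.

    # Textbook depth-from-disparity for a rectified stereo pair.
    import numpy as np

    def disparity_to_depth(disparity_px, focal_px, baseline_m):
        """Z = f * B / d; disparity in pixels, baseline in meters, depth in meters."""
        d = np.asarray(disparity_px, dtype=float)
        return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-9), np.inf)

    # Example: 1000 px focal length, 0.5 m baseline, 20 px disparity -> 25 m range.
    print(disparity_to_depth(20.0, focal_px=1000.0, baseline_m=0.5))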
LASR Lab has extensively studied the strengths and weaknesses of stereo-based mapping for space applications. One of the largest drawbacks of stereo correspondence matching is its computational expense. To address this issue, a novel dense stereo algorithm that performs correspondence matching in the Fourier frequency domain has been implemented in a parallel computing environment on the Graphics Processing Unit (GPU) using NVIDIA’s Compute Unified Device Architecture (CUDA); its output is high-fidelity local 3D models at frame rate.
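The GPU algorithm itself is not reproduced here, but a generic phase-correlation example illustrates the underlying idea of recovering a patch shift, i.e., a stereo disparity, in the Fourier frequency domain, which is the kind of dense, regular computation that maps naturally onto CUDA. The patch sizes and shifts below are arbitrary.

    # Generic phase correlation: recover the translation between two image patches
    # in the frequency domain (an illustration, not the published GPU algorithm).
    import numpy as np

    def phase_correlation_shift(patch_a: np.ndarray, patch_b: np.ndarray):
        """Return the integer (row, col) shift that best re-aligns patch_b to patch_a."""
        Fa, Fb = np.fft.fft2(patch_a), np.fft.fft2(patch_b)
        cross_power = Fa * np.conj(Fb)
        cross_power /= np.abs(cross_power) + 1e-12      # keep phase only
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap indices so that shifts past the midpoint come out negative.
        return tuple(int(p - s) if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))

    # Toy example: translate a random patch left by 3 px and recover the shift.
    rng = np.random.default_rng(2)
    a = rng.standard_normal((64, 64))
    b = np.roll(a, shift=-3, axis=1)
    print(phase_correlation_shift(a, b))   # (0, 3): shifting b right by 3 px re-aligns it with a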
Active Stereo
Active stereo operates by projecting a known pseudo-random infrared pattern onto the scene and observing how that pattern registers with an infrared camera. Separated by a known baseline distance, the camera/projector pair allows depth to be computed via stereo triangulation. A color camera provides native surface texture for the resulting three-dimensional geometric point cloud. Active stereo addresses many of the failure modes of traditional passive stereo, and is particularly attractive for space applications due to its low cost and small size. LASR Lab uses the Microsoft Kinect and Asus Xtion active stereo sensors.
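The sketch below is a heavily simplified illustration of structured-light ranging, not the Kinect’s proprietary pipeline: each block of the observed infrared scanline is matched against a stored reference-pattern scanline, and the resulting disparity is converted to depth exactly as in passive stereo. The focal length and baseline in the example are assumed, roughly Kinect-like values.

    # Simplified structured-light ranging: 1-D block matching of the observed
    # IR scanline against a stored reference pattern, then triangulation.
    import numpy as np

    def match_block(observed, reference, x, block=9, search=32):
        """Return the disparity (px) of the block centered at x, by sum of absolute differences."""
        half = block // 2
        tpl = observed[x - half : x + half + 1]
        best_d, best_cost = 0, np.inf
        for d in range(search):
            ref = reference[x - d - half : x - d + half + 1]
            cost = np.abs(tpl - ref).sum()
            if cost < best_cost:
                best_d, best_cost = d, cost
        return best_d

    rng = np.random.default_rng(3)
    reference = rng.random(256)              # pseudo-random projected pattern (1-D slice)
    observed = np.roll(reference, 7)         # scene depth shifts the pattern by 7 px here
    d = match_block(observed, reference, x=128)
    print("disparity:", d, "-> depth:", 580.0 * 0.075 / d, "m")   # f = 580 px, baseline = 7.5 cm (assumed)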
HD6D
LASR Lab, in collaboration with Systems and Processes Engineering Corporation (SPEC), is developing a novel LIDAR sensor for space applications. The sensor is capable of delivering high-definition, full-color point clouds at ranges from 1 m to several kilometers. Designed around a rotating prism system to distribute the outgoing laser light and parallel data receivers to process the returns, the sensor is capable of 12 million independent range measurements per second. The eye-safe system combines the high data rate and large field of view of a flash LIDAR with the accuracy and range of a scanning LIDAR in a single compact package.