We provide software for real-time interpretation of large LiDAR point clouds. This includes point-cloud-based object detection and environment modeling for safe robot navigation and intelligent driver assistance systems.

Object tracking and state estimation with subsequent classification of multiple traffic participants (green), including a model of the local ground plane (light green points).

Generation of a dense 3D environment model by fusing 3D LiDAR point clouds with images from a color camera. The slope of the terrain is color-coded from dark red (small slope) to bright yellow (large slope).
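The slope coding described above can be sketched as follows: estimate the slope of a gridded height map from finite-difference gradients, then map it onto a red-to-yellow gradient. The saturation angle and the exact color ramp are illustrative assumptions, not the values used in our system.

```python
import numpy as np

def terrain_slope_deg(height, cell_size):
    """Slope of a gridded height map, in degrees: the angle between the
    local surface normal and the vertical, from finite differences."""
    dz_dy, dz_dx = np.gradient(height, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def slope_to_color(slope_deg, max_slope_deg=45.0):
    """Map slope to RGB in [0, 1], from dark red (flat) to bright yellow
    (steep). max_slope_deg is an assumed saturation point."""
    t = np.clip(np.asarray(slope_deg) / max_slope_deg, 0.0, 1.0)
    r = 0.5 + 0.5 * t            # red brightens with slope
    g = t                        # green grows toward yellow
    b = np.zeros_like(t)
    return np.stack([r, g, b], axis=-1)
```

A flat patch maps to dark red `(0.5, 0, 0)`; a 45-degree ramp saturates at bright yellow `(1, 1, 0)`.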

We also have a strong focus on machine vision, with applications including fast GPU-based robust point feature extraction, road detection for both marked and unmarked roads, and visual tracking and state estimation of objects in full 3D.
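A common building block for tracking and state estimation in full 3D is a linear Kalman filter with a constant-velocity motion model. The sketch below is a minimal, generic version of that idea; the state layout, time step, and noise levels are illustrative placeholders, not parameters of our systems.

```python
import numpy as np

class ConstantVelocityKF:
    """Linear Kalman filter with a 3D constant-velocity motion model.
    State is [px, py, pz, vx, vy, vz]; only position is measured."""

    def __init__(self, x0, dt=0.1, q=1.0, r=0.5):
        self.x = np.asarray(x0, dtype=float)
        self.P = np.eye(6)                         # state covariance
        self.F = np.eye(6)                         # transition: p += v * dt
        self.F[:3, 3:] = dt * np.eye(3)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = q * np.eye(6)                     # process noise (placeholder)
        self.R = r * np.eye(3)                     # measurement noise (placeholder)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x
```

Feeding it position detections from either the camera or the LiDAR pipeline yields a smoothed position and velocity estimate per tracked object.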

Visual detection of rural roads under poor lighting conditions.

Obstacle-avoiding autonomous navigation based on the multi-layer environment model generated from LiDAR and camera data. Shown are several feasible driving primitives (colored trajectories) that are analyzed for drivability in all layers of the environment model (here, the layer representing vegetation probabilities in gray scale).
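Checking a driving primitive against one grid layer can be sketched as a cell lookup along the trajectory: the primitive passes if every cell it crosses stays below a cost threshold. Function names, the grid convention, and the threshold are illustrative assumptions, not the actual planner interface.

```python
import numpy as np

def primitive_is_drivable(trajectory_xy, layer, origin, cell_size, max_cost=0.5):
    """Check one driving primitive against one layer of a grid-based
    environment model (e.g. vegetation probabilities in [0, 1]).

    trajectory_xy: (N, 2) sampled points along the primitive, world frame.
    layer:         (H, W) cost grid; origin is the world position of cell (0, 0).
    """
    idx = np.floor((np.asarray(trajectory_xy) - origin) / cell_size).astype(int)
    h, w = layer.shape
    inside = (idx[:, 0] >= 0) & (idx[:, 0] < w) & (idx[:, 1] >= 0) & (idx[:, 1] < h)
    if not inside.all():
        return False                              # primitive leaves the mapped area
    return bool((layer[idx[:, 1], idx[:, 0]] <= max_cost).all())
```

In a full system this check would be repeated for every layer of the environment model, and only primitives drivable in all layers remain candidates.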

To benefit from the complementary information provided by vision and depth sensors, we also offer solutions for synchronization, calibration, and fusion of color image data and 3D LiDAR point clouds. Fusion can proceed at different levels: at the raw-data level (colored 3D points, depth-annotated pixels in color images), at the feature level (colored environment model), or at the object level, by mixing the estimates of objects tracked in both camera images and point clouds.
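Raw-data-level fusion can be sketched with the standard pinhole projection: transform LiDAR points into the camera frame and attach the color of the pixel each point lands on. The function name is hypothetical; the intrinsic matrix `K` and the extrinsic transform `T_cam_lidar` are assumed to come from calibration.

```python
import numpy as np

def colorize_points(points_lidar, image, K, T_cam_lidar):
    """Attach image colors to 3D LiDAR points (raw-data-level fusion sketch).

    points_lidar: (N, 3) points in the LiDAR frame.
    image:        (H, W, 3) color image.
    K:            (3, 3) camera intrinsic matrix.
    T_cam_lidar:  (4, 4) rigid transform, LiDAR frame -> camera frame.
    Returns the points that project into the image, and their colors.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # into camera frame
    in_front = pts_cam[:, 2] > 0                         # discard points behind camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)          # pixel coordinates
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return points_lidar[in_front][valid], image[uv[valid, 1], uv[valid, 0]]
```

The result is exactly the "colored 3D points" representation mentioned above; depth-annotated pixels are the inverse mapping over the same calibration.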

All our methods have been successfully applied to different autonomous robot platforms in challenging urban and offroad scenarios.