We provide software for real-time interpretation of the large point cloud data streams produced by Velodyne HD LIDAR sensors. This includes point-cloud-based object detection and environment modeling for safe robot navigation and intelligent driver assistance systems.
We also have a strong focus on machine vision, with applications including fast GPU-based robust point feature extraction, road detection for marked and unmarked roads, and visual tracking and state estimation of objects in full 3D.
To benefit from the complementary information provided by vision and depth sensors, we also offer solutions for synchronization, calibration, and fusion of color image data and 3D Velodyne point clouds. Fusion can proceed at different levels: at the raw-data level (colored 3D points, depth-annotated pixels in color images), at the feature level (colored environment models), or at the object level, by mixing the estimates of objects tracked in both camera images and point clouds.
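To illustrate raw-data-level fusion, the sketch below shows the standard pinhole projection step that underlies it: LIDAR points are transformed into the camera frame with extrinsic calibration parameters, projected into the image with the intrinsic matrix, and annotated with the color of the pixel they land on. This is a minimal, generic sketch, not our production pipeline; the function name `color_points` and the calibration inputs `K`, `R`, `t` are illustrative placeholders.

```python
import numpy as np

def color_points(points_lidar, image, K, R, t):
    """Project LIDAR points into a camera image and attach pixel colors.

    points_lidar : (N, 3) XYZ points in the LIDAR frame
    image        : (H, W, 3) color image
    K            : (3, 3) camera intrinsic matrix
    R, t         : rotation (3, 3) and translation (3,), LIDAR -> camera frame
    Returns an (M, 6) array of [x, y, z, r, g, b] for points visible in the image.
    """
    # Transform points into the camera coordinate frame.
    pts_cam = points_lidar @ R.T + t
    # Keep only points in front of the camera (positive depth).
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]
    pts_lidar = points_lidar[in_front]
    # Pinhole projection: apply K, then divide by depth to get pixel coords.
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    px = np.round(uv).astype(int)
    # Discard projections that fall outside the image bounds.
    h, w = image.shape[:2]
    visible = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    colors = image[px[visible, 1], px[visible, 0]]
    return np.hstack([pts_lidar[visible], colors.astype(float)])
```

The same projection, run in the opposite direction, yields the depth-annotated pixels mentioned above: each image pixel that receives a projected point inherits that point's range.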
All our methods have been successfully applied to different autonomous robot platforms in challenging urban and off-road scenarios.