Projects

Online Perception-Aware Path Planning

Vision-based localization systems rely on highly textured areas to achieve accurate pose estimation. This path planner exploits the scene’s visual appearance (photometric information and texture) together with its 3D geometry to navigate a previously unknown environment. The plan is updated on the fly as new visual information is gathered, and the resulting trajectories significantly reduce pose uncertainty compared with trajectories that do not account for the robot’s perception.
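
As a rough illustration of the idea, the sketch below runs a Dijkstra search over a 2D grid whose step cost penalizes poorly textured cells. The grid, the per-cell texture score, and the weighting are hypothetical stand-ins for the planner's actual photometric and geometric terms, not the method itself.

```python
# Sketch: perception-aware grid planner (illustrative only).
# Assumes a 2D boolean occupancy grid and a per-cell "texture" score in [0, 1],
# both hypothetical stand-ins for the photometric information used by the planner.
import heapq
import numpy as np

def perception_aware_plan(occupancy, texture, start, goal, w_perception=2.0):
    """Dijkstra search where poorly textured cells cost extra, so the path
    prefers regions that help vision-based localization. Assumes goal is reachable."""
    rows, cols = occupancy.shape
    dist = {start: 0.0}
    parent = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, np.inf):
            continue
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or occupancy[nr, nc]:
                continue
            # Step cost: unit motion plus a penalty for low-texture cells.
            step = 1.0 + w_perception * (1.0 - texture[nr, nc])
            nd = d + step
            if nd < dist.get((nr, nc), np.inf):
                dist[(nr, nc)] = nd
                parent[(nr, nc)] = cell
                heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking parents back from the goal.
    path, cur = [goal], goal
    while cur != start:
        cur = parent[cur]
        path.append(cur)
    return path[::-1]
```

In this simplified picture, replanning on the fly amounts to rerunning the search whenever the texture map is updated with new observations.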


Unmanned Port Security Vehicle (UPSV)

The Unmanned Port Security Vehicle (UPSV) is a low-cost, easily deployable sensor platform designed for first-response and inspection scenarios in shallow-water and port environments. The system was developed by the Field Robotics Laboratory at the University of Hawai`i at Manoa.



Semantic Object Modeling

Modeling of arbitrary 3D objects from many partial observations. Vision-based object detection and automatic foreground segmentation are applied to RGB-D imagery to extract the 3D point cloud data corresponding to the object. Multiple such observations are collected and filtered, resulting in a point cloud model of the object.
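
A minimal sketch of the core geometric step, assuming the segmentation mask, camera intrinsics, and a common world frame are given (all hypothetical inputs here): masked depth pixels are back-projected into 3D, and the accumulated points are filtered with a simple voxel-grid downsample.

```python
# Sketch: accumulate segmented RGB-D observations into an object point cloud.
# The boolean segmentation mask, intrinsics, and frame alignment are assumed given.
import numpy as np

def backproject(depth, mask, fx, fy, cx, cy):
    """Lift masked depth pixels into 3D camera-frame points (pinhole model)."""
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def accumulate(observations, voxel=0.01):
    """Merge per-frame object points (already in a common world frame)
    and filter them with a simple voxel-grid downsample."""
    pts = np.concatenate(observations, axis=0)
    keys = np.floor(pts / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[idx]
```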


Stairway Modeling

Discovering stairways in the environment and assessing their traversability is important when planning paths for stair-climbing robots. RGB-D step edge detections are combined, and a generative model is fit to them, enabling the estimation of step dimensions and stairway properties.
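
One way to picture the fitting step, assuming step-edge detections are already available as (distance, height) pairs ordered from bottom to top (hypothetical inputs), is a least-squares fit of a uniform rise/run stairway model:

```python
# Sketch: fit a uniform rise/run stairway model to detected step edges.
# Each detection is the (horizontal distance, height) of one step edge,
# e.g. from RGB-D edge detection; the inputs here are hypothetical.
import numpy as np

def fit_stairway(edges):
    """Least-squares fit of height = h0 + i*rise and distance = d0 + i*run,
    where i is the step index (edges assumed ordered bottom to top)."""
    edges = np.asarray(edges, dtype=float)   # shape (N, 2): [distance, height]
    i = np.arange(len(edges))
    A = np.stack([i, np.ones_like(i)], axis=1).astype(float)
    (run, d0), _, _, _ = np.linalg.lstsq(A, edges[:, 0], rcond=None)
    (rise, h0), _, _, _ = np.linalg.lstsq(A, edges[:, 1], rcond=None)
    return {"rise": rise, "run": run, "origin": (d0, h0)}

# Example: four edges roughly 0.30 m deep and 0.17 m tall per step.
print(fit_stairway([(0.0, 0.0), (0.31, 0.18), (0.60, 0.34), (0.91, 0.52)]))
```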


Edge Voxels

Volumetric edge extraction from point cloud data as an analogue of visual edge detection in images. We apply a 3D structure tensor operation to voxelized spatial data and isolate the voxels corresponding to physical edges in the environment.
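
A rough sketch of the operation, with the occupancy grid and thresholds chosen purely for illustration: smooth the grid, form the structure tensor from its gradients, and keep voxels whose eigenvalue pattern (two large, one small) suggests a crease between surfaces.

```python
# Sketch: 3D structure tensor on a voxel grid to flag edge-like voxels.
# The occupancy grid, smoothing scale, and ratio threshold are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_voxels(occupancy, sigma=1.0, edge_ratio=0.3):
    """Classify voxels by the eigenvalues of the smoothed structure tensor:
    two large eigenvalues and one small one roughly indicates a physical edge."""
    grid = gaussian_filter(occupancy.astype(float), sigma)
    gx, gy, gz = np.gradient(grid)
    grads = np.stack([gx, gy, gz], axis=-1)        # (X, Y, Z, 3)
    # Outer products of the gradient, smoothed component-wise.
    T = grads[..., :, None] * grads[..., None, :]  # (X, Y, Z, 3, 3)
    for a in range(3):
        for b in range(3):
            T[..., a, b] = gaussian_filter(T[..., a, b], sigma)
    evals = np.linalg.eigvalsh(T)                  # ascending per voxel
    l1, l2, l3 = evals[..., 0], evals[..., 1], evals[..., 2]
    # Edge-like: the two largest eigenvalues dominate the smallest.
    return (l2 > edge_ratio * l3) & (l1 < edge_ratio * l3) & (l3 > 1e-6)
```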


Building Facade Modeling

A top-down method for estimating planar models of building facades from single-view stereo imagery. The dominant planar facades in the scene are modeled by iteratively fitting a generative model to the stereo data in disparity space with RANSAC, then using a Markov Random Field to label each pixel with one candidate plane (or as background).
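
A sketch of the RANSAC stage, under the assumption that a planar facade appears in disparity space as d = a·u + b·v + c; the pixel inputs and inlier threshold are illustrative, and the MRF labeling step is not shown.

```python
# Sketch: RANSAC fit of one dominant plane in disparity space, where a
# planar facade satisfies d = a*u + b*v + c. Inputs and tolerance are
# illustrative; the per-pixel MRF labeling step is omitted here.
import numpy as np

def ransac_disparity_plane(u, v, d, iters=500, tol=1.0, seed=None):
    """Return (a, b, c) maximizing the count of pixels with |d - (a*u + b*v + c)| < tol."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(d), size=3, replace=False)
        A = np.stack([u[idx], v[idx], np.ones(3)], axis=1)
        try:
            abc = np.linalg.solve(A, d[idx])   # exact plane through 3 samples
        except np.linalg.LinAlgError:
            continue                            # degenerate (collinear) sample
        resid = np.abs(d - (abc[0] * u + abc[1] * v + abc[2]))
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = abc, inliers
    return best, best_inliers
```

In this simplified form, the dominant facades would be extracted iteratively: fit a plane, remove its inliers, and repeat; the resulting candidate planes then seed the per-pixel labeling.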