Autonomous Exploration, Reconstruction, and Surveillance Aided by Deep Learning

We develop a new, state-of-the-art deep learning strategy for prescribing vantage points for optimal sensor placement. The approach uses a robust volumetric visibility computation to model arbitrary geometries efficiently. Below we present several simulations in urban environments.
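The core visibility computation can be illustrated with a minimal 2D sketch: rays cast from a vantage point over an occupancy grid mark every cell they pass through until they hit an occupied cell. This is an illustrative simplification of a volumetric visibility computation, not the paper's implementation; all names are hypothetical.

```python
import numpy as np

def visibility_mask(occ, vantage, n_rays=720, max_range=None):
    """Boolean visibility mask on a 2D occupancy grid, computed by casting
    rays from a vantage point (illustrative sketch, not the paper's code)."""
    h, w = occ.shape
    if max_range is None:
        max_range = h + w
    vis = np.zeros_like(occ, dtype=bool)
    y0, x0 = vantage
    vis[y0, x0] = True
    for k in range(n_rays):
        theta = 2.0 * np.pi * k / n_rays
        dy, dx = np.sin(theta), np.cos(theta)
        for r in range(1, max_range):
            y = int(round(y0 + r * dy))
            x = int(round(x0 + r * dx))
            if not (0 <= y < h and 0 <= x < w):
                break
            vis[y, x] = True          # cell is seen by this ray
            if occ[y, x]:             # ray stops at the first occupied cell
                break
    return vis
```

A cell collinear with the vantage point but behind an obstacle is never marked, which is exactly the occlusion behavior the line-of-sight sensor model encodes.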

2D Urban Simulations

A demonstration of two different policies for generating vantage points using deep learning, set over an aerial view of a 150m x 150m area in Austin. The agent starts in an initially unknown environment. At each prescribed vantage point, it takes an omnidirectional sensor measurement encoding line-of-sight information. The red dot indicates the current position. Blue disks are previous vantage points. White regions are visible from the current position. Light gray regions were visible from previous vantage points. Dark gray regions are currently occluded. Black lines indicate the boundaries of the reconstructed buildings.
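The exploration loop described above can be sketched with a greedy "next best view" baseline: from the region seen so far, move to the free cell whose sensor measurement would reveal the most new area. The paper's contribution is to replace this expensive greedy search with a learned policy; the sketch below, with hypothetical names and a simple sampled line-of-sight test, only illustrates the loop itself.

```python
import numpy as np

def line_of_sight(occ, a, b):
    """True if the sampled segment from cell a to cell b is unobstructed."""
    (y0, x0), (y1, x1) = a, b
    n = max(abs(y1 - y0), abs(x1 - x0), 1)
    for t in range(1, n):
        y = round(y0 + (y1 - y0) * t / n)
        x = round(x0 + (x1 - x0) * t / n)
        if occ[y, x]:
            return False
    return True

def visible_from(occ, p):
    """Omnidirectional measurement: every cell with line of sight to p."""
    h, w = occ.shape
    return np.array([[line_of_sight(occ, p, (y, x))
                      for x in range(w)] for y in range(h)])

def greedy_exploration(occ, start, n_steps=3):
    """Greedy next-best-view baseline: repeatedly move to the known free
    cell whose measurement would reveal the most currently-unseen area."""
    seen = visible_from(occ, start)
    vantages = [start]
    h, w = occ.shape
    for _ in range(n_steps):
        best, best_gain = None, 0
        for y in range(h):
            for x in range(w):
                if occ[y, x] or not seen[y, x]:
                    continue          # only move within known free space
                gain = int((visible_from(occ, (y, x)) & ~seen).sum())
                if gain > best_gain:
                    best, best_gain = (y, x), gain
        if best is None:
            break
        seen |= visible_from(occ, best)
        vantages.append(best)
    return vantages, seen
```

Each greedy step evaluates a full visibility computation per candidate cell, which is what makes a learned policy that predicts good vantage points directly so much cheaper at run time.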

3D Urban Simulations

A 3D simulation of a 250m x 250m environment based on Castle Square Parks in Boston.
Video demonstrating the exploration of an initially unknown environment using sparse sensor measurements. The green spheres indicate the vantage points. The gray surface is the reconstruction of the environment based on line-of-sight measurements taken from the sequence of vantage points. New vantage points are computed in near real time using our new deep learning strategy. Best viewed in full screen.
Surveillance of an urban environment using two sensors, represented by green spheres. Yellow regions are visible from exactly one sensor; red regions are visible from both. For clarity, the visualization only includes visibility of regions near ground level. Two sample paths across the courtyard are shown: the cyan path uses the shadows of the structures to minimize detection, while the magenta path naively crosses through.
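The shadow-seeking behavior of the cyan path can be reproduced with a standard shortest-path search in which entering a cell visible to a sensor incurs a large penalty. A minimal Dijkstra sketch, assuming a precomputed per-cell sensor-visibility count (all names here are hypothetical):

```python
import heapq
import numpy as np

def stealth_path(vis_count, start, goal, detect_penalty=50.0):
    """Dijkstra over a grid where stepping into a cell seen by k sensors
    costs 1 + k * detect_penalty, so the path prefers shadowed regions.
    vis_count[y, x] = number of sensors that see the cell (illustrative)."""
    h, w = vis_count.shape
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                  # stale queue entry
        y, x = u
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (y + dy, x + dx)
            if not (0 <= v[0] < h and 0 <= v[1] < w):
                continue
            nd = d + 1.0 + vis_count[v] * detect_penalty
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal         # walk back from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With a large enough penalty, the planner detours through shadowed cells rather than crossing a watched region, mirroring the cyan versus magenta paths in the figure.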

3D MOUT Site

Realistic LiDAR simulations of a virtual Military Operations in Urban Terrain (MOUT) site. Vantage points are generated using a previous approach.
A 20m region around a patio area with tree cover from the virtual MOUT site.
The integrated visibility volume of the patio area generated from 16 vantage points, shown as red circles.
A close-up of the patio area with complex geometries such as columns and tree trunks.
A coarse reconstruction of the patio generated from the visibility volume. Despite the low resolution, the topology of the structures is preserved.
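A coarse reconstruction of this kind can be sketched by treating never-visible cells as occupied and extracting those that border a visible cell. This 2D illustration (hypothetical names, not the authors' code) shows the idea, with the integrated visibility taken as the union of the per-vantage masks:

```python
import numpy as np

def reconstruct_boundary(integrated_vis):
    """Approximate the obstacle boundary as never-visible cells that are
    adjacent to some visible cell (a coarse, conservative reconstruction).
    integrated_vis = logical OR of the visibility masks of all vantages."""
    occ_est = ~integrated_vis                      # never seen -> occupied
    pad = np.pad(integrated_vis, 1, constant_values=False)
    # a cell's 4-neighborhood contains at least one visible cell
    neighbor_vis = (pad[:-2, 1:-1] | pad[2:, 1:-1] |
                    pad[1:-1, :-2] | pad[1:-1, 2:])
    return occ_est & neighbor_vis
```

Because only the sensed interface between free and occluded space is recovered, the reconstruction is coarse, but connected structures remain connected, which is the topology-preservation property noted above.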
A larger, 50m region around a church with several buildings.
A reconstruction of the building surface using 1,896,786 points generated from 11 vantage locations.

Publications

  • L. Ly and R. Tsai. Autonomous exploration, reconstruction, and surveillance of 3D environments aided by deep learning. 2019 International Conference on Robotics and Automation (ICRA), IEEE, 2019.
  • M. Hielsberg, R. Tsai, P. Guo, and C. Chen. Visibility-based urban exploration and learning using point clouds. 2013.
  • L. Valente, R. Tsai, and S. Soatto. Information-seeking control under visibility-based uncertainty. Journal of Mathematical Imaging and Vision, 48(2):339–358, 2014.
  • R. Takei, R. Tsai, Z. Zhou, and Y. Landa. An efficient algorithm for a visibility-based surveillance-evasion game. Communications in Mathematical Sciences, 12(7):1303–1327, 2014.
  • Y. Landa and R. Tsai. Visibility of point clouds and exploratory path planning in unknown environments. Communications in Mathematical Sciences, 6(4), 2008.
  • C.-Y. Kao and R. Tsai. Properties of a level set algorithm for the visibility problems. Journal of Scientific Computing, 35(2-3), June 2008.
  • Y. Landa, D. Galkowski, Y. Huang, A. Joshi, C. Lee, K. Leung, G. Malla, J. Treanor, V. Voroninski, A. Bertozzi, and Y.-H. Tsai. Robotic path planning and visibility with limited sensor data. American Control Conference (ACC '07), pages 5425–5430, July 2007.
  • Y. Landa, R. Tsai, and L. Cheng. Visibility of point clouds and mapping of unknown environments. In Springer Notes in Computational Science and Engineering, pages 1014–1025, 2006.
  • L.-T. Cheng and Y.-H. Tsai. Visibility optimization using variational approaches. Communications in Mathematical Sciences, 3(3):425–451, 2005.
  • Y.-H. R. Tsai, L.-T. Cheng, S. Osher, P. Burchard, and G. Sapiro. Visibility and its dynamics in a PDE based implicit framework. Journal of Computational Physics, 199(1):260–290, 2004.
  • H. Jin, A. J. Yezzi, Y.-H. Tsai, L.-T. Cheng, and S. Soatto. Estimation of 3D surface shape and smooth radiance from 2D images: a level set approach. Journal of Scientific Computing, 19(1-3):267–292, 2003.