Space Missions Engineering Laboratory

Autonomous Navigation

Cameras are common sensors in space applications, flown as payloads on board satellites to take pictures of Earth or other planets. In advanced guidance and navigation techniques they are also used as navigation sensors that characterize the environment to be explored; in particular, they have been applied successfully to rover navigation and to spacecraft landing. A natural extension of these applications is the use of cameras, together with advanced Artificial Vision algorithms, to aid the autonomous navigation both of rovers in harsh environments and of spacecraft landing on unknown celestial bodies such as asteroids, comets or moons.

Camera sensors are highly adaptable and efficient, and can provide much more information than radar or laser ranging sensors. However, their use requires the development of Artificial Vision algorithms to analyze the images and extract the desired information about the environment. Activities in this field are related to the development of such Artificial Vision algorithms for the following applications:

  • trajectory reconstruction of both descending vehicles and rovers;
  • identification of safe landing sites on unknown celestial bodies;
  • detection of obstacles in rover navigation;
  • reconstruction of a 3D map of the environment for rover navigation;
  • identification of targets of interest for rover navigation.

Trajectory reconstruction is usually performed with an IMU (inertial measurement unit), whose measurements are affected by drift error because the trajectory is obtained by integrating accelerations. In landing applications and rover navigation the trajectory needs to be reconstructed in a reference system relative to the environment, whereas an IMU provides only absolute measurements. Cameras provide measurements relative to the environment and therefore allow a more accurate reconstruction of the relative trajectory. Our research goal is the development of a Fast Discrete Optical Flow algorithm that estimates the relative velocity. In the case of a rover, the Discrete Optical Flow is coupled with Stereo Vision to also extract the relative position and orientation.
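The laboratory's Fast Discrete Optical Flow algorithm is not described in detail here; the following is only a minimal sketch of the underlying idea, estimating apparent image velocity from two consecutive frames with a standard sparse optical flow tracker (OpenCV's pyramidal Lucas-Kanade). The function name and parameter values are illustrative, not part of the actual system.

```python
# Minimal sketch: apparent image velocity from consecutive frames with
# sparse (discrete) optical flow. Uses OpenCV's Shi-Tomasi corner detector
# and pyramidal Lucas-Kanade tracker; this is NOT the laboratory's Fast
# Discrete Optical Flow algorithm, only an illustration of the idea.
import cv2
import numpy as np

def estimate_image_velocity(prev_gray, curr_gray, dt):
    """Return the median pixel displacement per second between two frames."""
    # Select well-textured points to track in the previous frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=7)
    if pts_prev is None:
        return None
    # Track the points into the current frame (pyramidal Lucas-Kanade).
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    flow = (pts_curr[good] - pts_prev[good]).reshape(-1, 2)
    if len(flow) == 0:
        return None
    # The median is robust to outlier tracks; divide by dt for pixels/second.
    return np.median(flow, axis=0) / dt

# Usage: feed consecutive grayscale descent images and the frame interval,
# e.g. v_px = estimate_image_velocity(frame_k, frame_k1, dt=0.1).
# Converting pixel velocity to metric velocity additionally requires the
# camera intrinsics and the range to the surface (e.g. from an altimeter).
```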

The identification of a safe landing site is essential in autonomous landing applications. The research in this field is concentrated on the development of image analysis algorithms able to identify craters and flat surfaces, which are safer places for landing.
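As a very rough illustration of flat-surface screening (not the laboratory's algorithm), local image roughness can be approximated by the spread of intensity gradients over a sliding window and thresholded to keep smooth candidate areas; window size and threshold below are arbitrary assumptions.

```python
# Minimal sketch: crude flat-surface screening for landing-site selection.
# Local roughness is approximated by the standard deviation of the intensity
# gradient over a sliding window; low-roughness windows are kept as candidate
# sites. Real hazard detection would also consider slope, shadows and crater
# detection; the parameters here are illustrative only.
import cv2
import numpy as np

def flat_region_mask(gray, window=31, roughness_thresh=8.0):
    """Return a boolean mask of image regions with low texture roughness."""
    gray = gray.astype(np.float32)
    # Gradient magnitude highlights rocks, crater rims and steep relief.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    grad = cv2.magnitude(gx, gy)
    # Local mean and standard deviation of the gradient (box filtering).
    mean = cv2.blur(grad, (window, window))
    mean_sq = cv2.blur(grad * grad, (window, window))
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    # Smooth, low-texture areas are candidate landing sites.
    return std < roughness_thresh
```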

In the field of autonomous rover navigation the research is aimed at three complementary objectives: the identification of obstacles in the environment, necessary for short range navigation; the reconstruction of a 3D map of the environment, aimed at long range trajectory planning; and the identification of targets of interest. Artificial Vision algorithms that use stereo cameras have been developed to solve these tasks. In particular, the identification of obstacles is based on the combination of a Fast Disparity Map and segmentation techniques, while the reconstruction of the environment is based on a detailed disparity map computation. The identification of targets of interest is based on the data fusion of information coming from different sensors, such as the stereo cameras and an infrared thermo-camera; this process associates textures and IR emission with the reconstructed environment, allowing targets of interest to be identified.
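The sketch below illustrates the general "fast disparity map plus segmentation" idea for obstacle flagging, using OpenCV's block-matching stereo and connected-component labelling; it is not the laboratory's implementation, and the camera parameters and thresholds are placeholder assumptions.

```python
# Minimal sketch: obstacle flagging from a stereo disparity map, in the
# spirit of the fast-disparity-map-plus-segmentation approach described
# above (not the laboratory's actual implementation). focal_px, baseline_m
# and the thresholds are illustrative placeholders.
import cv2
import numpy as np

def obstacle_mask(left_gray, right_gray, focal_px, baseline_m,
                  max_range_m=3.0, min_blob_px=200):
    """Label image regions whose stereo range is below max_range_m."""
    # Block-matching disparity (fast, hence suited to on-board use).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    # Range from disparity: Z = f * B / d (valid only where disparity > 0).
    valid = disp > 0
    depth = np.full(disp.shape, np.inf, dtype=np.float32)
    depth[valid] = focal_px * baseline_m / disp[valid]
    # Anything closer than max_range_m is a potential obstacle.
    close = (depth < max_range_m).astype(np.uint8)
    # Segment into connected blobs and discard small, noisy detections.
    n_labels, labels = cv2.connectedComponents(close)
    mask = np.zeros_like(close, dtype=bool)
    for lbl in range(1, n_labels):
        blob = labels == lbl
        if blob.sum() >= min_blob_px:
            mask |= blob
    return mask
```

A detailed disparity map computed with the same geometry, but with a denser and more accurate matcher, can be reprojected to a 3D point cloud for long range map building, and fused with thermo-camera imagery to attach texture and IR emission to the reconstructed terrain.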
