Situational awareness system for autonomous sailing (graduation project)
In an era marked by rapid advances in vehicle automation and self-driving cars, the maritime industry, like the automotive
industry, is undergoing a gradual transition towards autonomous surface vehicles. One driver of this transition is the increasing
number of collisions in the maritime environment. These collisions are caused not by a lack of navigational skill but by human
operator error, often resulting from poor visibility and negligence. For this reason, the maritime industry is focused on developing
collision avoidance systems that support human operators. However, several key challenges remain before fully Autonomous
Surface Vehicles (ASVs) become a reality. One of these challenges is computing an optimal collision-free trajectory in a variety
of cluttered environments without any human intervention.
A collision-free trajectory relies mainly on the ASV's ability to monitor its surroundings and sense all relevant environmental
information in real time. This requires a situational awareness (SA) system equipped with sensor technologies capable of distance
measurement and 3D mapping, together with a perception system that, based on the sensor input, provides a reliable representation
of static and dynamic objects in the vehicle's surroundings and determines the location, path, and kinematics (e.g., velocity and
heading) of those objects.
This project presents a methodology for a robust perception system whose primary goal is real-time multi-object detection and
tracking. It employs a LiDAR (Light Detection and Ranging) sensor, which provides precise 3D measurement data with
centimeter-level accuracy over long ranges, even in challenging weather and lighting conditions. The methodology is realized as
three reusable and independent subsystems:
The Preprocessing subsystem retrieves 3D measurement data (a point cloud) from the LiDAR and eliminates false positives
generated by aerated water through water surface estimation. Noise and outliers are then filtered out using Crop Box and
Voxel Grid filters.
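As an illustration of the two filters named above, the following is a minimal NumPy sketch (not the project's actual implementation, which may use a point-cloud library such as PCL): a crop box keeps only points inside an axis-aligned region of interest, and a voxel grid downsamples by replacing every group of points that falls into the same cubic cell with its centroid.

```python
import numpy as np

def crop_box(points, min_bound, max_bound):
    """Keep only points inside an axis-aligned box (crop-box filter)."""
    mask = np.all((points >= min_bound) & (points <= max_bound), axis=1)
    return points[mask]

def voxel_grid(points, voxel_size):
    """Downsample by replacing all points in the same voxel by their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index, then average each group per coordinate.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```

Typical usage would crop first (cheap, removes the hull and far returns) and voxelize second, so the downsampling only spends work on the region of interest.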
The Detection subsystem receives the filtered point cloud from the Preprocessing subsystem and performs segmentation to identify
potential waterborne obstacles using DBSCAN (Density-Based Spatial Clustering of Applications with Noise). It then estimates each
obstacle's pose (e.g., centroid, dimensions, orientation) using a covariance-based method.
measurement obtained by the Detection subsystem is tracked using Global Nearest Neighbor (GNN) data association that associates
the detections in the presence of clutter and missed detection. Followed by an Interactive Multiple Model-Unscented Kalman Filter
(IMM-UKF) that handles the different patterns and the nonlinearities of obstacle motion. The GNN-IMM-UKF concludes the Tracking
subsystem process by providing a list, which includes potential waterborne obstacles with their position, heading, velocity, angular
velocity, and path over time.
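The IMM-UKF itself is too large to sketch here, but the GNN step can be shown compactly. Assuming Euclidean distance as the association cost and a fixed gating threshold (the project may instead use a Mahalanobis distance from the filter's innovation covariance), GNN reduces to a minimum-cost assignment between predicted track positions and detections:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_associate(track_preds, detections, gate=2.0):
    """Global Nearest Neighbor: jointly assign detections to predicted
    track positions by minimising the total distance, then reject any
    pair whose distance exceeds the gating threshold.
    Returns (pairs, unmatched_detection_indices)."""
    if len(track_preds) == 0 or len(detections) == 0:
        return [], list(range(len(detections)))
    # Cost matrix: rows = tracks, columns = detections.
    cost = np.linalg.norm(track_preds[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    pairs = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    assigned = {c for _, c in pairs}
    unmatched = [c for c in range(len(detections)) if c not in assigned]
    return pairs, unmatched
```

Matched detections update their tracks' filters, while unmatched detections seed new tracks; tracks that go unmatched for several frames are the "missed detections" the gating is there to tolerate.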
The methodology is realized as efficiently as possible, considering the computing capabilities of the vehicle's embedded board
(e.g., a Raspberry Pi 4), real-time criteria, and other constraints imposed by the maritime environment. The evaluation shows
satisfactory results, reflecting a successful realization of the methodology, which is considered a reasonable basis for an object
detection and tracking system.