Semester

Spring

Date of Graduation

2022

Document Type

Thesis

Degree Type

MS

College

Statler College of Engineering and Mineral Resources

Department

Mechanical and Aerospace Engineering

Committee Chair

Jason Gross

Committee Co-Chair

Guilherme Pereira

Committee Member

Yu Gu

Abstract

LiDAR-based object detection methods have drawn significant attention from academia and industry due to the increasing use of the LiDAR sensor for autonomous driving. Insensitivity to lighting conditions and the ability to capture rich spatial information about the surrounding environment, coupled with a steady reduction in purchase cost, make LiDAR appealing for autonomous driving. Detection methods can process and evaluate the sensor data to identify objects and calculate their positions relative to the sensor. While these methods are currently applied to autonomous driving, they can also be used in robotics for localization and navigation. For collaborative operations of robotic agents in regions with unreliable Global Navigation Satellite System (GNSS) coverage, such as caves, tunnels, and mines, robots can use LiDAR data to map these areas and localize themselves relative to one another.
An ongoing research project conducted by the WVU Navigation Laboratory is a collaborative UAV-UGV operation that exploits the LiDAR sensor for relative position estimation of the UAV from the UGV. The estimate is obtained by segmenting the sensor data with measurements from the UAV's onboard altimeter and ranging radio, and then performing Euclidean clustering on the segmented data. The centroid of the cluster that satisfies a set of predefined heuristics is taken as the position of the UAV. A major challenge of this method is that the algorithm cannot verify that the selected cluster actually corresponds to the UAV. A LiDAR-based 3D object detection method has been selected to address this challenge.
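As a rough illustration of the clustering-and-heuristic pipeline described above, the sketch below (in Python rather than the thesis's MATLAB, with a hypothetical "largest cluster" stand-in for the actual heuristics) groups 3D points by Euclidean proximity and returns the centroid of the selected cluster:

```python
import numpy as np

def euclidean_cluster(points, tol=0.5):
    """Group 3-D points into clusters: two points share a cluster
    when they lie within `tol` meters of each other (transitively)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)   # -1 marks an unvisited point
    cluster_id = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = cluster_id
        while stack:                  # flood-fill the current cluster
            j = stack.pop()
            dists = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((dists < tol) & (labels == -1))[0]:
                labels[k] = cluster_id
                stack.append(k)
        cluster_id += 1
    return labels

# Toy scene: a tight airborne cluster (the UAV) plus one stray ground return.
points = np.array([[0.0, 0.0, 5.0], [0.1, 0.0, 5.0], [0.0, 0.1, 5.1],
                   [10.0, 10.0, 0.0]])
labels = euclidean_cluster(points, tol=0.5)
sizes = np.bincount(labels)
best = np.argmax(sizes)                      # stand-in heuristic: largest cluster
centroid = points[labels == best].mean(axis=0)  # candidate UAV position
```

The failure mode noted in the text is visible here: if another object of similar size were in view, the heuristic could just as easily select that cluster, since nothing in the pipeline checks that the points are actually a UAV.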
This thesis implements a 3D detection algorithm, the PointPillars network, in MATLAB to estimate the position of a UAV using only point-cloud data obtained from a LiDAR sensor. By combining a column (pillar) voxel representation with a 2D CNN to generate discriminative point-cloud features, a set of points representing the UAV can be obtained. Finally, point clouds from recorded data are used to evaluate the accuracy of the network and to compute the relative position of the UAV with respect to the UGV. The estimated position is compared against the solution obtained by the current localization method and a reference truth solution.
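The key idea behind the pillar representation is that points are binned into vertical columns on an x-y grid, so the scene becomes a 2D pseudo-image that an ordinary 2D CNN can process. A minimal sketch of that binning step (in Python, with an assumed grid resolution and per-pillar point cap, not the thesis's MATLAB implementation) is:

```python
import numpy as np

def pillarize(points, grid=0.16, max_pts=32):
    """Assign each 3-D point to an x-y column ("pillar").
    z is kept as a per-point feature inside the pillar, so a 2-D CNN
    can later operate on the resulting pseudo-image of pillar features."""
    ij = np.floor(points[:, :2] / grid).astype(int)  # pillar index per point
    pillars = {}
    for idx, key in enumerate(map(tuple, ij)):
        bucket = pillars.setdefault(key, [])
        if len(bucket) < max_pts:                    # cap points per pillar
            bucket.append(points[idx])
    return {k: np.stack(v) for k, v in pillars.items()}

# Two nearby points fall into the same pillar; a distant point gets its own.
pts = np.array([[0.00, 0.00, 1.0],
                [0.05, 0.05, 2.0],
                [1.00, 1.00, 3.0]])
pillars = pillarize(pts)
```

In the full network, each pillar's points are further encoded (e.g. with offsets from the pillar center) and passed through a learned feature extractor before the 2D CNN backbone; this sketch shows only the grouping that makes that possible.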

Embargo Reason

Publication Pending
