Date of Graduation

2020

Document Type

Thesis

Degree Type

MS

College

Statler College of Engineering and Mineral Resources

Department

Mechanical and Aerospace Engineering

Committee Chair

Yu Gu

Committee Member

Jason Gross

Committee Member

Powsiri Klinkhachorn

Abstract

Obstacle avoidance is a central problem in autonomous robot navigation. This research explores how a robot can navigate and avoid obstacles by combining prior knowledge of the environment with new information that arrives during execution. The algorithm presented in this work is divided into two procedures: an offline process that uses prior knowledge to navigate toward the goal, and an online execution strategy that leverages the offline results to drive safely toward the target when new information (e.g., obstacles) is encountered. To take advantage of prior knowledge, the navigation problem is formulated as a Markov Decision Process (MDP) in which the environment is represented as an occupancy grid. Baseline dynamic programming techniques solve this MDP, producing general behaviors that drive the robot (or agent) toward the goal and a value function that encodes the value of being in particular states. During online execution, the agent combines these offline results with local sensing of its surroundings (e.g., data from a LIDAR sensor). This locally acquired information, which may include data not seen before, is represented as a small occupancy grid, and the offline value function is used to define local goals, allowing the agent to make short-term plans. When the agent encounters an obstacle locally, the problem becomes a Partially Observable Markov Decision Process (POMDP), since it is uncertain where the obstacle will be at the next time step. This is solved with an approximate planner (QMDP) that models the uncertainty in obstacle motion and considers all possible obstacle state combinations at the next time step to determine the best action. Because the local occupancy grid is small and the behaviors produced offline help speed up convergence, the approximate planner can solve the POMDP quickly, opening the possibility of executing this procedure in real time on a physical robot. Two simulated environments were created, varying in complexity and in the number of dynamic obstacles. Under the assumption that a dynamic obstacle can move only a short distance per time step, simulation results in cluttered environments with narrow operable spaces and many dynamic obstacles show that the proposed algorithm achieves approximately an 85% success rate and produces safer trajectories than the baseline approach, which succeeded in roughly 37% of cases.
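To make the offline step concrete, the following is a minimal value iteration sketch over a toy occupancy grid, in the spirit of the dynamic programming solution the abstract describes. It is not the thesis implementation: the grid layout, goal location, discount factor, and step cost below are all illustrative assumptions.

```python
import numpy as np

# Illustrative occupancy grid (1 = occupied, 0 = free) and goal cell;
# none of these values come from the thesis itself.
grid = np.array([[0, 0, 0, 0, 0],
                 [0, 1, 1, 0, 0],
                 [0, 0, 1, 0, 0],
                 [0, 0, 0, 0, 0]])
goal = (3, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
gamma, step_cost = 0.95, -1.0                 # assumed discount and move penalty

V = np.zeros(grid.shape)
V[grid == 1] = -100.0                         # strongly discourage occupied cells

for _ in range(500):                          # value iteration until convergence
    V_new = V.copy()
    for r in range(grid.shape[0]):
        for c in range(grid.shape[1]):
            if grid[r, c] == 1 or (r, c) == goal:
                continue                      # obstacles and the goal stay fixed
            q = []
            for dr, dc in actions:
                nr, nc = r + dr, c + dc
                # Moves off the grid or into an obstacle leave the agent in place.
                if not (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]) \
                        or grid[nr, nc] == 1:
                    nr, nc = r, c
                q.append(step_cost + gamma * V[nr, nc])
            V_new[r, c] = max(q)              # Bellman backup for this state
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

print(np.round(V, 2))  # value function; acting greedily on it heads to the goal
```

A greedy policy over this value function reproduces the "general behaviors that drive the agent toward the goal" mentioned above: from any free cell, the agent moves to the neighboring cell with the highest value.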
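The online QMDP step can be sketched similarly. Given a belief over possible obstacle configurations and Q-values obtained from the fully observable MDP solved offline, the QMDP heuristic acts greedily on the belief-weighted expected Q-value. The belief and Q-table below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def qmdp_action(belief, q_table):
    """QMDP approximation: weight each action's MDP Q-value by the belief
    over possible states and act greedily on the expectation.

    belief:  shape (n_states,), probabilities over states, summing to 1
    q_table: shape (n_states, n_actions), Q-values from the offline MDP
    """
    expected_q = belief @ q_table        # E_b[Q(s, a)] for every action a
    return int(np.argmax(expected_q))    # best action under the QMDP heuristic

# Illustrative use: two possible obstacle configurations at the next time
# step, believed equally likely, and three candidate actions.
belief = np.array([0.5, 0.5])
q_table = np.array([[-5.0, -1.0, -2.0],
                    [-1.0, -9.0, -2.0]])
print(qmdp_action(belief, q_table))      # -> 2, the action that hedges safely
```

Because the local occupancy grid is small, the belief and Q-table stay small as well, which is what makes this per-step computation fast enough to be a candidate for real-time execution.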
