Author ORCID Identifier

https://orcid.org/0000-0002-4184-1178

Semester

Summer

Date of Graduation

2024

Document Type

Dissertation

Degree Type

PhD

College

Statler College of Engineering and Mineral Resources

Department

Mechanical and Aerospace Engineering

Committee Chair

Yu Gu

Committee Co-Chair

Jason Gross

Committee Member

Natalia Schmid

Committee Member

Nicholas Szczecinski

Committee Member

Xi Yu

Abstract

As robots adopt more real-world responsibilities, they will be expected to solve more complicated problems. In some cases, limited prior knowledge will result in unmodelled environmental conditions; in others, multiple users may have competing perspectives on how to frame a decision problem. Many existing frameworks, namely Markov decision processes (MDPs), presuppose that users have identified a specific problem and possess models sufficient to solve or learn it. If we wish to extend MDPs to novel problems, or to problems heavily dependent on user feedback, autonomous decision makers must be able to identify limitations in how a given problem is framed and use those limitations to produce better representations.
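For concreteness, a standard MDP presupposes a fully specified tuple (S, A, T, R, gamma). The following is a minimal illustrative sketch in Python; the class and field names are hypothetical and are not taken from the dissertation.

from dataclasses import dataclass
from typing import Callable, Sequence

# Minimal sketch of the MDP tuple (S, A, T, R, gamma) that existing
# frameworks presuppose is fully specified before solving or learning.
@dataclass
class MDP:
    states: Sequence[int]                         # S: state space
    actions: Sequence[int]                        # A: action space
    transition: Callable[[int, int, int], float]  # T(s, a, s'): transition probability
    reward: Callable[[int, int], float]           # R(s, a): expected reward
    gamma: float                                  # discount factor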

Central to the framing of a decision problem, and of knowledge more broadly, is uncertainty. Unfortunately, prominent concepts of uncertainty preclude decision makers from considering alternative problem formulations. This is due to their conceptual emphasis on the "aleatoric/epistemic divide," loosely organized around whether or not randomness is inherent in the environment. Despite its widespread use in fields from robotics to healthcare to public policy and economics, this distinction is a rather fluid boundary and has led to conflicting terminology. Instead, this work proposes framing uncertainties with respect to a decision maker's subjective understanding, better characterizing their knowledge of a problem. This, in turn, motivates the need for decision makers to account for equally valid alternatives (ambiguity) and to identify the presence of unmodelled behaviors (ignorance). This work focuses on the former, though both have gone understudied.

These conceptual breakthroughs are used to develop new decision-making algorithms. Early work integrates ambiguous representations of uncertainty into the learning process, letting the agent solve for a set of policies at once. Based on user preference, the agent then selects how many unmodelled risks it is willing to take on. Results from a sailing environment show that an agent can mitigate unmodelled risks while still reaching its goal effectively. Later work introduces a formulation for multiple-model MDPs (MM-MDPs) to represent ill-posed decision problems. The MM-MDP formulation allows models to vary in state space, action space, transition model, and reward, so users with different objectives and knowledge about a system can frame problems at different scopes within a common framework. Using a foraging case study, an algorithm is introduced to balance the use of the supplied models; the case study considers scenarios that isolate specific categories of MDPs.
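To illustrate the idea, one hypothetical way to represent an MM-MDP in Python is as a weighted collection of candidate MDP models (reusing the MDP sketch above). The reweighting rule shown is a plausible stand-in for balancing the supplied models, not the dissertation's algorithm.

from dataclasses import dataclass
from typing import Sequence

# Hypothetical sketch: an MM-MDP bundles alternative framings of the same
# decision problem, which may disagree in state space, action space,
# transition model, and reward.
@dataclass
class MultipleModelMDP:
    models: Sequence[MDP]   # candidate framings supplied by different users
    weights: list[float]    # decision maker's current balance across models

    def reweight(self, likelihoods: Sequence[float]) -> None:
        # Shift weight toward models that better explain recent observations;
        # one plausible way to balance use of the supplied models.
        raw = [w * l for w, l in zip(self.weights, likelihoods)]
        total = sum(raw) or 1.0
        self.weights = [r / total for r in raw]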
