Type

Text | Dissertation

Advisor

Zhu, Wei | Xing, Haipeng | Hu, Jiaqiao | Zhang, Minghua.

Date

2012-12-01

Keywords

Continuous State Space, Partially Observable Markov Decision Processes | Mathematics

Department

Department of Applied Mathematics and Statistics

Language

en_US

Source

This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for the completion of the degree.

Identifier

http://hdl.handle.net/11401/71296

Publisher

The Graduate School, Stony Brook University: Stony Brook, NY.

Format

application/pdf

Abstract

This dissertation focuses on training autonomous agents to plan and act under uncertainty, specifically for cases where the underlying state spaces are continuous. Partially Observable Markov Decision Processes (POMDPs) are a class of models aimed at training agents to seek high rewards or low costs while navigating a state space without knowing their true location. Information regarding an agent's location is gathered in the form of possibly nonlinear, noisy measurements that are functions of the true location. An exactly solved POMDP allows an agent to optimally balance seeking rewards and seeking information regarding its position in state space. Solving POMDPs exactly over continuous state domains is computationally intractable, however, motivating the need for efficient approximate solutions. The algorithm considered in this thesis is the Parametric POMDP (PPOMDP) method, which represents an agent's knowledge as a parameterised probability distribution and is able to infer the impact of future actions and observations. The contribution of this thesis is an enhanced PPOMDP algorithm with significantly improved training and plan-execution times. Several aspects of the original algorithm are generalized, and the impact on training time, execution time, and performance is measured on a variety of classic robot-navigation models from the literature. In addition, a mathematically principled threefold adaptive sampling scheme is implemented, under which the algorithm automatically varies its sampling according to the complexity of the posterior distributions. Finally, a forward search algorithm is proposed to improve execution performance for sparse belief sets by searching several plies deeper than previous implementations allowed. | 132 pages
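
The belief-tracking idea summarized above can be illustrated with a minimal sketch: a continuous-state agent whose knowledge of its own position is a parameterised probability distribution, updated after each action and noisy measurement. The sketch below assumes a 1-D linear-Gaussian model with invented noise scales and helper names (predict, update); the dissertation's PPOMDP parameterisation is more general, so this is an illustration of the concept, not its implementation.

    # Illustrative only: a 1-D Gaussian belief over a continuous state,
    # propagated through an action (predict) and conditioned on a noisy
    # observation (update). All model parameters here are assumptions.
    import numpy as np

    def predict(mean, var, action, motion_noise_var):
        # Motion model: x' = x + action + noise, so the belief mean shifts
        # by the action and its variance grows by the motion noise.
        return mean + action, var + motion_noise_var

    def update(mean, var, observation, obs_noise_var):
        # Condition the Gaussian belief on a noisy measurement of the
        # true state (scalar Kalman update).
        k = var / (var + obs_noise_var)
        return mean + k * (observation - mean), (1.0 - k) * var

    rng = np.random.default_rng(0)
    true_x, mean, var = 0.0, 0.0, 1.0
    for step in range(5):
        action = 1.0                             # move one unit right
        true_x += action + rng.normal(0.0, 0.1)  # hidden state evolves
        obs = true_x + rng.normal(0.0, 0.5)      # noisy position reading
        mean, var = predict(mean, var, action, 0.1 ** 2)
        mean, var = update(mean, var, obs, 0.5 ** 2)
        print(f"step {step}: belief N({mean:.2f}, {var:.3f}), true x = {true_x:.2f}")

A planner in this setting scores candidate actions by simulating such belief updates forward several plies, which is the role the forward search algorithm described in the abstract plays for sparse belief sets.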
