Type

Text | Dissertation

Advisor

Arkin, Esther | Hu, Jiaqiao | Deng, Yuefan | Ortiz, Luis.

Date

2014-12-01

Keywords

Applied mathematics | Electronic Discovery, Markov Decision Process (MDP), Multi-Armed Bandit (MAB), Optimization Under Uncertainties, Sampling, Stochastic Scheduling

Department

Department of Applied Mathematics and Statistics.

Language

en_US

Source

This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of the degree.

Identifier

http://hdl.handle.net/11401/76062

Publisher

The Graduate School, Stony Brook University: Stony Brook, NY.

Format

application/pdf

Abstract

The focus of this work is on practical applications of stochastic multi-armed bandits (MABs) in two distinctive settings. First, we develop and present REGA, a novel adaptive sampling-based algorithm for the control of finite-horizon Markov decision processes (MDPs) with very large state spaces and small action spaces. We apply a variant of the epsilon-greedy multi-armed bandit algorithm to each stage of the MDP in a recursive manner, thus computing an estimate of the "reward-to-go" value at each stage of the MDP. We provide a finite-time analysis of REGA. In particular, we bound the probability that the approximation error exceeds a given threshold, where the bound is given in terms of the number of samples collected at each stage of the MDP. We empirically compare REGA against other sampling-based algorithms and find that our algorithm is competitive. We discuss measures to mitigate the curse of dimensionality arising from the backward-induction nature of REGA, which become necessary when the MDP horizon is large. Second, we introduce e-Discovery, a topic of extreme significance to the legal industry, which pertains to sifting through large volumes of data in order to identify the "needle in the haystack" documents relevant to a lawsuit or investigation. Surprisingly, the topic has not been explicitly investigated in academia. Looking at the problem from a scheduling perspective, we highlight the main properties and challenges of this topic and outline a formal model for the problem. We examine an approach based on related work from the field of scheduling theory and provide simulation results that demonstrate the performance of our approach against a very large data set. We also provide an approach based on list scheduling that incorporates a side multi-armed bandit in lieu of standard heuristics. To this end, we propose the first MAB algorithm that accounts for both sleeping bandits and bandits with history.
The empirical results are encouraging. Surveys of multi-armed bandits as well as scheduling theory are included. Many new and known open problems are proposed and/or documented. | 118 pages
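The epsilon-greedy bandit routine that the abstract describes as the building block of REGA can be illustrated with a minimal sketch. Everything below (function names, the `pull` reward oracle, the default `epsilon`) is an assumption for illustration only, not the dissertation's actual REGA implementation:

```python
import random

def epsilon_greedy(pull, n_arms, n_samples, epsilon=0.1):
    """Estimate the best arm with a basic epsilon-greedy policy.

    pull(arm) returns a stochastic reward for the chosen arm.
    With probability epsilon we explore a random arm; otherwise
    we exploit the arm with the highest empirical mean reward.
    """
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for _ in range(n_samples):
        if random.random() < epsilon or 0 in counts:
            arm = random.randrange(n_arms)  # explore (or force initial pulls)
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        # incremental update of the empirical mean for this arm
        means[arm] += (reward - means[arm]) / counts[arm]
    best = max(range(n_arms), key=lambda a: means[a])
    return best, means
```

In the REGA setting, `pull` would itself recurse: sampling an action at one MDP stage triggers a bandit instance at the next stage, so the returned "reward" is an estimate of the reward-to-go rather than a single observation.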
