Type

Dissertation

Advisor

Ramakrishnan, C.R. | Ramakrishnan, I.V. | Warren, David | Costa, Vitor

Date

2012-12-01

Keywords

Computer science

Department

Department of Computer Science.

Language

en_US

Source

This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of degree.

Identifier

http://hdl.handle.net/11401/77289

Publisher

The Graduate School, Stony Brook University: Stony Brook, NY.

Format

application/pdf

Abstract

Statistical Relational Learning (SRL), an emerging area of Machine Learning, aims to model problems that exhibit complex relational structure as well as uncertainty. It uses a subset of first-order logic to represent relational properties, and graphical models to represent uncertainty. Probabilistic Logic Programming (PLP) is an interesting subfield of SRL. A key characteristic of PLP frameworks is that they are conservative extensions of non-probabilistic logic programs, which have been widely used for knowledge representation. PLP frameworks extend traditional logic programming semantics to a distribution semantics, where the meaning of a probabilistic logic program is given in terms of a distribution over its possible models. However, the inference techniques used in these frameworks rely on enumerating sets of explanations for a query answer. Consequently, these languages permit only very limited use of random variables with continuous distributions. In this thesis, we extend PRISM, a well-known PLP language, with Gaussian random variables and linear equality constraints over reals. We provide a well-defined distribution semantics for the extended language. We present symbolic inference and parameter-learning algorithms for the extended language that represent sets of explanations without enumeration. This permits us to reason over complex probabilistic models such as Kalman filters and a large subclass of Hybrid Bayesian networks that were hitherto beyond the reach of PLP frameworks. The inference algorithm can be extended to handle programs with Gamma-distributed random variables as well. An interesting aspect of our inference and learning algorithms is that they specialize to those of PRISM in the absence of continuous variables. By using PRISM as the basis, our inference and learning algorithms match the complexity of known specialized algorithms when applied to Hidden Markov Models, Finite Mixture Models, and Kalman Filters.

119 pages
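To illustrate why enumeration fails for continuous random variables, consider the Kalman filter mentioned in the abstract: the posterior over a Gaussian state cannot be obtained by enumerating outcomes, but it can be maintained exactly in closed form. The following is a minimal 1-D sketch of that closed-form predict/update cycle; it is a standard textbook Kalman filter, not the thesis's symbolic inference algorithm, and the function name and parameter defaults are illustrative only.

```python
# Illustrative 1-D Kalman filter step (standard closed-form recursion).
# Shows how a Gaussian posterior is carried symbolically as (mean, var)
# rather than by enumerating outcomes, which is impossible for
# continuous distributions.

def kalman_step(mean, var, z, q=1.0, r=1.0, a=1.0, h=1.0):
    """One predict/update cycle.

    mean, var : prior state estimate N(mean, var)
    z         : new observation
    q, r      : process and observation noise variances
    a, h      : state-transition and observation coefficients
    """
    # Predict: x' = a*x + noise with variance q
    pred_mean = a * mean
    pred_var = a * a * var + q
    # Update: fold in observation z = h*x + noise with variance r
    k = pred_var * h / (h * h * pred_var + r)  # Kalman gain
    new_mean = pred_mean + k * (z - h * pred_mean)
    new_var = (1.0 - k * h) * pred_var
    return new_mean, new_var
```

Each step maps one Gaussian to another, so the filter's state is always a finite symbolic description (a mean and a variance) — the same spirit in which the thesis represents sets of explanations without enumerating them.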
