Authors

Yuan Zhao

Type

Text

Type

Dissertation

Advisor

Ahn, Hongshik | Park, Il Memming | Hong, Sangjin | Finch, Stephen

Date

2016-12-01

Keywords

Count, Decision tree, Dynamics, Log-linear model, Variational Bayes | Statistics -- Neurosciences

Department

Department of Applied Mathematics and Statistics

Language

en_US

Source

This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of the degree.

Identifier

http://hdl.handle.net/11401/77432

Publisher

The Graduate School, Stony Brook University: Stony Brook, NY.

Format

application/pdf

Abstract

Events that occur randomly over time or space give rise to count data, which are commonly analyzed with Poisson models. Simple log-linear forms, however, are often insufficient to capture complex relationships between variables, so we study tree-structured log-linear models and latent variable models for count data. First, we extend Poisson regression for independent observations. Decision trees have the advantage of interpretability, but constant fits within strata are too simple. We therefore propose embedding log-linear models in decision trees, using the negative binomial distribution to accommodate overdispersion. Second, we consider latent variable models for point process observations in neuroscience. Neurons signal with sequences of electrical spikes that, disregarding analog differences, can naturally be treated as point processes. Large-scale neural recordings have shown evidence of low-dimensional nonlinear dynamics that describe the neural computations implemented by large neuronal networks. Sufficient redundancy in population activity would give us access to the underlying neural process of interest even when only a small subset of neurons is observed. We therefore propose a probabilistic method that recovers the latent trajectories nonparametrically under a log-linear generative model with minimal assumptions. Third, we aim to model the continuous dynamics to further understand neural computation. Theories of neural computation are characterized by dynamic features such as fixed points and continuous attractors, but reconstructing the corresponding low-dimensional dynamical system from neural time series is usually difficult: typical linear dynamical system and autoregressive models are either too simple to reflect complex features or prone to extrapolating wildly.
We therefore propose a flexible nonlinear time series model that directly learns the velocity field associated with the dynamics in state space and produces reliable future predictions on a variety of dynamical models and on real neural data. | 113 pages
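The log-linear count model at the core of the abstract can be illustrated with a minimal sketch: simulate Poisson counts whose log-mean is linear in a covariate, then recover the coefficients by iteratively reweighted least squares (Newton's method on the Poisson log-likelihood). This is a generic illustration under assumed simulated data, not the dissertation's own code or its tree-structured extension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate counts from a log-linear (Poisson) model: log E[y] = b0 + b1 * x.
# The coefficients and covariate are hypothetical, chosen for illustration.
n = 2000
x = rng.uniform(-1.0, 1.0, n)
true_beta = np.array([0.5, 1.2])
lam = np.exp(true_beta[0] + true_beta[1] * x)
y = rng.poisson(lam)

# Fit by Newton's method (equivalently, iteratively reweighted least squares):
# gradient X'(y - mu), Hessian X' diag(mu) X for the Poisson log-likelihood.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])
    beta = beta + np.linalg.solve(hess, grad)

print(np.round(beta, 2))  # estimates should lie near (0.5, 1.2)
```

With 2000 observations the maximum-likelihood estimate is close to the generating coefficients; the dissertation's contribution is to fit such log-linear models within the strata of a decision tree and to replace the Poisson with a negative binomial when counts are overdispersed.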
