Authors

Luisa Le Donne

Type

Text

Type

Dissertation

Advisor

Fontanini, Alfredo | La Camera, Giancarlo | Kritzer, Mary | Luhmann, Christian

Date

2017-05-01

Keywords

Computational Neuroscience, decision making, reinforcement learning, spiking neural network, stimulus segmentation | Neurosciences

Department

Department of Neuroscience

Language

en_US

Source

This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for the completion of a degree.

Identifier

http://hdl.handle.net/11401/76567

Publisher

The Graduate School, Stony Brook University: Stony Brook, NY.

Format

application/pdf

Abstract

Stimulus identification is the process of picking out a particular stimulus among the many others that may be present in the environment, for the purpose of performing a task. In the most interesting but also most demanding scenario, the problem amounts to extracting action-relevant segments from a noisy input stream, which also involves inferring the onset and the end of the stimulus, neither of which is known a priori. Existing models in computational neuroscience and artificial intelligence have focused on the problem of discovering the correct decision in response to given stimuli, typically for the purpose of obtaining reward. In these models, however, the relevant stimuli (i.e., the stimuli that can trigger rewarded decisions) are known to the agent. For example, in many neural circuit models of decision-making, each stimulus is encoded by the activation of a predefined population of neurons representing that stimulus. An autonomous learning system, by contrast, should be able to identify any action-relevant stimulus without prior knowledge of it. Recently, a theory has been emerging on how to address this problem using populations of spiking neurons. In this thesis, I study and extend a prototypical model based on populations of spiking neurons (the agent) that is able to identify the relevant stimuli and make the correct decisions. The agent is rewarded for making correct decisions at the right time; since the agent does not know a priori which stimuli are relevant, or when they start and end, a decision is never enforced, contrary to most existing models. Instead, a decision is taken only when a readout of the decision neurons (a neural correlate of decision confidence) crosses a threshold. The learning rule implements a form of synaptic plasticity that maximizes the average reward obtained for correct decisions by following the gradient of the average reward.
After presenting the main features of the model, I characterize the dependence of its performance on crucial parameters, including the number of stimuli, the type of stimuli (i.e., the way they are encoded), the number of decisions, and the number of decision neurons. I then show that the model can handle natural stimuli recorded from the cortex of behaving rats. This model represents the first biologically plausible solution to the problem of stimulus segmentation and decision-making, including multiple-choice decision-making.

81 pages
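The two mechanisms described in the abstract, a decision taken only when the readout of the decision neurons crosses a threshold, and a plasticity rule that follows the gradient of the average reward, can be illustrated with a much-simplified rate-based sketch. This is not the thesis's actual spiking-network implementation; the population sizes, threshold, and learning rate below are hypothetical, and the update shown is a generic REINFORCE-style policy-gradient step standing in for the reward-gradient plasticity rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 2 decision populations reading out 50 input units.
n_inputs, n_decisions = 50, 2
W = rng.normal(0.0, 0.1, size=(n_decisions, n_inputs))  # readout weights
theta = 0.5   # decision threshold on the readout (confidence proxy)
eta = 0.05    # learning rate

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(x, correct_choice, W):
    """One trial: decide only if the readout crosses the threshold,
    then update W along a REINFORCE-style reward-gradient estimate."""
    readout = W @ x                       # decision-neuron readout
    if np.abs(readout).max() < theta:     # below threshold: no decision enforced
        return W, None
    pi = softmax(readout)                 # stochastic decision policy
    choice = rng.choice(n_decisions, p=pi)
    reward = 1.0 if choice == correct_choice else 0.0
    # Policy gradient: d log pi(choice) / dW = (onehot(choice) - pi) x^T,
    # scaled by the obtained reward.
    onehot = np.eye(n_decisions)[choice]
    W = W + eta * reward * np.outer(onehot - pi, x)
    return W, reward
```

Two properties of the model survive even in this toy form: when the readout stays below `theta`, no decision is reported (the trial simply passes), and a rewarded update provably increases the policy's probability of repeating the rewarded choice on the same input, which is what "following the gradient of the average reward" means at the single-trial level.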
