Authors

Shung Han Cho

Type

Text | Dissertation

Advisor

Murali Subbarao | Sangjin Hong | Monica Fernandez-Bugallo | Hongshik Ahn

Date

2010-08-01

Keywords

Heterogeneous sensor network, Multiple camera collaboration, Multiple object association, Multiple object identification, Multiple object tracking, Self-localization | Computer Engineering

Department

Department of Computer Engineering

Language

en_US

Source

This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for the completion of the degree.

Identifier

http://hdl.handle.net/11401/70965

Publisher

The Graduate School, Stony Brook University: Stony Brook, NY.

Format

application/pdf

Abstract

Multiple object tracking and association are key capabilities in mobile-sensor-based applications (e.g., large-scale flexible surveillance systems and multi-robot systems). Such systems track and identify multiple objects autonomously and intelligently, without human operators. They also flexibly control deployed sensors to maximize resource utilization as well as system performance. Moreover, the tracking and association methodologies should be robust against non-ideal phenomena such as false or failed data processing. In this thesis, we address various issues in collaborative and heterogeneous signal processing for such applications and present approaches to resolve them.

Multiple object association (finding the correspondence of objects among cameras) is an important capability in multiple-camera environments. We introduce a locally initiated line-based object association method to support flexible camera movements. The method can be extended to multiple cameras through pair-wise collaboration on the object association. While pair-wise collaboration is effective for objects with sufficient separation, the association is not well established for objects without such separation and may generate false associations. We extend the locally initiated homographic-line-based association method to two different multiple-camera collaboration strategies that reduce false associations. Collaboration matrices are defined with the minimum separation required for effective collaboration. The first strategy uses the collaboration matrices to select, out of many cameras, the best pair having the maximum separation, so that the pair can collaborate efficiently on the object association. The association information in the selected cameras is propagated to the unselected cameras through global information constructed from the associated targets.
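The best-pair selection step described above can be sketched as follows. This is a minimal illustration under assumed data structures: the `separation` mapping stands in for the collaboration matrices, and the threshold handling is an assumption, not the dissertation's actual formulation.

```python
def select_best_pair(separation, min_separation):
    """Pick the camera pair whose objects are most separated.

    separation: dict mapping (cam_i, cam_j) -> a separation score,
    a stand-in for the collaboration matrices described in the text.
    Returns the pair with the maximum score at or above the required
    minimum separation, or None if no pair qualifies.
    """
    best_pair, best_sep = None, min_separation
    for pair, sep in separation.items():
        if sep >= best_sep:
            best_pair, best_sep = pair, sep
    return best_pair


# Hypothetical separation scores for three cameras.
seps = {("cam0", "cam1"): 0.4, ("cam0", "cam2"): 0.9, ("cam1", "cam2"): 0.7}
print(select_best_pair(seps, 0.5))  # -> ('cam0', 'cam2')
```

Once the best pair is associated, its result would be propagated to the remaining cameras, per the first strategy.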
While the first strategy requires a long operation time to achieve a high association rate, owing to the limited view of the best pair, it reduces the computational cost by using homographic lines. The second strategy initiates the collaborative association process for all camera pairings regardless of separation. While the repeated association processes improve association performance, the number of homographic-line transformation processes grows exponentially.

Identification of tracked objects is achieved using two different signals: an RFID tag is used for object identification, and a visual sensor is used for estimating object movements. Visual sensors find the correspondence among cameras and localize the objects. The association of tracked positions with identities exploits the dynamics of objects crossing the modeled boundary of the identification sensors. The proposed association method provides recovery from tracking and association failures. We also consider coverage uncertainty induced by identification-signal characteristics or by multiple objects near the boundary of an identification sensor's coverage. Group and incomplete-group associations are introduced to resolve identification problems under coverage uncertainty. Simulation results demonstrate the stability of the proposed method against non-ideal phenomena such as false detection, false tracking, and inaccurate coverage models.

Finally, a novel self-localization method is presented to support mobile sensors. The algorithm estimates the coordinates and orientation of a mobile sensor from references projected onto the visual image. The proposed method accounts for the lens non-linearity of the camera and compensates for the distortion using a calibration table. The algorithm can be utilized in mobile robot navigation as well as in positioning applications where accurate self-localization is necessary.
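The calibration-table distortion compensation mentioned above can be sketched as a grid lookup with bilinear interpolation. The table layout, grid step, and interpolation scheme here are assumptions for illustration; the dissertation's actual calibration-table format may differ.

```python
def undistort(u, v, table, step):
    """Map a distorted pixel (u, v) to corrected coordinates.

    table[(i, j)] holds the corrected (x, y) at grid point
    (i * step, j * step); bilinear interpolation fills in between.
    This is an assumed table structure, not the thesis's exact one.
    """
    i, j = int(u // step), int(v // step)
    fu, fv = (u - i * step) / step, (v - j * step) / step
    corners = [table[(i, j)], table[(i + 1, j)],
               table[(i, j + 1)], table[(i + 1, j + 1)]]
    weights = [(1 - fu) * (1 - fv), fu * (1 - fv),
               (1 - fu) * fv, fu * fv]
    x = sum(w * cx for w, (cx, _) in zip(weights, corners))
    y = sum(w * cy for w, (_, cy) in zip(weights, corners))
    return x, y


step = 10
# Identity calibration table on a 3x3 grid (no distortion), for demonstration.
table = {(i, j): (i * step, j * step) for i in range(3) for j in range(3)}
print(undistort(5.0, 5.0, table, step))  # -> (5.0, 5.0)
```

With a table measured from a real lens, the same lookup would correct the non-linear distortion before the self-localization geometry is applied.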
