Document Type

Dissertation

Date of Award

Fall 1-31-2006

Degree Name

Doctor of Philosophy in Computing Sciences - (Ph.D.)

Department

Computer Science

First Advisor

Frank Y. Shih

Second Advisor

James A. McHugh

Third Advisor

Denis L. Blackmore

Fourth Advisor

Yun Q. Shi

Fifth Advisor

Artur Czumaj

Sixth Advisor

Qun Ma

Abstract

Image segmentation and pattern classification have long been important topics in computer science research. Image segmentation is one of the fundamental and most challenging low-level image processing tasks. Feature extraction, feature reduction, and classifier design based on the selected features are the three essential issues in the pattern classification problem.

In this dissertation, an automatic Seeded Region Growing (SRG) algorithm for color image segmentation is developed, in which the initial seeds are determined automatically. An adaptive morphological edge-linking algorithm is also designed to fill the gaps between edge segments. Broken edges are extended along their slope directions by an adaptive dilation operation with suitably sized elliptical structuring elements, whose size and orientation are adjusted according to local properties.
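The dissertation record contains no code; the following Python sketch only illustrates the adaptive-dilation idea described above. The names `elliptical_se`, `extend_endpoint`, and `gap_estimate`, the fixed minor-axis width, and the SciPy-based dilation are assumptions for illustration, not the dissertation's actual implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def elliptical_se(length, width, angle_rad):
    """Binary elliptical structuring element with the given half-axes,
    rotated so its major axis follows the local edge slope."""
    r = int(np.ceil(length))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    # Rotate pixel coordinates into the ellipse frame.
    xr = x * np.cos(angle_rad) + y * np.sin(angle_rad)
    yr = -x * np.sin(angle_rad) + y * np.cos(angle_rad)
    return (xr / length) ** 2 + (yr / max(width, 1e-6)) ** 2 <= 1.0

def extend_endpoint(edge_map, endpoint, slope_angle, gap_estimate):
    """Dilate around one broken-edge endpoint with a structuring element
    whose length comes from the estimated gap and whose orientation comes
    from the local edge slope (a simplified stand-in for the adaptive rule)."""
    se = elliptical_se(length=max(gap_estimate / 2.0, 1.0),
                       width=1.0, angle_rad=slope_angle)
    mask = np.zeros_like(edge_map, dtype=bool)
    mask[endpoint] = True
    return edge_map | binary_dilation(mask, structure=se)
```

In this sketch, repeating `extend_endpoint` on each endpoint of a broken contour grows the edge along its slope until the gap is bridged.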

For feature reduction, an improved feature reduction method in input and feature spaces using Support Vector Machines (SVMs) is developed. In the input space, a subset of input features is selected by the ranking of their contributions to the decision function. In the feature space, features are ranked according to the weighted support vectors in each dimension.
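As a rough illustration of the input-space ranking only (the feature-space ranking by weighted support vectors is not shown), the scikit-learn sketch below ranks input features by the magnitude of their weights in a linear SVM decision function. The function name `rank_input_features` and the `keep` parameter are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def rank_input_features(X, y, keep):
    """Rank input features by their contribution to a linear SVM
    decision function and retain the top `keep` features."""
    svm = SVC(kernel="linear").fit(X, y)
    scores = np.abs(svm.coef_).sum(axis=0)   # one weight magnitude per feature
    order = np.argsort(scores)[::-1]          # largest contribution first
    selected = order[:keep]
    return selected, X[:, selected]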

For object detection, a fast face detection system using SVMs is designed. Two-eye patterns are first detected using a linear SVM, so that most of the background can be eliminated quickly. Two-layer 2nd-degree polynomial SVMs are then trained for further face verification. The detection process is implemented directly in feature space, which leads to a faster SVM. By training a two-layer SVM, higher classification rates can be achieved.
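A minimal two-stage cascade in this spirit is sketched below, assuming hypothetical training sets of flattened patches (`X_eyes`, `X_face`) with labels 1 for positives; the real system's patch representation, feature-space implementation, and two-layer structure are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

def train_cascade(X_eyes, y_eyes, X_face, y_face):
    """Stage 1: cheap linear SVM on eye-pair patches rejects most background.
    Stage 2: 2nd-degree polynomial SVM verifies the surviving candidates."""
    stage1 = SVC(kernel="linear").fit(X_eyes, y_eyes)
    stage2 = SVC(kernel="poly", degree=2).fit(X_face, y_face)
    return stage1, stage2

def detect(candidate_patches, stage1, stage2):
    """Return indices of candidate patches that pass both stages."""
    keep = np.where(stage1.predict(candidate_patches) == 1)[0]
    if keep.size == 0:
        return keep
    verified = stage2.predict(candidate_patches[keep]) == 1
    return keep[verified]
```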

For active learning, an improved incremental training algorithm for SVMs is developed. Instead of selecting training samples randomly, the k-means clustering algorithm is applied to collect the initial set of training samples. During active querying, a weight is assigned to each sample according to its distance from the current separating hyperplane and a confidence factor. The confidence factor, calculated from the upper bounds of SVM errors, indicates how close the current separating hyperplane is to the optimal solution.
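The sketch below shows the two pieces in simplified form: picking initial samples near k-means centroids and weighting pool samples by closeness to the hyperplane. The confidence factor is passed in as an assumed scalar rather than computed from SVM error bounds, and the names `initial_training_set` and `query_weights` are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def initial_training_set(X_pool, n_seeds):
    """Pick the pool points closest to k-means centroids as the
    initial training samples, instead of sampling at random."""
    km = KMeans(n_clusters=n_seeds, n_init=10).fit(X_pool)
    idx = [int(np.argmin(np.linalg.norm(X_pool - c, axis=1)))
           for c in km.cluster_centers_]
    return np.array(idx)

def query_weights(svm: SVC, X_pool, confidence):
    """Weight unlabeled samples by closeness to the current separating
    hyperplane, scaled by the confidence factor; higher weight means
    the sample is queried first."""
    margins = np.abs(svm.decision_function(X_pool))
    return confidence / (1.0 + margins)
```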
