Author

Xue Yang

Date of Graduation

2014

Document Type

Thesis

Degree Type

MS

College

Statler College of Engineering and Mineral Resources

Department

Lane Department of Computer Science and Electrical Engineering

Committee Chair

Thirimachos Bourlai

Committee Co-Chair

Jeremy Dawson

Committee Member

Yuxin Liu

Abstract

In this work, we study the problem of human identity recognition using human respiratory waveforms extracted from videos, combined with component-based off-angle human facial images. Our proposed system is composed of (i) a physiology-based human clustering module and (ii) an identification module based on facial features (nose, mouth, etc.) extracted from face videos. In our proposed methodology we first passively extract an important vital sign (breath), cluster human subjects into nostril-motion vs. nostril non-motion groups, and then localize a set of facial features before applying feature extraction and matching.

Our novel human identity recognition system is very robust, since it works well when dealing with breath signals and combinations of different facial components acquired under uncontrolled illumination conditions. This is achieved by using our proposed Motion Classification approach and Feature Clustering technique based on the breathing waveforms we produce. The contributions of this work are three-fold. First, we collected a set of different datasets on which we tested our proposed approach. Specifically, we considered six different types of facial components and their combinations to generate face-based video datasets representing two distinct data collection conditions, i.e., videos acquired with the head in a fully frontal position (baseline) and with the head in a looking-up pose. Second, we propose a new way of passively measuring human breath from face videos and show that its output is comparable to baseline breathing waveforms produced by an ADInstruments device. Third, we demonstrate good human recognition performance when using the proposed pre-processing procedure of Motion Classification and Feature Clustering, working on partial features of human faces.

Our method achieves increased identification rates across all datasets used, and it obtains a significantly high identification rate (ranging from 96% to 100% when using a single facial feature or a combination of facial features), yielding an average increase of 7% over the baseline scenario. To the best of our knowledge, this is the first time that a biometric system fuses an important human vital sign (breath) with facial features in such an efficient manner.
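
The abstract describes passively deriving a breathing waveform from face video and splitting subjects into nostril-motion and nostril non-motion groups. The sketch below is a minimal illustration of that general idea, not the thesis implementation: it assumes the waveform can be approximated by the mean intensity of a nostril region of interest over time, band-pass filtered to typical respiration rates; the ROI, frame rate, and energy threshold are hypothetical placeholders.

```python
# Hypothetical sketch (not the thesis code): approximate a breathing waveform
# from a nostril ROI in a face video, then assign the subject to a
# nostril-motion vs. nostril non-motion group by the waveform's energy.
import numpy as np
import cv2
from scipy.signal import butter, filtfilt

def breathing_waveform(video_path, roi, fps=30.0):
    """Mean gray level of the nostril ROI per frame, band-pass filtered
    to a typical respiration band (~0.1-0.5 Hz)."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        samples.append(gray[y:y + h, x:x + w].mean())
    cap.release()
    signal = np.asarray(samples, dtype=float)
    signal -= signal.mean()
    # 2nd-order Butterworth band-pass around normal breathing rates.
    b, a = butter(2, [0.1 / (fps / 2), 0.5 / (fps / 2)], btype="band")
    return filtfilt(b, a, signal)

def has_nostril_motion(waveform, energy_threshold=1.0):
    """Crude motion/non-motion split: subjects whose filtered waveform carries
    enough energy in the breathing band fall into the 'motion' cluster.
    The threshold here is an illustrative assumption."""
    return float(np.var(waveform)) > energy_threshold
```

In the thesis, this clustering step precedes facial-component localization, feature extraction, and matching; the sketch covers only the waveform and grouping stage.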
