
Statler College of Engineering and Mineral Resources


Lane Department of Computer Science and Electrical Engineering

Committee Chair

Natalia Schmid

Committee Co-Chair

Brian Woerner

Committee Member

Donald Adjeroh


With the rapid globalization of technology, more reliable and secure methods of online authentication are required. Authentication can be achieved using each individual's distinctive biometric identifiers, such as the face, iris, fingerprint, or palmprint; however, the uniqueness of each identifier is bounded, and consequently there is a limit to the population a biometric recognition system can sustain before false matches occur. Knowing the maximum population that a biometric modality can uniquely represent is therefore more essential than ever. To address this general problem, we measure the uniqueness of the iris biometric.

The measure of iris uniqueness was first introduced by John Daugman in 2003, and its analysis has remained an open research problem since. Daugman defines uniqueness as the ability to enroll more and more classes into a recognition system while the probability of collision among the classes remains fixed and near zero. Due to errors introduced while collecting iris datasets (such as occlusions, illumination conditions, camera noise, motion, and out-of-focus blur) and quality degradation from any signal processing of the iris data, even the highest-quality datasets will not achieve a probability of collision of exactly zero. We therefore appeal to techniques from information theory to find the maximum population a system can support while also measuring the quality of the iris data in the datasets themselves.
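Daugman's uniqueness analysis rests on the impostor Hamming-distance distribution, which behaves like a binomial with N = p(1-p)/sigma^2 effective independent bits ("degrees of freedom"), where p and sigma are the mean and standard deviation of that distribution. A minimal sketch of this estimate, using synthetic impostor scores for illustration (the function name and data are assumptions, not part of the thesis):

```python
import numpy as np

def degrees_of_freedom(impostor_hds):
    """Daugman's degrees-of-freedom estimate from impostor
    Hamming-distance scores: N = p(1 - p) / sigma^2, where p is the
    mean and sigma the standard deviation of the distribution."""
    p = np.mean(impostor_hds)
    sigma = np.std(impostor_hds)
    return p * (1.0 - p) / sigma**2

# Illustrative impostor scores: clustered near HD = 0.5, as expected
# when unrelated IrisCodes disagree on about half their bits.
rng = np.random.default_rng(0)
hds = rng.normal(loc=0.5, scale=0.03, size=10_000).clip(0.0, 1.0)
print(round(degrees_of_freedom(hds)))  # roughly 0.25 / 0.03**2, i.e. ~278
```

A narrower impostor distribution (smaller sigma) implies more effective degrees of freedom, and hence more distinguishable classes before collisions occur.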

This work develops two new techniques for finding the maximum population of an iris database: one establishes the limitations of Daugman's widely accepted IrisCode, and the other proposes a new methodology leveraging the raw iris data. The IrisCode is a binary template representing each independent class in the database. Assuming a one-to-one encoding that maps each class's IrisCode to a new binary codeword, with length determined by the degrees of freedom inferred from the distribution of Hamming distances between pairs of independent-class IrisCodes, we appeal to rate-distortion theory (the limits of error-correcting codes) to establish bounds on the maximum population the IrisCode algorithm can sustain, using the minimum Hamming distance (HD) between codewords as a quality metric. Our second approach fits an autoregressive (AR) model to estimate each iris class's distinctive power spectral density and assumes a similar one-to-one mapping of each class to a unique Gaussian codeword. A Gaussian sphere-packing bound then yields the maximum population of the dataset and a measure of iris quality that depends on the noise present in the data. We also develop a Daugman-like bound that uses the relative entropy between class models as a distance metric, analogous to Hamming distance, to find the maximum population for a fixed recognition error. With these two approaches, we hope to help researchers understand the limitations of their recognition systems as a function of the quality of their iris databases.
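As a simplified illustration of the sphere-packing style of argument used above (not the thesis's exact rate-distortion derivation), the classical Hamming bound caps how many binary codewords of length n can maintain a minimum pairwise distance d_min; the codeword length and distance values below are illustrative assumptions:

```python
from math import comb

def hamming_bound(n, d_min):
    """Sphere-packing (Hamming) upper bound on the number of binary
    codewords of length n with minimum pairwise distance d_min:
    M <= 2^n / sum_{i=0}^{t} C(n, i), where t = floor((d_min - 1) / 2)
    is the radius of the disjoint Hamming balls around each codeword."""
    t = (d_min - 1) // 2
    return 2**n // sum(comb(n, i) for i in range(t + 1))

# Sanity check: the perfect (7, 4) Hamming code meets the bound exactly.
print(hamming_bound(7, 3))  # 16 codewords

# Illustrative trade-off: for n = 249 effective bits, demanding a larger
# minimum distance between enrolled codewords shrinks the population.
for d_min in (25, 51, 75):
    print(d_min, hamming_bound(249, d_min))
```

The same trade-off drives the bounds in this work: tightening the required separation between classes (equivalently, lowering the tolerated recognition error) reduces the maximum population the system can support.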