Author ORCID Identifier

https://orcid.org/0009-0008-6072-6520

Semester

Fall

Date of Graduation

2025

Document Type

Thesis

Degree Type

MS

College

Statler College of Engineering and Mineral Resources

Department

Lane Department of Computer Science and Electrical Engineering

Committee Chair

Jeremy Dawson

Committee Member

Prashnna Gyawali

Committee Member

Sarika Khushalani-Solanki

Abstract

Image classification and segmentation are crucial tasks in many fields. Neural network models have evolved from simple binary classifiers to multiclass solutions capable of handling broader and more complex scenarios. The iris recognition system developed by Daugman introduced an algorithm that detects iris boundaries and encodes iris regions into phase sequences to achieve accurate identification. While this numerical representation of iris patterns enables fast and reliable matching, visual representations are often more intuitive for human interpretation, as they directly illustrate distinctive iris structures. Vision Transformer (ViT)-based models have shown remarkable performance in learning and interpreting image information. Meta’s Segment Anything Model (SAM), a large foundation model, has demonstrated strong zero-shot segmentation performance owing to its robust architecture and extensive pretraining on large-scale datasets. Similarly, work in biomedical image segmentation has shown that deep learning models can capture fine-grained structural details, a capability that can be adapted for iris feature segmentation. This thesis fine-tunes SAM to segment subfeatures of the human iris in a semi-automated workflow, supporting research and applications that benefit from visual representations of iris patterns. The dataset used in this work was provided by the WVU Biometrics Lab. An ablation study evaluated the model’s performance under different weight configurations and input types, contributing to improved accuracy in multiclass iris feature segmentation. The proposed model achieved an average Intersection over Union (IoU) of 0.713 on the validation set, demonstrating effective segmentation of fine iris substructures guided by human inputs. These findings highlight the potential of adapting foundation models such as SAM for accurate, interpretable, and efficient segmentation of complex biometric features.
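For readers unfamiliar with the reported metric: Intersection over Union is the ratio of the overlap between a predicted mask and its ground-truth mask to their union, typically averaged over classes in the multiclass setting. The sketch below is a minimal illustration of that standard computation in NumPy; it is not the thesis’s actual evaluation code, and the function name and skip-empty-class convention are assumptions for illustration only.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union across classes.

    pred, target: integer label maps of identical shape.
    Illustrative sketch only; the thesis's evaluation code is not shown.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both masks; exclude from the mean
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))
```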

Available for download on Friday, December 11, 2026
