Semester

Spring

Date of Graduation

2021

Document Type

Thesis

Degree Type

MS

College

Statler College of Engineering and Mineral Resources

Department

Lane Department of Computer Science and Electrical Engineering

Committee Chair

Xin Li

Committee Member

Natalia Schmid

Committee Member

Matthew Valenti

Abstract

The ability to determine the legitimacy of a person’s face in images and video can be important for many applications, ranging from social media to border security. From a biometrics perspective, altering one’s appearance to look like a target identity is a direct method of attack against the security of facial recognition systems. Defending against such attacks requires the ability to recognize the attacker as an identity separate from the target. Alternatively, a forensics perspective may view this as a forgery of digital media. Detecting such forgeries requires identifying artifacts not commonly seen in genuine media. This work examines two cases in which faces in digital media can be classified as real or fake and explores them from the perspectives of both the attacker and the defender.

First, we will explore the role of the defender by examining how deepfakes can be distinguished from legitimate videos. The most common form of deepfake is a video in which one person’s face has been swapped with another’s, sometimes referred to as a “face-swap.” These are generated using Generative Adversarial Networks (GANs) to produce realistic manipulated media with few artifacts noticeable to human observers. This work shows how facial expression data can be extracted from deepfake and legitimate videos and used to train a machine learning model to detect these forgeries.
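
To make the detection idea concrete, below is a minimal Python sketch, not the exact pipeline of this thesis: per-frame facial landmarks stand in for the extracted expression data, each video is reduced to simple temporal statistics, and an off-the-shelf classifier is trained on labeled real and fake videos. The helper names (video_expression_features, train_detector) and the choice of the face_recognition and scikit-learn libraries are illustrative assumptions.

    # Minimal sketch, not the exact pipeline from this thesis: landmark-based
    # expression features per frame, summarized per video, fed to a classifier.
    import cv2                      # pip install opencv-python
    import numpy as np
    import face_recognition         # pip install face_recognition
    from sklearn.ensemble import RandomForestClassifier

    def video_expression_features(path, frame_step=10):
        """Summarize facial motion in a video as one fixed-length feature vector."""
        cap = cv2.VideoCapture(path)
        per_frame, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % frame_step == 0:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                faces = face_recognition.face_landmarks(rgb)
                if faces:
                    # Centered landmark coordinates of the first detected face act as
                    # a crude stand-in for richer expression descriptors (e.g., action units).
                    pts = np.array([p for group in faces[0].values() for p in group],
                                   dtype=np.float32)
                    pts -= pts.mean(axis=0)
                    per_frame.append(pts.ravel())
            idx += 1
        cap.release()
        if not per_frame:
            return None
        per_frame = np.stack(per_frame)
        # Mean and standard deviation over time capture how the expression varies.
        return np.concatenate([per_frame.mean(axis=0), per_frame.std(axis=0)])

    def train_detector(labeled_videos):
        """labeled_videos: iterable of (path, label) pairs, label 1 = deepfake, 0 = real."""
        X, y = [], []
        for path, label in labeled_videos:
            feats = video_expression_features(path)
            if feats is not None:
                X.append(feats)
                y.append(label)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(np.array(X), np.array(y))
        return clf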

Second, we will explore the role of the attacker by examining a problem of increasing importance to border security. Face morphing is the process by which the facial features of two or more people are combined into a single image. We will examine how such morphs can be generated using GANs as well as traditional image-processing methods in tandem with machine learning models. Additionally, we will evaluate the effectiveness of the resulting morphs at fooling facial recognition systems.
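
As an illustration of the traditional (non-GAN) route, the following Python sketch morphs two faces by warping each toward the weighted average of their landmark positions and blending the results; the function names (landmarks_of, warp_to, morph) and the use of the face_recognition and scikit-image libraries are assumptions for illustration only.

    # Minimal landmark-based morphing sketch; GAN-based morph generation is not shown.
    # Assumes two same-sized, roughly aligned frontal face photos.
    import numpy as np
    import face_recognition                          # pip install face_recognition
    from skimage import io
    from skimage.transform import PiecewiseAffineTransform, warp

    def landmarks_of(image):
        """(x, y) landmark points of the first detected face as an (N, 2) array."""
        groups = face_recognition.face_landmarks(image)[0]
        return np.array([pt for pts in groups.values() for pt in pts], dtype=np.float64)

    def warp_to(image, src_pts, dst_pts):
        """Warp `image` so that `src_pts` land on `dst_pts` (piecewise affine)."""
        h, w = image.shape[:2]
        # Anchor the corners so the region outside the face warps smoothly too.
        corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], dtype=np.float64)
        tform = PiecewiseAffineTransform()
        # `warp` needs a map from output coordinates back to input coordinates,
        # so the transform is estimated in the dst -> src direction.
        tform.estimate(np.vstack([dst_pts, corners]), np.vstack([src_pts, corners]))
        return warp(image, tform, output_shape=(h, w))

    def morph(path_a, path_b, alpha=0.5):
        """Warp both faces to the averaged landmark shape, then blend the pixels."""
        img_a, img_b = io.imread(path_a), io.imread(path_b)
        pts_a, pts_b = landmarks_of(img_a), landmarks_of(img_b)
        mean_pts = (1 - alpha) * pts_a + alpha * pts_b
        warped_a = warp_to(img_a, pts_a, mean_pts)
        warped_b = warp_to(img_b, pts_b, mean_pts)
        return (1 - alpha) * warped_a + alpha * warped_b   # float image, values in [0, 1]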
