
Document Type: Article
Abstract
On October 30, 2023, the Biden Administration issued a sweeping executive order espousing a policy to advance the development and use of artificial intelligence (“AI”) while also establishing safeguards across the federal government. The Executive Order marked the U.S. government’s largest move to date toward the regulation of AI. The Order also represents the government’s latest effort to advance equity, privacy, and national security in the use of AI systems. Moreover, the Order comes at a time when governments around the world are wrestling with the impact of AI and its disruptive effect, not just on markets, but on society as a whole. The Administration’s actions also arrive at an inflection point in the era of AI and machine learning as a socially disruptive technology. AI has been used in many sectors for decades and is an enabler of almost all industries and facets of life: scientific research, medicine, education, manufacturing, logistics, transportation, defense, law enforcement, politics, advertising, art, culture, and more. However, the accessibility of big data sets and enhanced data-processing power over the last two decades has fueled the acceleration of the development and use of AI in both the public and private sectors. As such, the Order, along with other efforts within the Biden Administration, raises questions about the balance states should strike between supporting the human rights of individuals and avoiding stifling the technology and its benefits to societal advancement. In this Article, we argue that AI regulation should provide parameters that allow innovation to happen without sacrificing the rights of individuals or affected communities. To do so, we take a unique approach, examining the literature in business, science, and technology, and discussions of disruptive innovation theory within those fields.
Using this theory (and its intersection with a business and human rights paradigm), we conclude that while the Executive Order may lack teeth in its directives and in how they affect businesses, this is not a fatal flaw for the advancement of either the technology or its positive impact on human rights. Instead, this method of regulation can be turned into an advantage within the rapidly changing machine learning landscape. Specifically, the Executive Order signals to businesses and other allies which principles the government is choosing to prioritize at the intersection of AI and humanity. This signaling can, in turn, be used by legislators to craft a comprehensive and deliberate law that balances the various interests we believe should be weighed at this juncture.
Recommended Citation
Jena Martin & Ritu Narula, Balancing Interests: AI, Business & Human Rights, and the Legal Landscape in an Era of Disruption, 127 W. Va. L. Rev. 1 (2024).
Available at: https://researchrepository.wvu.edu/wvlr/vol127/iss1/4
Included in: Business Organizations Law Commons, Human Rights Law Commons, Science and Technology Law Commons