Establishing an Ethical Framework of Legal Accountability for Artificial Intelligence
Document Type
Conference Proceeding
Publication Date
1-1-2023
Abstract
The proliferation of generative AI and digital technology has brought these technologies into every aspect of our lives, from commonplace consumer goods to connected urban ecosystems and advanced workplaces. These emerging technologies inevitably evoke ethical quandaries that have ignited spirited public discourse. As generative AI burgeons, its capacity for self-evolution raises the prospect that AI systems may make unforeseeable decisions beyond the anticipations of even the most adroit and ethically minded developers. Companies face public censure, reputational erosion, and potential litigation for tort-related damages should their products inflict discernible harm upon consumers or society. An imminent ethical question emerges: can and should companies be held accountable for products designed to evolve?

Establishing a comprehensive legal framework governing liability for the unanticipated consequences of generative AI is imperative, and it must begin with an ethical understanding of design decisions, intellectual property rights, and the consequences of generative AI development for consumers and society. Given the sheer magnitude and pervasiveness of generative AI, it is critical to first determine responsible AI design standards to guide companies, developers, and adjudicators. The following overarching points should be considered:

(1) Responsible Design & Development: Responsible design standards for AI must be established to provide guidance in three key research domains: a) Reliability & Performance: Generative AI design must ensure consistency, minimize errors, and enhance reliability across applications and contexts. b) Control & Safety: Generative AI systems must be developed with sufficient safety measures, control mechanisms, and user autonomy; guidelines must be established to mitigate risks and ensure overall safety; and legal mechanisms must be identified to enforce related provisions under tort law. c) Privacy & Respect: Personal privacy, respect for diversity, and discrimination prevention must be considered to ensure that generative AI design aligns with societal values and ethical tenets.

(2) IP Accountability: In the epoch of generative AI, a swift revamping of IP laws is indispensable. In a global software marketplace, where systems can be pieced together from multiple sources, precise identification of the inventors and owners of each component within a final product, via licenses or patents, is crucial. Thus, IP policy must ensure transparency of ownership and development. Research inquiries within this framework also include: adapting extant IP laws to tackle the singular challenges of AI technologies; establishing legal mechanisms to safeguard equitable attribution and IP rights protection for AI-generated content and innovations; and discerning the circumstances in which the current IP holders of an AI system bear responsibility, as opposed to the original developers of the system or of its component parts.

(3) Dynamic AI Ethics & Accountability: Impose responsibility upon manufacturers for damages ensuing from evolving AI systems that adapt to users' requirements, guaranteeing the proper assignment of liability as the AI system progresses over time.

(4) End User Responsibility: Establish guidelines to determine when end users bear responsibility for harm caused by improper use of a generative AI system, relieving the company of liability in such cases.
Identifier
85192946066 (Scopus)
ISBN
9781713893592
Publication Title
29th Annual Americas Conference on Information Systems, AMCIS 2023
Recommended Citation
Eisenberg, David and Abhari, Kaveh, "Establishing an Ethical Framework of Legal Accountability for Artificial Intelligence" (2023). Faculty Publications. 2048.
https://digitalcommons.njit.edu/fac_pubs/2048