
The Sweeping Implications Of A Data-Driven Future



Writer: Annie Yu

Editor: Sam Teisch

Date: 04 November 2023


How advanced is too advanced? As the self-proclaimed most intelligent species, some humans feel an overwhelming pressure to keep any other entity from surpassing their rank. Artificial intelligence, in particular, has become the newest hot-button issue. Over the past few decades, it has crept into every industry with room for it: automobiles, fashion, agriculture, manufacturing, marketing, education, and even Hollywood. Its continued progression has created a critical need for regulation as major legal concerns have come to light.


When new technologies enter the healthcare system, their implications concern each and every one of us. AI-assisted learning can contribute efficiency and effectiveness to a laundry list of tasks: diagnosis, drug discovery, epidemiology, personalized care, and healthcare delivery [1]. Nevertheless, the US healthcare system cannot reap these benefits unless comprehensive regulation establishes standards of liability and confidentiality, along with transparency, bias mitigation, and cybersecurity.

Of course, there is the more sinister and dramatized prospect of “slaughterbots” taking over military endeavors. Instead of a human-like consciousness that premeditates killing based on experience, emotion, and inner motivation, today’s autonomous weapons systems use algorithms to identify targets and kill [2]. The absence of remorse and human judgment poses an enormous ethical problem that transcends domestic and foreign security, meaning that strict regulation and human intervention must be implemented for the safety of all citizens.


On a more localized scale, it seems that everybody has come into contact with AI-created media by now, consciously or not. The widespread accessibility of AI carries profound implications when it falls into the hands of those with malevolent intentions. In the past few years, cases of individuals having their personal data breached or manipulated have become more common. The FBI even released a warning stating that people should be more wary of their online data, as it can be doctored for sextortion or harassment [3].


As far as the nation’s well-being is concerned, the World Health Organization has outlined six broad areas in which AI regulation is needed: transparency and documentation, risk management, validation of data, assurance of data quality, privacy and data protection, and collaboration between technology and professionals [4]. Specifically, more resources need to be allocated to filling the holes in AI regulation rather than to increasing AI’s efficiency without restriction. For instance, bias in healthcare can mean life or death for a patient, which is why machine-learning algorithms in healthcare need to be trained not only to see patterns in data but also to recognize biases in those patterns and exclude them from treatment decisions. Because AI builds on historical data, a primary concern in this industry is protecting marginalized patients from yet another layer of inequality imposed by an additional source of power. Thus, the proposals made by the EU, FDA, and HHS need to be prioritized and administered in the near future so that the nation isn’t stuck playing catch-up on issues that can be addressed today.


Amitai and Oren Etzioni’s approach to AI elaborates upon these concerns further. They argue that the overarching goal is to keep humans as the ultimate authority through oversight systems, or AI Guardians. In terms of national defense, removing human judgment from the task of killing is clearly problematic, both ethically and practically. The two scholars suggest “a whole new AI development” that goes back to the basics: a tiered decision-making system [5]. This imposes a hierarchy in the processing of threats so that any actions carried out fall within specified parameters. This solution points to creating a completely new program, rather than revising existing algorithms, to consolidate power among human supervisors. Whereas current AI programs were created with the singular purpose of “increasing the efficiency of the machines they guide,” the new ones would assign oversight to humans [5]. Nations may also look to treaties as a way to ban such autonomous weapons systems internationally. However, arms control treaties are notorious for being ineffective and failing to actually curb weapons usage [6]. So, while international agreements can be considered later, approaches like the Etzionis’ need to be implemented now to prevent seriously unethical war crimes. The matter becomes all the more pressing considering that the traceability and transparency of big-data analysis will soon become more hidden and intricate, leaving decision-makers in the dark about how the systems actually work [5].


While the public is largely unaware of the specifics of military missions, many ordinary citizens know all too well the dangers of cyberattacks. In a U.S. Senate hearing titled “The Need for Transparency in AI,” Sam Gregory, the Executive Director of WITNESS, stressed the urgency of crafting legislative solutions for sexual crimes and invasions of privacy carried out with AI [7]. US Senator Cantwell cited scam phone calls, deepfakes, and misinformation as examples of synthetically created media violations. The solution to an issue never before present in American history can’t be easy. However, Dr. Ramayya Krishnan and others in the room encouraged a focus on increasing disclosure between consumers and the technology they use, traceable watermarking technologies to distinguish human-generated from AI-generated media, and mandatory risk management programs followed by impact assessments to keep bias and discrimination to a minimum [7]. Not only will this framework protect consumers, it will help nourish a more symbiotic relationship between users and the increasingly advanced society they inhabit.


Some may argue: “Why limit technology’s potential for innovation?” The far-reaching benefits of the technology are undeniable, and skeptics may point to the fact that Congress can’t even come to a consensus on current data privacy laws. Stanford’s 2016 report from the One Hundred Year Study on Artificial Intelligence noted that overly aggressive regulation yielded counterproductive results such as a “compliance mentality,” which discouraged both “innovation and robust privacy protections” within corporations [8]. Multinational businesses have mastered avoiding fines and punishments rather than adapting to contemporary calls for change. However, the same study pointed to the success of combining a tough approach with broader goals, under which private businesses were given more discretion on how to reach those ambitions. This objection therefore highlights the importance of balance in drafting regulations. If regulation did not exist, the balance of power between humans and technology would tilt toward AI; yet heavy-handed human intervention in the advancement of technology can only lead to an uncreative and uninspired industry. The objective for AI regulation is a fine middle ground.


Mitigating these issues seems impossible when AI developments accelerate every industry they touch. However, carefully introduced regulations can provide a foundation for a promising technological future that minimizes harm to humans. The benefits AI offers, and will continue to offer, the planet are massive, trillions-of-dollars-massive, but policymakers must handle these additions with intention and caution in order to make the most of such a powerful tool.

 

References

Homepage - Autonomous Weapons Systems. (n.d.). Retrieved October 30, 2023.


WITNESS | Human Rights Video. (n.d.). Retrieved October 30, 2023.


Díaz, R. (2022, October 2). [Video]. YouTube. Retrieved October 30, 2023.


Fuhrmann, M., & Lupu, Y. (n.d.). Do Arms Control Treaties Work? Assessing the Effectiveness of the Nuclear Nonproliferation Treaty. Retrieved October 30, 2023, from https://yonatanlupu.com/Fuhrmann%20Lupu%20NPT.pdf


Enikeev, D. (2022, March 14). Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? NCBI. Retrieved October 30, 2023, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8963864/#B25


Etzioni, A., & Etzioni, O. (n.d.). Should Artificial Intelligence Be Regulated? Issues in Science and Technology. Retrieved October 30, 2023, from https://issues.org/perspective-artificial-intelligence-regulated/


European Commission. (2021, April 21). Proposal for a Regulation of the European Parliament, COM(2021) 206 final, 2021/0106 (COD). EUR-Lex. Retrieved October 30, 2023, from https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF#page=4


FDA Releases Artificial Intelligence/Machine Learning Action Plan. (2021, January 12). FDA.


Guidelines for the Future | One Hundred Year Study on Artificial Intelligence (AI100). (n.d.). Retrieved October 30, 2023, from https://ai100.stanford.edu/2016-report/section-iii-prospects-and-recommendations-public-policy/ai-policy-now-and-future-0


Lu, Y. (2023, June 14). Generative A.I. Can Add $4.4 Trillion in Value to Global Economy, Study Says. The New York Times. Retrieved October 30, 2023, from https://www.nytimes.com/2023/06/14/technology/generative-ai-global-economy.html


McGrail, S. (2021, January 29). HHS Unveils Strategy for Artificial Intelligence in Healthcare.


The Need for Transparency in Artificial Intelligence. (2023, September 12). Senate Commerce Committee. Retrieved October 30, 2023, from https://www.commerce.senate.gov/2023/9/the-need-for-transparency-in-artificial-intelligence


Policy and Legal Considerations | One Hundred Year Study on Artificial Intelligence (AI100). (n.d.). Retrieved October 30, 2023, from https://ai100.stanford.edu/2016-report/section-iii-prospects-and-recommendations-public-policy/ai-policy-now-and-future/policy


Satter, R. (2023, June 7). FBI says artificial intelligence being used for 'sextortion' and harassment. Reuters.


WHO outlines considerations for regulation of artificial intelligence for health. (2023). World Health Organization.


WITNESS | Sam Gregory. (n.d.). Witness.org. Retrieved October 30, 2023.





