Legal Limits of AI

January 8, 2024

What are the legal limits of AI, and what is their impact?

Celebrated comedian Sarah Silverman recently sued Meta for using her copyrighted books to train generative AI. However, her lawsuit has encountered serious pushback. Last Monday, U.S. District Judge Vince Chhabria rejected the idea that Meta’s AI system is itself an infringing derivative work. “This is nonsensical,” he wrote in the order. “There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs’ books.” Silverman’s lawsuit has raised questions about the legal limits of AI, and this blog addresses AI’s impact on:

  • Intellectual property rights  
  • Data privacy  
  • Fairness 
  • Cybersecurity  

Intellectual property rights 

While AI itself is clearly the property of its developer, when generative AI is used to create original content, who owns the rights to that content?

Federal courts have repeatedly affirmed that “human authorship is an essential part of a valid copyright claim,” reasoning that only humans need copyright as an incentive to create their works. Applying the same principle, the U.S. Copyright Office denied protection for the AI-generated images in a graphic novel, even though the author had written the original text. If AI-generated works cannot be copyrighted, then by default they fall into the public domain.

Data privacy  

AI requires huge amounts of data; how much depends on the complexity of the problem and of the learning algorithms. For large-scale generative AI, that data can often be private or personal in nature. All data collection should follow the parameters set by regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

At Movius, we are deeply committed to privacy, and that extends to our AI tools. MultiLine™ by Movius never touches your personal communication. All of the data used by Movius AI solutions like TimeWize, CLARE, and Arya remains encrypted and secure.

Fairness

The use of AI in law and policy must be fair, transparent, and ethical. Here are some of the ethical concerns associated with the legal limits of AI:

  1. Systemically biased training data can perpetuate historical inequality. For example, if an AI that analyzes resumes or loan applications is trained on biased data, it can reproduce the biases of that data.
  2. Ethically ambiguous uses of AI are rising in prominence. One example is facial recognition software. While facial recognition could expedite airplane boarding, it could also be used for less benign purposes. In 2020, IBM CEO Arvind Krishna announced that IBM would sunset its general-purpose facial recognition research and development, advocating instead for a “precision regulation” approach to controlling the technology.
  3. Humans anthropomorphize AI. The movie Her depicted a man falling in love with a mobile OS; encouraging this kind of attachment is called deceptive anthropomorphism. It needs to be made clear to everyone who interacts with AI that it is not sentient and cannot love them back, despite its human-like responses. When ChatGPT is told, “I love you,” it responds, “I appreciate the sentiment, but it’s important to note that I’m just a computer program created by OpenAI, so I don’t have feelings.”
  4. Humans abuse AI and chatbots. Microsoft’s chatbot Ruuh “received 1,239,446 messages, of which 94,392 are abusive and insulting at some level.” Fortunately, Ruuh’s creators equipped the chatbot with warning messages, and it blocks users who disregard the warnings.

Cybersecurity  

AI can be used to launch large-scale malware and phishing attacks. Because open-source AI is very good at writing emails, it can automate email-based phishing attacks with superhuman speed and scale.

AI can also be used to defend against cyberattacks. Examples include IBM QRadar, which Movius uses, along with Guardium and many other cybersecurity solutions. AI-powered security solutions accelerate threat detection, expedite responses, and protect user identity.

In an experimental project conducted by researchers at Hyas, an AI-generated malware called BlackMamba bypassed an industry-leading Endpoint Detection and Response (EDR) solution. “While the BlackMamba malware was only tested as a proof-of-concept and does not live in the wild, its existence does mean that the threat landscape for individuals and for organizations will be unequivocally changed by the use of AI.”

AI at Movius  

We hope this article has given you an overview of the nascent ethical and legal concerns surrounding AI. AI is diverse in the scope of its applications. At Movius, we offer three AI products: TimeWize, CLARE, and Arya.

  1. TimeWize is an AI-driven assistant that helps manage your time by reviewing your upcoming calendar blocks and recommending ways to make your business day more efficient.
  2. CLARE analyzes the sentiment of a conversation, increasing customer satisfaction and Net Promoter Scores (NPS) by revealing insights into the voice of the customer.
  3. Arya works with our product MultiLine, which adds a company-owned business line to any smartphone. Arya analyzes and troubleshoots call quality by processing about a million calls’ worth of telemetry every day.

Our successes in these cutting-edge AI solutions have prompted us to do a little research about AI in law, weighing in on the global conversation.  

 

Jane Marie

 
