Image via Yahoo
***
In our rapidly evolving technological landscape, artificial intelligence (AI) has emerged as one of the most controversial tools of the modern age. An estimated 55% of Americans report using artificial intelligence tools on a regular basis. When picturing modern AI applications, uses like customer service chatbots or social media algorithms come to mind first, as they have become seamlessly integrated into everyday life. AI, however, appears in some of the most unexpected places, ranging from agricultural fields to finance applications. Although AI is a nonhuman technology, it relies heavily on the accumulated information and culture that humans have developed throughout history. Artificial intelligence is bringing human creativity and innovation to an end, evident in its unethical disregard for intellectual property.
In its simplest definition, artificial intelligence is a technology that simulates human learning through computers and machines. Additionally, AI comes in a variety of forms. A few examples include machine learning, computer vision, and perhaps the most controversial, generative AI.
Generative AI, or GAI, refers to a form of artificial intelligence that can generate text, images, and even videos based on a collection of data. This has completely reshaped the way content is produced, as bots are now able to both ideate new content and produce the content itself. Latanya Sweeney, Professor of the Practice of Government and Technology at the Harvard Kennedy School, predicts that in the future, “90% of content will no longer be generated by humans. It will be generated by bots.” The most well-known and accessible GAI platform is ChatGPT, launched by AI research company OpenAI in 2022. The site has an estimated 100 million weekly users, with its biggest demographic being people ages 12 to 27. Other major generative AI platforms include Claude by US-based AI startup Anthropic, Gemini by Google, and Copilot by Microsoft. To the average user, it may seem that these technologies can produce information about virtually anything in little to no time. This information, however, originates from existing data.
GAI developers feed libraries of data, including books, journals, websites, and more, into their AI models. Bots then use this information to produce what appear to be well-thought-out responses to users. Though AI seems to operate independently of humans, it relies on them entirely. Without the centuries of human-made work fed into these platforms, it would be nearly impossible for GAI developers to build adequate chatbots. The process of feeding this information into AI programs, however, is unethical and offends notions of intellectual property protection.
Thomson Reuters Corporation is a Canadian-American information technology conglomerate that provides legal and economic technological support to businesses. In May 2020, the company brought a lawsuit against ROSS Intelligence for training its AI platform with content from Thomson Reuters’ research platform, Westlaw. Westlaw acts as a database for legal professionals. ROSS Intelligence was a direct competitor to Westlaw, acting as a source of legal research; its AI system in particular was programmed to answer legal questions. ROSS fed documents derived from Westlaw’s database into its system, prompting Thomson Reuters to sue for copyright infringement. A key question in the case was “transformativeness”: in internet-related works, a later work is considered transformative if it provides the public with a previously unavailable benefit. In February 2025, Judge Stephanos Bibas granted partial summary judgment in favor of Thomson Reuters, finding that ROSS’s use of the Westlaw material was not transformative and rejecting its fair use defense; ROSS had simply built a competing legal research tool rather than offering the public something new. Thomson Reuters Enterprise Centre GmbH et al. v. ROSS Intelligence Inc. was the first of many major lawsuits regarding AI and copyright law.
In December 2023, The New York Times filed a complaint against OpenAI and Microsoft. OpenAI has developed some of the most popular AI platforms, including DALL-E, Sora, and ChatGPT, while Microsoft launched the AI chatbot Copilot. Like Thomson Reuters, The New York Times alleged that its copyrighted content was used to train the above-named AI large language models, and it showed that ChatGPT and Copilot would recite NYT content nearly verbatim when prompted by users. Because intellectual property law has not yet caught up with the rapidly evolving technological landscape, these cases are extremely difficult to navigate. This case has not yet been decided and is one of several active lawsuits between the people behind creative works and artificial intelligence developers that will set precedent for future cases.
Thomson Reuters v. ROSS and NYT v. OpenAI make the ethical tension between artificial intelligence and protected property clear. Inadequate governance of artificial intelligence limits human creativity. Human creators are drowned out by artificially generated content, and some are being pushed out of the market completely. Since 2000, automation is estimated to have eliminated 1.7 million manufacturing jobs, and generative AI now threatens creative work in a similar way. Some human creators may simply choose to opt out of the market: the creation of new content is no longer rewarding when there is no efficient way to protect one’s own work.
A well-known example of this is the recent internet sensation of AI-generated music. TikTok is a social media platform on which users create short-form content set to music of their choice. In January 2024, Universal Music Group chose to remove all of its artists’ music from the app. This stemmed in part from TikTok’s failure to protect UMG artists from AI-generated songs on its platform that mimicked the voices of some of UMG’s biggest artists. Because of the unregulated nature of artificial intelligence, the creative work of countless artists has been compromised.
The current state of AI is only the beginning of years of innovation and advancement to come. Carolyn Blais, Communications and Program Manager at the MIT School of Engineering, notes predictions that AI may operate with the same level of intelligence as humans within the next 45 years. Many believe that AI is the future, but some theorize that it may be the end, as it has slowed the development of culture and creativity. A lack of protection for intellectual property in the face of artificial intelligence is detrimental not only to the creative community but to society at large. Creatives will be less inclined to continue producing work if they are not acknowledged for their efforts, and this halt in creative work will stall the continuation of cultural development.
With any new venture, potential consequences must be considered. Artificial intelligence is here to stay but must be managed properly to avoid potentially harmful outcomes. Finding the balance between new technology and the preservation of creativity and culture is imperative if we, as a society, want to continue making new technologies safely and productively.
***
This article was edited by Herman Singh and Cameron Ma.
