Image via Christopher Thornock
***
“Of course! What do you need help with?”
This considerate attitude is familiar to many college students today. Ask ChatGPT a question, and it responds kindly, willing to assist with any issue the user might encounter.
Artificial intelligence, or more specifically, large language models (LLMs), has made its way into higher education. To what extent should this technology be harnessed to promote further education, and to what extent should it be limited to protect academic integrity?
First, it is important to define the difference between artificial intelligence (AI), generative AI, and large language models (LLMs). AI is an umbrella term that covers many different types of models and programs. These systems can perform tasks that usually require human intelligence, such as speech recognition, natural language processing (NLP), text generation and translation, video, sound, and image generation, decision-making, planning, and more.
Generative AI refers to models that can generate new data similar to the data on which they were trained. It is a more specific branch of AI, able to produce human-like text, answer questions, write essays, and create stories. In other words, it is able to emulate human creativity.
LLMs are a specific application of generative AI, designed for tasks revolving around natural language generation and comprehension. For tasks like summarizing college-level texts or writing a paper for a university-level course, LLMs can take whatever prompt is input and generate a response.
Currently, as seen in a study conducted by the Digital Education Council, 86% of bachelor's, master's, and doctoral students stated that they have used artificial intelligence in their studies. A staggering 24% of the students reported that they use AI daily, and 54% said they use it daily or weekly.
On the other hand, 58% of students disclosed that they did not feel they had sufficient AI knowledge and skills, and 48% added that they felt inadequately prepared for an AI-enabled workforce.
Clearly, students are aware of the power that AI holds, but many feel their schools have neither implemented it effectively nor prepared them to use it. Universities and colleges are struggling to keep up with the rapidly changing technologies of the modern day. Some higher education institutions are trying to use AI to analyze student data in an effort to better tailor the learning experience to individual students' needs.
However, as pointed out by the Assistant Dean for Programs and Assessment at Rutgers University, Sharon Stoerger, there are many concerns surrounding the ethics of the widespread use of generative AI and models like ChatGPT in college. Not only are there biases in the data and algorithms that these models are based on, resulting in inequalities in education, but the use of these technologies can limit higher-order thinking in students, restricting the future generation’s ability to solve problems and think creatively and critically.
Additionally, generative AI has been trained on content taken from creators without their consent. A response generated by AI can therefore be traced back to the works of writers and artists who have uploaded their work anywhere on the internet, which raises plagiarism concerns.
Not only that, but the computational power used to run AI systems has significant environmental impacts, including high energy consumption and increased carbon emissions. The data centers where AI models are trained use large amounts of water for cooling, which is especially important to consider at a time when parts of America are plagued by droughts and wildfires.
It is likely that there will be courses in the future instructing students on how to apply generative AI and LLMs in their work. Within the next decade, AI and big data specialists are predicted to be among the most common careers in the country. Clearly, these skills must be taught to students.
A chief principle in developing these courses should be teaching students how to use this technology ethically while upholding academic integrity. These courses should show students how to use AI to further their careers and enhance their own skills. They should also educate students on the responsible use of AI, as well as how to recognize when it has been used. Perhaps most importantly, these courses should teach students to practice restraint when using these tools and instill confidence in their own skills and abilities.
In the end, it will be difficult to strike a balance between educating people on the strengths of AI and preserving the human value in creative works. But maintaining students' relationships with their professors and peers while still advocating for new technological skills will be incredibly important, now more than ever. It is up to all of us to preserve human ingenuity in times when it is so often undervalued.
As ChatGPT concludes in all of its responses: “Let me know if I can help with anything else!”
***
This article was edited by Emily Caro.