Concerned about ChatGPT, universities are beginning to review teaching methods

Antony Aumann, a professor of philosophy at Northern Michigan University, was grading student essays for his class on world religions last month when he came across “by far the best text in the class.” The work analyzed the morality of burqa bans in clean paragraphs, with apt examples and rigorous arguments. Aumann was immediately suspicious.

He called the student in to ask whether he had written the essay himself. The student admitted that he had used ChatGPT, a chatbot that delivers information, explains concepts and generates ideas in simple sentences — and that, in this case, it had written his paper.

Startled by the discovery, Aumann decided to change the way essays in his classes are written this semester. He intends to require students to write first drafts of texts in the classroom, using browsers that monitor and limit computer activity.

In later drafts, students will have to explain each revision. Aumann, who is considering giving up essays altogether in future semesters, also plans to bring ChatGPT into his classes and ask students to evaluate the chatbot’s responses. “What’s going to happen in the classroom is no longer going to be ‘Here are some questions, let’s talk this over between us humans,’” he said, but “something like ‘And what does this alien robot think of the question?’”

Across the country, professors like Aumann, department heads and university administrators are beginning to overhaul teaching practices in response to ChatGPT, which could massively disrupt teaching and learning. Some professors are redesigning their courses entirely, making changes that include more oral exams, more group work and more assignments that require handwriting rather than typing.

The initiatives are part of a real-time effort to grapple with a new wave of technology known as generative artificial intelligence. Launched in November by the artificial intelligence lab OpenAI, ChatGPT is at the forefront of this wave. In response to short prompts, the chatbot generates text that is surprisingly articulate and nuanced, so people are using it to write love letters, poetry, fan fiction — and their schoolwork.

The tool is causing confusion at some high schools, where teachers and administrators are struggling to tell whether students are using the chatbot to do their homework. To prevent cheating, some public school systems, including those in New York and Seattle, have banned the tool on their Wi-Fi networks and school computers. But students have had little trouble getting around the bans and accessing ChatGPT.

In higher education, colleges and universities have been reluctant to ban the AI tool. Administrators doubt that a ban would work, and they do not want to infringe on academic freedom. Instead, instructors are changing how they teach.

“We’re trying to establish general policies that reinforce the instructor’s authority to run a class,” rather than going after specific methods of cheating, said Joe Glover, provost of the University of Florida. “This will not be the last innovation we have to deal with.”

This is especially true as generative AI is still in its infancy. OpenAI plans to soon release another tool, GPT-4, which will be better than previous versions at generating text.

Google has built a rival chatbot, LaMDA, and Microsoft is discussing a $10 billion investment in OpenAI. Some Silicon Valley startups, including Stability AI and Character.AI, are also working on generative AI tools.

An OpenAI representative said the company recognizes that its programs could be used to mislead people and is developing technology to help identify text generated by ChatGPT.

ChatGPT has jumped to the top of the agenda at many universities. Administrators are forming task forces and hosting institution-wide discussions to decide how to respond to the tool, with much of the guidance focused on adapting to the technology.

At institutions such as George Washington University in Washington, Rutgers University in New Brunswick, New Jersey, and Appalachian State University in Boone, North Carolina, professors are phasing out take-home, open-book assignments, which became the dominant assessment method during the pandemic but now seem vulnerable to chatbots. They are opting instead for in-class assignments, handwritten papers and group work, as well as oral exams.

No more prompts like “write five pages on topic X.” Instead, some professors are crafting questions they hope will be too complicated for chatbots to answer and asking students to write about their own lives and current events.

Sid Dobrin, chair of the English department at the University of Florida, said that “students use ChatGPT to plagiarize because the assignments can be plagiarized.” Frederick Luis Aldama, the humanities chair at the University of Texas at Austin, said he intends to teach newer or more niche texts that ChatGPT may have less information about. Instead of “A Midsummer Night’s Dream,” for example, he might assign William Shakespeare’s early sonnets.

For him, the chatbot could motivate “people who are interested in primary, canonical texts to get out of their comfort zone and seek out things that are not online.”

If the new methods fall short of preventing plagiarism, Aldama and other professors said they intend to adopt stricter expectations and grading criteria. It is no longer enough for an essay to have a thesis, an introduction, supporting paragraphs and a conclusion. “We have to up our game,” Aldama said. “The imagination, creativity and innovative analysis that normally earn an A must now be present in the work that earns a B.”

Universities also want to educate students about the new tools. The University at Buffalo in New York and Furman University in Greenville, South Carolina, plan to embed a discussion of AI tools in required courses that introduce first-year students to concepts such as academic integrity. “We need to build in a scenario so the students can see a concrete example,” said Kelly Ahuna, Buffalo’s director of academic integrity. “Rather than spotting problems when they arise, we want to prevent them from occurring.”

Other universities are trying to set limits on the use of AI. Washington University in St. Louis and the University of Vermont in Burlington are revising their academic integrity policies to include generative AI in their definitions of plagiarism.

John Dyer, vice president for admissions and educational technologies at Dallas Theological Seminary, said the language used in the seminary’s honor code “already feels a little archaic.” He intends to update the definition of plagiarism to include “using text written by a generative system and passing it off as one’s own (e.g., entering a prompt into an artificial intelligence tool and using the output in a paper).”

Abuse of AI tools is unlikely to go away, which is why some professors and universities said they intend to use detectors to root out the activity. Turnitin, the plagiarism detection service, said it would incorporate more features for identifying AI-generated text, including from ChatGPT, this year. More than 6,000 educators from Harvard, Yale, the University of Rhode Island and other universities have signed up to use GPTZero, a program that promises to quickly detect AI-generated text, according to Edward Tian, the program’s creator and a senior at Princeton University.

Some students find AI tools helpful for learning. Lizzie Shackney, 27, who studies law and design at the University of Pennsylvania, started using ChatGPT to brainstorm ideas for research papers and to debug coding problem sets. “There are subjects where they want you to share and not do unnecessary work,” she said of her computer science and statistics classes. “Where my brain is useful is in understanding the meaning of the code.”

But she has misgivings. ChatGPT, she said, sometimes explains ideas incorrectly and misquotes sources. The University of Pennsylvania has not adopted any rules for the tool, so she does not want to rely on ChatGPT in case the university bans it or considers its use to be cheating.

Other students have no such concerns, sharing on forums like Reddit that they have submitted assignments written and solved by ChatGPT — and sometimes have done so for fellow students as well. On TikTok, the hashtag #chatgpt has been viewed more than 578 million times, with users sharing videos of the tool writing academic papers and solving coding problems. One video shows a student copying and pasting a multiple-choice exam into the tool, with the caption: “I don’t know about you, but I’m just going to let ChatGPT take my final exams. Have fun studying!”