How Might St. John’s College Fare in the Age of AI?

September 22, 2023 | By Benjamin Scott (SF25)

The public arrival of tools like OpenAI’s ChatGPT has reshaped the way we think about education. While many institutions are struggling to keep up with AI technology and to distinguish its output from their students’ own, others, like St. John’s, see an opportunity to change higher education for the better.

Could AI technology increase the importance of educational models built on critical thinking and conversation?

The fact that education has already been so affected by AI signals a vast set of changes on the horizon, since OpenAI’s large language model is still considered merely a research preview in its testing phase. Still, allowing the public to interact freely with such a tool, and to watch it improve through regular updates, lets these societal changes unfold gradually. With this rare opportunity to witness the changes in a relatively controlled manner, we can begin to predict the pivots that we, in turn, will need to make to adapt to a world restructured by this powerful new tool. As far as education is concerned, it seems reasonable to predict that the Great Books Program at St. John’s, and other programs like it, will prove highly valuable in light of future AI development.

The essay- and presentation-generating abilities of ChatGPT are already well known and pose significant problems for educators. Modeled on GPT-3 and GPT-3.5, ChatGPT has been used by students for translations, essay writing, presentation generation, and creative forms of writing like poetry and fiction. Recent announcements about GPT-4 have boasted increases in the amount of text it can generate, allowing for longer-form works; improved handling of multilingual text; faster text generation overall; advances in image identification; and significant improvements on exams designed for humans (moving, for example, from roughly the bottom 10th percentile to the 90th on the Uniform Bar Exam). “Solutions” have been suggested, such as OpenAI watermarking AI-generated text or introducing tools that analyze and flag generated text. Still, both approaches will always have workarounds, and there will certainly be an arms race between programs that detect generated text and programs that camouflage it.

This is bad news for most universities, and the proposed fixes of watermarking or detecting generated text with another program reveal a desire for a solution that leaves the fundamental structure and methods of modern pedagogy intact. However, the number of students already raising their marks with the research preview of ChatGPT, and across so many different fields of study, clearly indicates that this is a problem striking the very core of education.

Large language models like GPT-4 are fundamentally dynamic data sets. They are trained on large amounts of text and, on that basis, can take in a user’s prompt and reply with a statistically distilled body of text that mimics the style and content relevant to the given topic. Now, humans allegedly do not work this way: they reflect, make meaning, and relate to their interlocutors in order to respond to prompts. But it must be admitted that a style of education in which students take in textbooks and lectures and then produce a body of text or an exam to prove how well they absorbed the material places the student uncomfortably close to resembling a dynamic data set.

Now that we have a technology that fills this role precisely, we cannot, at present, differentiate between the average student and ChatGPT. As a result, the validity and value of many college degrees are thrown into question: how can a degree establish the credibility of a scholar if it is now entirely possible that its holder read and wrote nothing? What is the purpose of a degree based on memorizing and reciting facts when we have a tool that can do both better than humans, without having to specialize in any one field?

This seems, at first, like a tragic situation, until one considers how it could fundamentally improve the way we learn. To rescue higher education from losing the value of all degrees earned from 2024 onward, its structure must be fundamentally changed so that it produces graduates whose learning is verified and who can work in ways, and deliver value, that ChatGPT cannot. In the post-pandemic world, online education has only grown, creating a passive mode of learning that mostly consists of reading PDF textbooks, attending online lectures, communicating over video chat and email, and submitting assignments through online portals. Enter ChatGPT, and we see how this lack of human-to-human interaction can be exploited to pass off forged academic work undetected.

Luckily, the changes we must make to education need not be anything radically new or unheard-of. Colleges already exist with models more resistant to the corrosive effects of tools like ChatGPT. St. John’s College, in Santa Fe and Annapolis, follows a discussion-based model in which students first read primary texts of the Western liberal arts tradition and then come to class every day for face-to-face discussion. A conversational classroom carries two great benefits for the current moment: first, it encourages students to think critically and dynamically about the material rather than memorize facts by rote; second, it demands that students demonstrate what they know in real time.

The Program also has a writing dimension, but this, too, is protected by the expectation that students offer oral defenses of their papers to tutors in person. Instead of submitting a paper online and receiving marks without further interaction, students must show familiarity with what they wrote and with the material they wrote about. Funnily enough, the most effective way around the problem of AI-generated text seems to be simple: universities only need to talk with their students more and assign fewer standardized exams and essays.

The discussion-based model not only allows students’ knowledge to be better verified; as noted above, it also lets students develop skills of critical thinking rather than rote memorization. Memory is an important faculty, but because we have designed more and more technologies for offloading it, a student’s skill in this regard may matter less and less. Facts can be retrieved with speed and ease from digital dictionaries, encyclopedias, and forums, and now the apex of these tools is being developed. But when knowledge is merely stored as data and recited, little progress is made and few new ideas are generated. Focusing on synthesizing, analyzing, and using that data to produce new thoughts and ideas seems far more valuable.

In this way, St. John’s College has a clear advantage over other colleges and universities. This bodes well for St. John’s, and if other institutions want to stay ahead of the curve, they have a model to imitate. Those that refuse to adapt, by turning online classes into seminar discussions and emphasizing critical thinking over rote memorization, may find themselves handing out increasingly empty degrees; with their value and institutional integrity called into question, they may find it increasingly difficult to survive. Instead of hoping that AI companies and third parties will save universities with watermarks or detection software, it would be far wiser for universities to develop their own solutions and change their structure, preserving their reputational value and educational quality for years to come.