ChatGPT
Forget about whether ChatGPT can work. The question is: should it be doing your work? Image Credit: Shutterstock

Imagine a tool that could take your unplanned, incoherent, and unorganized thoughts and translate them into meticulously researched, eloquent writing. Too good to be true, right? Well, that’s because it is.

ChatGPT was created by OpenAI, a San Francisco-based company, as a large language model (LLM): a system trained on extensive volumes of data that learns to accurately predict which word should follow in a given sentence. The first version of GPT (Generative Pre-trained Transformer) was released in 2018, built from components including embedding layers, positional encoding, a self-attention mechanism, and a feed-forward neural network.
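To give a feel for what one of those components does, here is a toy sketch of the self-attention step, written in plain NumPy with made-up tiny dimensions. This is an illustrative simplification, not OpenAI’s actual code: real GPT models stack many such layers, use multiple attention heads, and operate at vastly larger sizes.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax: turns raw scores into weights that sum to 1."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # how much each token attends to the others
    weights = softmax(scores, axis=-1)        # each row is a probability distribution
    return weights @ V                        # each token becomes a weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                       # toy sizes; GPT-3 uses far larger ones
X = rng.normal(size=(seq_len, d_model))       # stand-in for embeddings + positional encoding
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                              # one updated vector per input token
```

The key idea is that every token’s new representation depends on every other token in the sequence, which is what lets the model use context when predicting the next word.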

GPT-2 quietly followed a year later, but it was the release of GPT-3 that continues to cause quite the stir. According to Stanford University, GPT-3 is a hundred times bigger than its predecessor, comprises 175 billion parameters, and is trained on 570 gigabytes of text. In layman’s terms, it can condense years’ worth of methodical learning into immediate, concise, and targeted responses.

In Gen Z lingo, it is the GOAT of all large language models.

All in a tizzy over chats

Over the past couple of months, this advanced AI-powered chatbot has had computer programmers, researchers, entrepreneurs, and even high school students queuing up to test the efficacy of its natural language processing. Its revolutionary ability to interact with the user and deliver answers in conversational dialogue form is being used to complete an array of detailed tasks, ranging from editing software programs to explaining scientific concepts.

And, on a more relatable front: writing college essays.

Let’s get real. Even the finest of human specimens can be persuaded to take the easy way out, especially when the solution is right in front of their eyes and can be executed with just a few clicks on a keyboard. This shortcut to success becomes especially appealing during times of high stress, such as college application season.

Anyone can access this chatbot free of cost just by visiting OpenAI’s website. The homepage, bathed in hues of calming lavender, invites the user to test out its skills and provide any necessary feedback. No points for guessing what happens next.

After a brief message assuring you that the connection is secure, you wait with bated breath to see what form of literary magic will appear.

‘ChatGPT is at capacity right now.’

Extending the conversation

Clearly, there are a number of other curious parties besides you. As a stopgap measure to keep your interest, the site then provides a little taster of the tool’s capabilities.

Enter the sample prompt: ‘Explain the status of ChatGPT as a pirate.’

It’s like watching an artist paint a masterpiece on a blank canvas. The words just keep coming forth in perfect harmony. Beginning with an appropriate ‘Ahoy mateys’, the short paragraph is sprinkled with just the right mention of ‘scurvy dogs’ and ‘swashbucklin’ scallywags’ to make even the likes of Captain Jack Sparrow feel at home. Yet, something feels amiss. There is a lack of personal touch.

Therein lies the major problem with chatbots. Humans are built with an emotional quotient that is difficult for any other species, let alone AI, to imitate. When a college requires a student to share their identity, recount an obstacle, or discuss an accomplishment, how is a machine going to convey that particular student’s feelings and emotions accurately?

Passed the exam, but…

Recently, many news channels excitedly broadcast how ChatGPT passed an MBA exam in a Wharton study. However, in an article in ‘The Independent’, the Ivy League professor in charge of the project also emphasized the model’s flaws in its requested task, specifically how it ‘makes surprising mistakes in relatively simple calculations at the level of 6th Grade Math’.

Although it is possible for humans to detect errors in AI-generated content, educational institutions and research organizations are realizing the need for AI detection software. The American company Hugging Face has already developed a tool, the OpenAI Detector, to verify whether a text was written by a human or generated with an AI-based tool.

More will soon follow, and such tools are likely to be integrated into the admissions process by the next application season.

We are in the midst of an AI boom that is simplifying significant complications as we speak. At the same time, however, it is also diminishing the importance of diversity in language and the representation of real lived experiences. Bearing this in mind, maybe we can use ChatGPT for inspiration, but let’s not use it for application.

A ‘personal’ statement

The vast majority of leading US universities, including all institutions in the Ivy League, require students to draft a 650-word personal statement. It should be just that - personal. They also require students to answer additional short-answer questions and write shorter essays exploring the applicant’s passions, interests, and motivations.

Using ChatGPT to draft these responses makes it painfully clear that relying on the service for this purpose is not only unethical but counterproductive. The output simply is not that good when the prompts ask students to examine the most formative and deeply personal experiences that make them unique and uniquely human.

Students who opt to use ChatGPT to craft these responses are not only in clear violation of individual universities’ and the Common App’s honor codes, they are also doing themselves a disservice by not truly diving deep into who they are.

Cutting corners is never advisable, but in the case of using ChatGPT to write your university essays, the consequences could be catastrophic.