In recent years, advancements in artificial intelligence have led to the development of powerful language models that can understand and respond to human language. One such model is ChatGPT, created by OpenAI. ChatGPT is a state-of-the-art language model that has been trained on a massive dataset of text from the internet, allowing it to understand and respond to a wide range of human language. The model has been used in a variety of applications, from chatbots and virtual assistants to automated content generation and language translation. One of the key features of ChatGPT is its ability to continue a given text prompt in a human-like manner, a capability known as "text completion". This allows the model to generate text that is not only grammatically correct but also semantically relevant to the given context, making it a valuable tool for tasks such as content creation, summarization and question answering. Like that opening?
Well, it wasn’t written by a human.
Like hundreds of other universities worldwide, Belmont University has been abuzz over Chat Generative Pre-trained Transformer – or ChatGPT.
But what exactly is this cutting-edge chatbot? What is it capable of doing?
ChatGPT was launched on Nov. 30, 2022, by San Francisco artificial intelligence company OpenAI.
It immediately caught fire.
“Fire can cook food and grow the size of the human brain and fire can burn down a village,” said Belmont assistant professor of audio engineering technology Nathan Adam.
Surpassing a million users in its first five days, ChatGPT, and AI in general, is one of the leading technologies in the world right now.
To answer what it’s capable of doing is like answering what a Category 5 hurricane can do to a glass city.
At the same time, it’s also like answering what a concrete mixer can do for a sidewalk.
Ask it to write a last-minute research paper on Thomas Jefferson.
Cool, now ask it to write that same research paper on Thomas Jefferson but make it lyrical.
It thinks for a few seconds.
You now have an original essay ready to turn in for class and a catchy song about the third U.S. president to brag about to your friends.
Pretty neat, huh?
Not everyone thinks so.
Since its public release and rapidly growing popularity, some educators worry the technology will be used to undermine academic integrity.
Belmont professor and English department chair David Curtis acknowledges these concerns.
“I think there’s some concern there with the potential that this has for plagiarism, for creating texts that were not produced by a student,” he said.
Curtis, however, does not believe OpenAI’s latest craze is necessarily a bad thing.
“No technology is good or bad in itself. It’s how it’s used,” Curtis said. “So, I think that one of the things that we always do when new technology presents itself is that we ask questions about how it can best be used for our purposes.”
ChatGPT can be used for good or evil.
Like all technology, ChatGPT is more than an agent for cheating; it is also a powerful resource and tool that can save time and energy and benefit the world.
Adam, for one, describes it as a “superpower.”
“If this is the free commercial or the free beta preview, then it already has the position to change the entire world. I mean, there’s so many professions that are going to be fundamentally altered and I don’t think they’ll go away,” he said.
If Adam weren’t so open-minded and up to date on what is happening in the tech world, he might be worried about what this means for the future of employment.
The rapid development of AI will affect nearly every profession in the world.
Media, film, law, healthcare, retail, marketing, customer service, banking.
The list goes on.
Why employ and pay humans to do something AI can do quicker?
Well, there’s one catch.
It’s not always right.
In fact, tech media website CNET recently found itself in the hot seat for publishing machine-written articles riddled with errors.
“AI is not perfect, and when it’s not perfect, if you publish it like it is, then you lose your credibility,” Adam said.
“It’s one of those things where people like Elon Musk, or lots of other AI researchers have said this is something we need to be concerned about because if we build bias, if we build our existing biases into the data, then all the output is biased.”
Adam admits he regularly explores the likes of ChatGPT to create workflows for his classes but makes sure to go back and fact check everything the machine writes.
“If it’s otherwise a superpower, this new magical ability that we all have to just know anything and get really clear instruction on anything at any time, then you want to make sure that data is good,” he said.
Another inconvenience: ChatGPT is so popular that user capacity is regularly reached, overloading its servers.
But what about the cheating concern?
Teachers and professors need to shift the way they assign writings and projects, Curtis said.
When assignments are contextualized and made highly specific to the course, AI cannot deliver what the instructor is looking for.
“You’re really having to think and put things into context, but it also makes it a lot harder to use technologies like this to be academically dishonest, if that’s what you’re going to do,” he said.
Beyond ChatGPT, OpenAI offers several other models that serve different purposes.
Not only can you now have coherent conversations with AI bots, but art generators such as DALL-E can create graphics, images or artwork in seconds, simply from a typed description of what you want it to create.
As with any new advancement, ethics play a key role in development, guarding against the technology being used with ill intent.
While some may be hesitant to adapt to the inevitable growth of technology, AI is not slowing down anytime soon.
As for Adam, he will continue to make use of the latest advancements, he said.
“I don’t worry about it. I do try to take advantage of it. I do try to see where it’s going so that I can enjoy the edge of that,” he said.
“I do think people will still want a person they can trust; we still need live people.”

This article was written by A.J. Wuest and ChatGPT