MBA Exams, Tinder Matches: What Has ChatGPT Achieved So Far?
ChatGPT is being used by singles to find matches, ace difficult tests, code computers, write heartfelt poetry, and give life advice.
We have all seen the little pop-up in the corner of a website offering customer support, with navigation so clunky there's no doubt it is a chatbot. Now imagine an artificially intelligent tool that can answer specific questions the way an expert would. There's no need to imagine: that's what OpenAI's ChatGPT is all about.
Ever since the launch of ChatGPT in November 2022, netizens seem to be spellbound by the things the bot can do.
Here are some things the AI tool has achieved since its inception.
Cheating In Schools, Cracking Tests
From elaborate essays to movie scripts and even complex arithmetic problems, OpenAI's latest chatbot ChatGPT can spin up whatever a user needs in a matter of minutes.
Teachers are now worried about the AI tool's misuse by students, who may turn to the chatbot to complete homework and writing assignments. The free and accessible tool threatens students' willingness to practice essential skills like writing and research.
ChatGPT is also making headlines by cracking exams at prestigious universities. A professor at the Wharton School of the University of Pennsylvania, one of the world's leading business schools, tested the waters by assessing its performance on an MBA exam. It turned out ChatGPT could pass the examination, but with a few limitations.
Christian Terwiesch, who asked ChatGPT questions on ‘Operations Management,’ said the tool did “an amazing job” at questions on process analysis. But there were also areas where the chatbot lagged. Terwiesch noted that ChatGPT committed "surprising" mistakes in school-level math.
Beware, Your Next Tinder Match Can Be ChatGPT
Finding love has been gamified before: in the past, Tinder users have even created bots to swipe and message their love interests. ChatGPT, too, is being used to generate conversation starters. One Tinder user had the AI tool write a poem expressing interest in a six-foot-tall match. Another TikTok user asked for a "weight-lifting" themed opener for his match.
AI intervention in online dating has raised eyebrows. Imagine discovering that the messages you appreciated, and thought were sent by a potential love interest, were in fact generated by a bot.
Can ChatGPT Replace Writers?
OpenAI builds its text-generating models by using machine-learning algorithms to process vast amounts of text data, including books, news articles, Wikipedia pages, and millions of websites. ChatGPT is capable of representing knowledge, and then churning it into relevant content.
During its development, ChatGPT was shown conversations between human AI trainers to demonstrate desired behavior. Incorporating human feedback has helped ChatGPT produce more helpful responses when compared to competitors.
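The idea of learning from human feedback can be pictured with a toy sketch. The example below is only an illustration of preference ranking in general, not OpenAI's actual training pipeline; the reply names, rankings, and scoring rule are all invented for the demonstration:

```python
# Toy illustration of learning from human preference rankings.
# NOT OpenAI's real pipeline: the reply IDs, data, and scoring
# rule here are invented for illustration.

from collections import defaultdict

# Human trainers rank candidate replies to the same prompt,
# best first. Each ranking is a list of reply IDs.
rankings = [
    ["polite_detailed", "polite_short", "rude"],
    ["polite_detailed", "rude", "polite_short"],
    ["polite_short", "polite_detailed", "rude"],
]

def preference_scores(rankings):
    """Score each reply by how often it was ranked above another."""
    scores = defaultdict(int)
    for ranking in rankings:
        for i, winner in enumerate(ranking):
            for loser in ranking[i + 1:]:
                scores[winner] += 1  # ranked above someone
                scores[loser] -= 1   # ranked below someone
    return dict(scores)

scores = preference_scores(rankings)
best = max(scores, key=scores.get)
print(best)  # the reply humans consistently preferred
```

In the real system, scores like these train a reward model, which then steers the chatbot toward the kinds of responses humans rated highly.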
But in the literary world, too, the tool has limitations to overcome. ChatGPT's knowledge remains static, with no access to real-time information.
Is ChatGPT Scam Proof?
It's easy to spot a phishing email or message when it's riddled with typos, but text written by ChatGPT is free of those errors. That means scam messages are going to be easy to write using the AI bot.
Security researchers have shown how socially engineered attacks, such as phishing or business email compromise scams, can be made harder to detect using OpenAI's ChatGPT. Researchers used the GPT-3 natural language generation model (and the ChatGPT chatbot built on it) to demonstrate this.
The study, by security firm WithSecure, demonstrates that attackers can not only generate unique variations of the same phishing lure in grammatically correct, human-like text, but can also build entire email chains to make their messages more convincing, even mimicking the writing style of real people.
Rendering Biased Content
ChatGPT is trained on 300 billion words, or 570 GB, of data: large, largely uncurated datasets scraped from the internet. While researchers use filters to prevent the model from producing inappropriate content, these are not 100% accurate. In one instance, ChatGPT told users it would be okay to torture people from certain minority backgrounds.
Like all AI products, ChatGPT risks absorbing the biases of the humans training it and of its training data, including sexist, racist, and otherwise offensive speech.
Just for fun, we asked ChatGPT to write a haiku on fake news. And it delivered.