ChatGPT has taken the world by storm, even as people fear that jobs might be wiped out. An AI chatbot created by OpenAI, ChatGPT was released in November 2022. Its ability to deliver human-like responses has made it popular among users. By December 4, 2022, the tool already had over a million users.
While the chatbot has the potential to generate content and conversational responses to users' queries, it has also fueled fears that it could be used to aid scammers and spread disinformation. In an interview with ABC News' Rebecca Jarvis, OpenAI CEO Sam Altman said that while people are exploring the possibilities of the chatbot, they need to be cautious about the downsides of the technology. Following are some edited excerpts of the interview.
You are the CEO of OpenAI. Your company is the maker of ChatGPT, which has taken the world by storm. Why do you think it has captured people's imagination?
I think people really have fun with it and they see the possibility. They see the ways it can help them. It can inspire them and help people create, learn and do all these different tasks. It is a technology that rewards experimentation. So, I think people are just having a good time with it and finding real value.
So, paint a picture for us: one, five, 10 years in the future, what changes because of artificial intelligence?
Part of the exciting thing here is that we get continually surprised by the creative power of all of society. It's going to be the collective power, creativity and will of humanity that figures out what to do with these things.
On the one hand, there's all of this potential for good; on the other hand, there's a huge number of unknowns that could turn out very badly for society. What do you think about that?
We've got to be cautious here. I think it doesn't work to do all this in a lab. You've got to get these products out into the world and make contact with reality, make our mistakes while the stakes are low. But all of that said, I think people should be happy that we're a little bit scared of this. I think if I said I were not scared, you should either not trust me or be very unhappy that I'm in this job.
So what is the worst possible outcome?
There's a set of very bad outcomes. One thing I'm particularly worried about is that these models could be used for large-scale disinformation. Now that they're getting better at writing computer code, I am worried that they could be used for offensive cyber-attacks. We're trying to talk about this. I think society needs time to adapt.
How confident are you that what you've built won't lead to those outcomes?
We'll adapt, and I think society will adapt as negative things occur, for sure. Putting these systems out now while the stakes are fairly low, learning as much as we can, and feeding that into the future systems we create: that tight feedback loop is, I think, how we avoid the more dangerous scenarios.
(Russian President Vladimir) Putin has himself said whoever wins this artificial intelligence race is essentially the controller of humankind. Do you agree with that?
That was a chilling statement, for sure. What I hope instead is that we successively develop more and more powerful systems that we can all use in different ways, that get integrated into our daily lives and into the economy, and that become an amplifier of human will. Not some autonomous system that is essentially the single controller. We really don't want that.
Is the technology going to have any impact on 2024 US Presidential elections?
We don't know, is the honest answer. We're monitoring very closely, and again, we can take it back, we can turn things off, we can change the rules.
Can someone guide the technology to negative outcomes?
The answer is yes. You could guide it to negative outcomes, and this is why we make it available initially in very constrained ways, so we can learn what these negative outcomes are. If you ask GPT-4 "can you help me make a bomb," it is much less likely to follow that guidance than previous systems were. We're able to intervene at the pre-training stage to make these models more likely to refuse direction or guidance that could be harmful.