      News

      Chatbots Could Be Used For Large-Scale Disinformation: ChatGPT Founder Sam Altman

      In an interview with ABC News, the OpenAI CEO said that while chatbots have the potential to be helpful, they can also be used to aid scammers and spread disinformation.

      By BOOM Team | Published 24 March 2023 5:17 PM IST

      ChatGPT has taken the world by storm, amid fears that jobs might be wiped out. An AI chatbot created by OpenAI, ChatGPT was released in November 2022. Its ability to deliver human-like responses has made it popular among users. By December 4, 2022, the tool had already crossed one million users.

      While the chatbot has the potential to generate content and conversational responses to users' queries, it has also fueled fears that it could be used to aid scammers and spread disinformation. In an interview with ABC News’ Rebecca Jarvis, OpenAI CEO Sam Altman said that while people are exploring the possibilities of the chatbot, they need to be cautious about the downsides of the technology. Following are some edited excerpts from the interview.

      You are the CEO of OpenAI. Your company is the maker of ChatGPT, which has taken the world by storm. Why do you think it has captured people's imagination?

      I think people really have fun with it and they see the possibility. They see the ways this can help them. This can inspire them and can help people create, learn and do all these different tasks. It is a technology that rewards experimentation. So, I think people are just having a good time with it and finding real value.

      So, paint a picture for us: one, five, 10 years in the future, what changes because of artificial intelligence?

      A part of the exciting thing here is that we get continually surprised by the creative power of all of society. It's going to be the collective power and creativity and will of humanity that figures out what to do with these things.

      On the one hand there's all of this potential for good, on the other hand there's a huge number of unknowns that could turn out very badly for society. What do you think about that?

      We've got to be cautious here. I think it doesn't work to do all this in a lab. You've got to get these products out into the world and make contact with reality, make our mistakes while the stakes are low. But all of that said, I think people should be happy that we're a little bit scared of this. I think if I said I were not scared, you should either not trust me or be very unhappy that I'm in this job.

      So what is the worst possible outcome?

      There's a set of very bad outcomes. One thing I'm particularly worried about is that these models could be used for large-scale disinformation. Now that they're getting better at writing computer code, I am worried that they could be used for offensive cyber-attacks. We're trying to talk about this. I think society needs time to adapt.

      How confident are you that what you've built won't lead to those outcomes?

      We'll adapt. I also think we'll adapt as negative things occur, for sure. Putting these systems out now while the stakes are fairly low, learning as much as we can, and feeding that into the future systems we create is the tight feedback loop that we run. That, I think, is how we avoid the more dangerous scenarios.

      (Russian President Vladimir) Putin has himself said whoever wins this artificial intelligence race is essentially the controller of humankind. Do you agree with that?

      That was a chilling statement for sure. What I hope, instead, is that we successfully develop more and more powerful systems that we can all use in different ways, systems that get integrated into our daily lives and into the economy and become an amplifier of human will. But not this autonomous system that is essentially the single controller; we really don't want that.

      Is the technology going to have any impact on 2024 US Presidential elections?

      We don't know, is the honest answer. We're monitoring very closely, and again, we can take it back, we can turn things off, we can change the rules.

      Can someone guide the technology to negative outcomes?

      The answer is yes. You could guide it to negative outcomes, and this is why we make it available initially in very constrained ways, so we can learn what these negative outcomes are. If you ask GPT-4 "can you help me make a bomb", it is much less likely to follow that guidance than the previous systems were. We're able to intervene at the pre-training stage to make these models more likely to refuse direction or guidance that could be harmful.

      Click here to watch the full interview.





