This New Technology Assists You In Determining Which Responses From Chatbots To Believe

April 26, 2024
Large language models are well known for making things up; in fact, it is their greatest strength. But because they cannot tell fact from fiction, many businesses are unsure whether using them is worth the risk.

A new tool from Cleanlab, an AI startup spun out of an MIT quantum computing lab, aims to give high-stakes users a clearer sense of how reliable these models really are. Called the Trustworthy Language Model, it assigns any output from a large language model a trustworthiness score between 0 and 1, letting people decide which responses to trust and which to throw out. In other words: a BS-o-meter for chatbots.

Cleanlab hopes its tool will make large language models more attractive to businesses worried about how much they invent. "I think people know LLMs will change the world, but they're just hung up on the damn hallucinations," says Cleanlab CEO Curtis Northcutt.

Chatbots are fast becoming the dominant way people look up information on a computer. The technology is reshaping search engines, and chatbots are already a standard feature of office software used by billions of people worldwide to write everything from financial reports to marketing copy to school assignments. Yet a November study by Vectara, a startup founded by former Google employees, found that chatbots invent information at least 3 percent of the time. That may not sound like much, but it is a margin of error most businesses won't tolerate.

Cleanlab's tool is already in use at a handful of businesses, including Berkeley Research Group, a UK-based consultancy specializing in corporate disputes and investigations. Steven Gawthorpe, associate director at Berkeley Research Group, says the Trustworthy Language Model (TLM) is the first viable solution to the hallucination problem he has seen, and that it "gives us the power of thousands of data scientists."

In 2021, Cleanlab developed a technique that measures disagreements in the outputs of multiple models trained on the same data, and used it to uncover errors in 34 popular data sets used to train machine-learning algorithms. The technique is now used by several large companies, including Google, Tesla, and the banking giant Chase. The Trustworthy Language Model extends the same basic idea (that disagreement between models can be used to gauge the trustworthiness of the overall system) to chatbots.

In a demonstration Cleanlab gave to MIT Technology Review last week, Northcutt typed a simple question into ChatGPT: "How many times does the letter 'n' appear in 'enter'?" ChatGPT answered: "The letter 'n' appears once in the word 'enter.'" That correct answer builds trust. But ask the question a few more times, and ChatGPT answers: "The letter 'n' appears twice in the word 'enter.'"

"It's random, and it often gets it wrong, so you never know what it's going to give you," says Northcutt. "Why the hell can't it just tell you that it gives different answers every time?"

Cleanlab's aim is to make that randomness visible. Northcutt asks the Trustworthy Language Model the same question. "The letter 'n' appears once in the word 'enter,'" it answers, and scores its response 0.63. At 0.63 out of 1, the chatbot's answer to this question is not very trustworthy.
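The demo question has a deterministic ground truth, and the way a score like 0.63 might be used downstream can be sketched in a few lines. The `gate_answer` helper and its 0.8 cutoff are hypothetical illustrations, not part of Cleanlab's product:

```python
def gate_answer(answer: str, trust_score: float, threshold: float = 0.8) -> str:
    """Accept a chatbot answer only if its trustworthiness score clears a
    threshold. The 0.8 cutoff is illustrative, not from Cleanlab."""
    if trust_score >= threshold:
        return answer
    return f"FLAG FOR REVIEW (score={trust_score:.2f}): {answer}"

# Ground truth for the demo question: 'n' appears once in 'enter'
print("enter".count("n"))        # 1

# The TLM's answer scored 0.63, below our illustrative cutoff, so it is flagged
print(gate_answer("once", 0.63))
```

The point of the gate is that a low score does not mean the answer is wrong, only that it should not be accepted without a human looking at it.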

It's a basic example, but it makes the point. Without the score, you might assume the chatbot knew what it was talking about, says Northcutt. The problem is that data scientists testing large language models in high-stakes settings can be misled by a few correct answers into assuming future answers will be correct too: "They try it out, they try a few examples, and they think this works. And then they do things that result in really bad business decisions."

The Trustworthy Language Model draws on several techniques to compute its scores. First, each query submitted to the tool is sent to multiple large language models. Cleanlab is using five versions of DBRX, an open-source model developed by Databricks, an AI firm based in San Francisco. (Northcutt says the technique works with any model, including Meta's Llama models or OpenAI's GPT series, the models behind ChatGPT.) If the responses from each of these models are the same or similar, the score will be higher.

At the same time, the Trustworthy Language Model also sends variations of the original query to each of the DBRX models, swapping in synonyms for some words. Again, if the responses to synonymous queries are similar, the score will be higher. "We mess with them in different ways to get different outputs and see if they agree," says Northcutt.
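The two steps above (fan the query out to several models, then repeat with paraphrased versions and measure agreement) can be sketched as follows. The `consistency_score` majority-vote formula is an illustrative stand-in, not Cleanlab's actual scoring method, and the model stubs are hypothetical:

```python
from collections import Counter
from typing import Callable, List

def normalize(answer: str) -> str:
    """Crude canonicalization so trivially different strings can match."""
    return answer.strip().lower()

def consistency_score(prompts: List[str],
                      models: List[Callable[[str], str]]) -> float:
    """Fraction of (model, prompt) answers that agree with the majority
    answer. A toy stand-in for Cleanlab's proprietary scoring."""
    answers = [normalize(model(p)) for model in models for p in prompts]
    _, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / len(answers)

# Hypothetical stand-ins for five model variants; one disagrees
models = [lambda p: "once", lambda p: "once", lambda p: "twice",
          lambda p: "once", lambda p: "once"]

# Original query plus one synonymous rewording
prompts = ["How many times does 'n' appear in 'enter'?",
           "Count the occurrences of the letter 'n' in the word 'enter'."]

print(consistency_score(prompts, models))  # 8 of 10 answers agree -> 0.8
```

Exact string matching is a simplification; comparing free-text answers from real models would need semantic matching rather than `normalize`.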

The tool can also get multiple models to bounce responses off one another: "It's like, 'Here's my answer, what do you think?' 'Well, here's mine, what do you think?' And you let them talk." These interactions are monitored and measured and fed into the score as well.

Nick McKenna, a computer scientist at Microsoft Research in Cambridge, UK, who works on large language models for code generation, is optimistic the approach could be useful. But he doubts it will be perfect. One of the pitfalls of model hallucinations, he notes, is that they can creep in very subtly.

In a range of tests across several large language models, Cleanlab shows that its trustworthiness scores correlate well with the accuracy of the responses. In other words, scores close to 1 line up with correct responses, and scores close to 0 line up with incorrect ones. In another test, they also found that using the Trustworthy Language Model with GPT-4 produced more reliable responses than using GPT-4 by itself.

Large language models generate text by predicting the most likely next word in a sequence. In future versions of its tool, Cleanlab plans to make its scores even more accurate by drawing on the probabilities a model used to make those predictions. It also wants access to the numerical values that models assign to each word in their vocabulary, which they use to calculate those probabilities. This level of detail is provided by certain platforms, such as Amazon's Bedrock, that businesses can use to run large language models.
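Those per-word numerical values are typically raw scores (logits) that a softmax turns into a probability distribution, and the probability of a whole response is the product of the per-token probabilities. A minimal sketch, using made-up logits rather than output from any real model:

```python
import math
from typing import Dict, List

def softmax(logits: Dict[str, float]) -> Dict[str, float]:
    """Convert raw per-token scores into a probability distribution.
    Subtracting the max first keeps the exponentials numerically stable."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sequence_logprob(step_logits: List[Dict[str, float]],
                     tokens: List[str]) -> float:
    """Sum of log-probabilities the model assigned to each generated token;
    exponentiating the sum gives the probability of the whole sequence."""
    return sum(math.log(softmax(logits)[tok])
               for logits, tok in zip(step_logits, tokens))

# Illustrative two-step generation with invented logits
steps = [{"once": 2.0, "twice": 0.5, "never": -1.0},
         {".": 1.5, "!": 0.0}]
chosen = ["once", "."]
print(math.exp(sequence_logprob(steps, chosen)))  # ~0.64
```

A low sequence probability is one more signal of model uncertainty, complementary to the cross-model agreement described earlier.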

Cleanlab tested its approach on Berkeley Research Group's data. The firm needed to search tens of thousands of corporate documents for mentions of health-care compliance problems, a task that can take weeks of manual work. By checking the documents with the Trustworthy Language Model, Berkeley Research Group could see which documents the chatbot was least confident about and check only those. It cut the workload by around 80%, says Northcutt.
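A triage workflow like this reduces to filtering by score: anything above a cutoff is accepted automatically, and only the low-confidence remainder goes to a human. A minimal sketch with a hypothetical 0.7 cutoff and invented documents:

```python
from typing import List, NamedTuple, Tuple

class ScoredDoc(NamedTuple):
    name: str
    answer: str   # the model's extraction for this document
    score: float  # trustworthiness score in [0, 1]

def triage(docs: List[ScoredDoc],
           threshold: float = 0.7) -> Tuple[List[ScoredDoc], List[ScoredDoc]]:
    """Split documents into auto-accepted and needs-human-review piles.
    The 0.7 cutoff is illustrative; in practice it would be tuned."""
    accepted = [d for d in docs if d.score >= threshold]
    review = [d for d in docs if d.score < threshold]
    return accepted, review

docs = [ScoredDoc("doc-001", "no compliance issue", 0.94),
        ScoredDoc("doc-002", "possible violation", 0.41),
        ScoredDoc("doc-003", "no compliance issue", 0.88),
        ScoredDoc("doc-004", "no compliance issue", 0.97),
        ScoredDoc("doc-005", "possible violation", 0.55)]

accepted, review = triage(docs)
print(f"human review needed: {len(review)} of {len(docs)}")  # 2 of 5
```

The reported 80% reduction corresponds to the accepted pile absorbing most of the documents; where the cutoff sits determines the trade-off between saved labor and the risk of accepting a wrong extraction.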

In another test, Cleanlab worked with a large bank (Northcutt would not name it, but says it is a competitor of Goldman Sachs). Like Berkeley Research Group, the bank needed to search some 100,000 documents for mentions of insurance claims. Again, the Trustworthy Language Model roughly halved the number of documents that needed to be checked by hand.

Running each query through multiple models takes longer and costs a lot more than the typical back-and-forth with a single chatbot. But Cleanlab is pitching the Trustworthy Language Model as a premium service for automating high-stakes tasks that would previously have been off limits to large language models. The idea is not to replace existing chatbots but to do the work of human experts. If the tool can cut the hours you need to employ skilled economists or lawyers at $2,000 an hour, the costs will be worth it, says Northcutt.

In the long run, Northcutt hopes that by removing the uncertainty around chatbots' responses, his tech will unlock the promise of large language models for a wider range of users. "The hallucination thing is not a large-language-model problem," he says. "It's an uncertainty problem."
