Machine Learning Is On The Verge Of Triggering A ‘Reproducibility Crisis’

July 29, 2022

Machine learning may be about to trigger a “reproducibility crisis”.

Researchers in fields ranging from political science to biology are increasingly using machine learning to generate predictions from patterns in their data. But two researchers at Princeton University in New Jersey argue that the claims of many of these studies are likely to be exaggerated, and they hope to raise awareness of a “brewing reproducibility crisis” in machine-learning-based science.

Machine learning is promoted as a tool that researchers can pick up and use on their own in a matter of hours, says Sayash Kapoor, a machine-learning researcher at Princeton. But you wouldn’t expect a scientist to learn how to run a lab from an online course, he argues. And few scientists realize that the problems they encounter when applying artificial intelligence (AI) algorithms are common across fields, says Kapoor, who co-authored a preprint on the “crisis”. Peer reviewers do not have the time to scrutinize these models, he says, so academia currently lacks mechanisms to weed out irreproducible papers. Kapoor and his co-author Arvind Narayanan have developed guidelines to help scientists avoid such pitfalls, including an explicit checklist to submit with each paper.

What is reproducibility?

Kapoor and Narayanan’s definition of reproducibility is broad. It covers computational reproducibility, already a concern for machine-learning specialists: other teams should be able to duplicate a model’s results given full details of the data, code, and conditions. The pair also deem a model irreproducible when researchers make data-analysis mistakes that leave it less accurate than advertised.

Judging such mistakes is difficult and often requires deep expertise in the field to which machine learning is being applied. Some researchers whose work the team has criticized deny that their papers contain errors, or contend that Kapoor’s claims are overstated. In social science, for example, researchers have built machine-learning models intended to forecast when a country is likely to descend into civil war. Kapoor and Narayanan say that, once errors are corrected, these models perform no better than conventional statistical techniques. But David Muchlinski, a political scientist at the Georgia Institute of Technology whose paper was examined by the pair, says the field of conflict prediction has been unfairly maligned and that follow-up studies back up his findings.

Nevertheless, the team’s message has resonated. More than 1,200 people signed up for what was initially planned as a small online workshop on reproducibility on 28 July, hosted by Kapoor and colleagues, to develop and spread solutions. Each field will keep running into similar problems, he asserts, “until we do something like this”.

Overconfidence in the capabilities of machine-learning models could be harmful when algorithms are applied in areas such as health and justice, warns Momin Malik, a data scientist at the Mayo Clinic in Rochester, Minnesota, who is due to speak at the workshop. Unless the crisis is addressed, machine learning’s reputation could suffer, he warns. “I’m a little shocked that machine learning’s credibility hasn’t already crashed. However, I believe it may happen very soon.”

Problems with machine learning

Similar pitfalls arise when machine learning is applied across fields, say Kapoor and Narayanan. After analyzing 20 studies spanning 17 research fields, the pair identified 329 research papers whose results could not be fully replicated because of problems in how machine learning was applied. Narayanan himself is not exempt: one of the 329 is a 2015 paper on computer security that he co-authored. The whole community needs to confront the issue together, says Kapoor.

He adds that no individual researcher is to blame for these failures. Instead, a combination of hype around AI and inadequate checks and balances is at fault. The most prominent problem Kapoor and Narayanan highlight is data leakage: when information from the data set a model is tested on also appears in the data it is trained on. If the two are not entirely separate, the model has effectively already seen the answers, and its predictions seem far more accurate than they really are. The team has identified eight major types of data leakage that researchers should watch for.
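The leak the pair describe can be shown with a toy model (this sketch is my own illustration, not from their paper): a one-nearest-neighbour classifier that simply memorises its training points. When the test points are accidentally left inside the training set, the model scores perfectly; on a properly held-out split, its accuracy falls to a realistic level.

```python
import random

# Toy illustration of data leakage: a 1-nearest-neighbour "model" that
# memorises its training set. If test points leak into training, accuracy
# looks perfect; on a disjoint held-out split, it does not.

random.seed(0)
# Two noisy, overlapping classes on a line: label 0 near 0.0, label 1 near 1.0
data = [(random.gauss(0.0, 0.6), 0) for _ in range(100)] + \
       [(random.gauss(1.0, 0.6), 1) for _ in range(100)]
random.shuffle(data)

def predict(train, x):
    # 1-NN: copy the label of the closest training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, test):
    return sum(predict(train, x) == y for x, y in test) / len(test)

test = data[:50]
clean_train = data[50:]   # disjoint from the test set
leaky_train = data        # the test points sit inside the training set

print(f"leaky accuracy:  {accuracy(leaky_train, test):.2f}")  # 1.00 -- too good
print(f"honest accuracy: {accuracy(clean_train, test):.2f}")  # noticeably lower
```

The leaky split scores a perfect 1.00 only because each test point finds itself in the training set; the honest split reveals how much the overlapping classes actually limit the model.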

Some data leaks are hard to spot. Temporal leakage, for example, occurs when training data contain points from later in time than the test data, which is a problem because the future depends on the past. As an example, Malik points to a 2011 paper that claimed a model analyzing the moods of Twitter users could predict the stock market’s closing value with 87.6% accuracy. But because the researchers had tested the model’s predictions on data from a period earlier than some of its training set, the algorithm had effectively been allowed to see the future, he says.
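The standard guard against this kind of leak, sketched here as a minimal hypothetical example, is to split time-ordered records at a time cutoff rather than at random, so that nothing in the training set postdates the test set:

```python
# Hypothetical sketch of avoiding temporal leakage: with time-ordered
# records, split by a time cutoff instead of shuffling at random.

records = [{"day": d, "value": 100 + d} for d in range(10)]  # toy time series

cutoff = 7
train = [r for r in records if r["day"] < cutoff]   # the past only
test = [r for r in records if r["day"] >= cutoff]   # strictly later

# A random shuffle-and-split here would scatter "future" days into the
# training data -- exactly the leak Malik describes in the Twitter study.
assert max(r["day"] for r in train) < min(r["day"] for r in test)
print(len(train), len(test))  # 7 3
```

The final assertion is the property a reviewer would want demonstrated: every training observation precedes every test observation.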

Malik says there are broader problems too, such as training models on data sets that are narrower than the population they are ultimately meant to represent. For instance, an AI that spots pneumonia in chest X-rays but was trained only on older people might be less accurate on younger people. Another problem is that algorithms often end up relying on shortcuts that don’t always hold, says Jessica Hullman, a computer scientist at Northwestern University in Evanston, Illinois, who will also speak at the workshop. A computer-vision algorithm might learn to recognize a cow from the grassy background present in most cow images, for example, and then fail when shown an image of the animal on a mountain or a beach.
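One way to catch the narrow-training-set problem is to report accuracy per subgroup rather than only in aggregate. The sketch below uses invented evaluation numbers, purely to illustrate how an overall figure can hide a badly served subgroup like the younger patients in the X-ray example:

```python
# Hypothetical sketch: aggregate accuracy can mask an under-represented
# subgroup. Each tuple is (subgroup, prediction was correct?); the
# counts below are invented for illustration.

results = ([("older", True)] * 90 + [("older", False)] * 10 +
           [("younger", True)] * 12 + [("younger", False)] * 8)

def accuracy(rows):
    return sum(ok for _, ok in rows) / len(rows)

print(f"overall: {accuracy(results):.2f}")  # 0.85 -- looks respectable
for group in ("older", "younger"):
    rows = [r for r in results if r[0] == group]
    print(f"{group}: {accuracy(rows):.2f}")  # older 0.90, younger 0.60
```

The headline 85% conceals a 30-point gap between the groups, which is exactly the kind of failure a single overall metric never reveals.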

Repairing data leaks

Kapoor and Narayanan’s proposed solution to data leakage is for researchers to include with their papers evidence that their models are free of each of the eight types of leakage. As a template for such documentation, the authors propose what they call “model info” sheets.

Biomedicine has made significant progress with a similar approach over the past three years, says Xiao Liu, a clinical ophthalmologist at the University of Birmingham, UK, who has helped to create reporting standards for studies involving AI, such as those used in screening or diagnosis. In 2019, Liu and her colleagues found that only 5% of more than 20,000 papers using AI for medical imaging were described in enough detail to judge whether the AI would work in a clinical setting. Guidelines don’t directly improve anyone’s models, she says, but they do give regulators a resource by “making it pretty evident who the individuals who’ve done it well, and maybe others who haven’t done it well, are”.

Collaboration can also help, says Malik. He suggests that studies involve both specialists in the relevant discipline and researchers versed in machine learning, statistics, and survey sampling. Kapoor expects the approach to have a substantial impact in fields such as drug discovery, where machine learning identifies leads for further study; in other areas, its benefits remain to be demonstrated, he adds. Although machine learning is still in its infancy in many fields, he counsels researchers to avoid the kind of crisis of confidence that followed the replication crisis in psychology a decade ago. “The problem will only become worse the longer we put it off”, he says.

Source: analyticsinsight.net
