AI Has The Potential To Both Hasten Scientific Fraud And Advancement

April 3, 2024

In a meeting room at the Royal Society in London, several dozen graduate students were recently tasked with outwitting a large language model (LLM), a type of artificial intelligence (AI) designed to hold useful conversations. LLMs are often programmed with guardrails to stop them from giving responses deemed harmful, such as instructions for making Semtex in a bathtub, or from confidently asserting things that are not true.

Breaking those guardrails was the goal of the workshop, organised by the Royal Society in partnership with Humane Intelligence, an American non-profit. Some of the results were merely absurd: one participant persuaded the chatbot to claim that ducks can serve as indicators of air quality, on the grounds that they readily absorb lead. Another got it to assert that health authorities still endorse lavender oil for treating long COVID. (They do not.) The most successful attempts, though, were those that got the machine to fabricate the titles, host journals and publication dates of academic papers that were never published. "It's one of our most straightforward challenges," said Jutta Williams of Humane Intelligence.

Science could benefit greatly from AI. Its boosters envisage machines producing readable summaries of complex research; tirelessly sifting vast troves of data to suggest new drugs or exotic materials; and, one day, even coming up with hypotheses of their own. But AI has drawbacks, too. It can make it easier for scientists to game the system, or even to commit outright fraud. And the models themselves carry subtle biases.

Start with the most straightforward problem: academic misconduct. Several journals allow researchers to use LLMs to help write papers, provided they declare it. Not everyone is willing to own up, however. Sometimes the use of an LLM is obvious. Guillaume Cabanac, a computer scientist at the University of Toulouse, has found dozens of papers containing the phrase "regenerate response", the label in some versions of ChatGPT for a button that tells the program to rewrite its latest answer. The phrase was presumably copied into the manuscripts by accident.
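Screening of this kind amounts to a simple text search for telltale phrases. The sketch below is a hypothetical illustration of the idea, not Cabanac's actual tooling; the phrase list and function name are made up.

```python
# Hypothetical sketch: flag manuscripts containing telltale chatbot phrases.
# The phrase list is illustrative, not an exhaustive or official set.
TELLTALE_PHRASES = [
    "regenerate response",      # leftover ChatGPT button label
    "as an ai language model",  # common chatbot boilerplate preamble
]

def flag_manuscript(text: str) -> list[str]:
    """Return every telltale phrase found in a manuscript (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

hits = flag_manuscript("...and thus the result follows. Regenerate response")
```

A real screening pipeline would of course need to handle false positives, since a paper may quote such a phrase deliberately, as this article does.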

The true scale of the problem is impossible to measure directly, but indirect methods offer some clues. In 2022, when LLMs were still available to relatively few, Taylor & Francis, a large publisher of scientific journals, investigated around 2,900 research-integrity cases, up from roughly 800 in 2021. Preliminary data suggested the 2023 figure was on course to double again. Tortured synonyms, such as "haze figuring" for "cloud computing" or "counterfeit consciousness" for "AI", are one likely red flag.

Even honest researchers can end up working with AI-contaminated data. Last year Robert West and his students at the Swiss Federal Institute of Technology recruited remote workers through Mechanical Turk, an online platform on which people take on odd jobs, to summarise long passages of text. In a paper released in June, which has not yet been peer-reviewed, the team disclosed that more than a third of the responses they received had been generated with chatbots.

Dr. West's team was well placed to spot the deception, because it could compare the responses it received with a separate set of data produced entirely by humans. Not every scientist using Mechanical Turk will be so fortunate. Many disciplines, particularly in the social sciences, rely on similar platforms to find respondents willing to fill in questionnaires. The quality of that research is unlikely to improve if a sizeable share of the responses comes from machines rather than people. Dr. West now plans to subject other crowdsourcing platforms, which he prefers not to name, to similar scrutiny.

Text is not the only thing that can be doctored. Elisabeth Bik, a microbiologist at Stanford University and an expert on suspect images in scientific papers, found dozens of papers published between 2016 and 2020 containing photographs that appeared to share identical features despite coming from different labs. Dr. Bik and others have since identified more than a thousand further papers. Her best guess is that the images were deliberately generated by AI to support a paper's conclusions.

For now, machine-generated content, whether words or images, cannot be reliably detected. In a paper published last year, Rahul Kumar, a researcher at Brock University in Canada, found that academics could correctly identify only about 25% of computer-generated text. AI firms have tried embedding "watermarks" in their output, but such schemes have so far proved easy to circumvent. As Dr. Bik puts it, "we might now be at the phase where we can no longer distinguish real from fake photos."

Churning out dubious papers is not the only problem. AI models may suffer from subtler flaws, particularly when they are used in the scientific process itself. Much of a model's training data, for instance, will inevitably be somewhat out of date, which risks leaving models trailing behind fast-moving fields.

Another problem arises when AI models are trained on AI-generated data. Training a machine on synthetic MRI scans, for example, can sidestep patient-confidentiality concerns. But such data can also be used inadvertently. LLMs are trained on text scraped from the internet; as they generate more and more of it, the chances grow that LLMs end up ingesting their own output.

That can lead to "model collapse". In 2023 Ilia Shumailov, a computer scientist at Oxford University, co-authored a paper (yet to be peer-reviewed) in which a model was fed handwritten digits and asked to generate digits of its own, which were then fed back to it in turn. After a few cycles the computer's numbers became more or less illegible; after 20 iterations it could produce only rough circles or blurry lines. According to Dr. Shumailov, models trained on their own output produce results that are markedly less rich and varied than their original training data.
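The feedback loop behind model collapse can be illustrated with a deliberately simplified stand-in for the digit experiment: repeatedly fit a Gaussian to some data, sample fresh data from the fit, and refit on those samples. This is a toy sketch under that assumption, not the setup of the actual paper; sampling noise compounds generation after generation, and the distribution's diversity withers.

```python
import random
import statistics

def fit_and_sample(data, n):
    """Fit a Gaussian to the data, then draw n samples from the fit:
    a crude stand-in for training a generative model on its own output."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # the "real" training set
initial_spread = statistics.stdev(data)

# Each generation is trained only on the previous generation's output.
for generation in range(2000):
    data = fit_and_sample(data, 50)

final_spread = statistics.stdev(data)
# The spread drifts toward zero: variety is lost and the model "collapses".
```

The collapse here is driven purely by finite-sample estimation error; real generative models lose diversity through analogous, if messier, mechanisms.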

There are also concerns about machine-generated insights coming from models whose inner workings are not understood. Machine-learning systems are "black boxes" that are hard for humans to pick apart. Unexplainable models are not useless, says David Leslie of the Alan Turing Institute, an AI-research body in London, but their outputs will need rigorous testing in the real world. That is perhaps less alarming than it sounds. Checking models against reality is, after all, what science is about. Since no one fully understands how the human body works, for example, new drugs must be tested in clinical trials to find out whether they work.

For now, there are more questions than answers. One thing is certain: modern science is full of skewed incentives that are ripe for exploitation. The emphasis on counting the papers a researcher publishes when judging academic achievement, for instance, is a strong inducement for gaming the system at best and fraud at worst. In the end, the threats that machines pose to the scientific method are the same ones posed by people. AI can accelerate the production of fraud and nonsense just as surely as it can accelerate good science. Nullius in verba, the Royal Society's motto, means "take nobody's word for it". Nothing's, either.
