MEITY Advisory: India’s AI Regulation: A False Start Or Dawn?

April 10, 2024

On March 15, 2024, the Ministry of Electronics and Information Technology (MEITY) released an advisory on the use and deployment of artificial intelligence tools (the "Advisory"). It replaces an earlier advisory on the same subject issued on March 1, 2024 (the "Previous Advisory").

In the Previous Advisory, MEITY directed intermediaries that were neglecting their due-diligence obligations to implement technical measures to identify and track such information on their platforms, in order to curb the rising incidence of deepfakes and misinformation endangering users and electoral integrity.

The new Advisory broadens the due-diligence criteria for intermediaries, adding compliance standards for the use and deployment of AI tools.

It should be noted that no changes have been made to the IT Rules as yet; these requirements have been issued only as recommendations. It is unclear when, or whether, the IT Act or the rules and regulations under it will be amended to align with the standards and guidelines set out in the Advisory.

The Advisory stipulates the following additional due-diligence standards that intermediaries must ensure:

  1. Users may not use AI models, Generative AI, large language models, software, or algorithms (collectively, "AI Models") to host, display, publish, or transmit any content that is prohibited by the IT Rules or that otherwise violates any provision of the IT Act.

According to the Advisory, any content created or generated by AI Models is subject to the general content-moderation requirements under the IT Rules and must comply with them accordingly.

As a result, intermediaries (and, in turn, AI developers) must ensure that any information produced complies with the content restrictions outlined in the IT Rules, particularly where AI Models are involved.

  2. The use of computer resources or AI Models must not permit bias or discrimination, nor threaten the integrity of the electoral process.

According to the Advisory, intermediaries and AI developers must guarantee that AI Models neither introduce bias or discrimination nor pose a threat to the legitimacy of the voting process. This resembles the OECD Principle on "Human-Centred Values and Fairness," which holds that AI systems should not create or reinforce unfair bias.

However, the Advisory does not define a "threat to the integrity of the electoral process," nor does it specify thresholds for these requirements or the accountability and duties of AI developers in the event of a breach.

  3. Users must be expressly informed of the potential inherent fallibility or unreliability of output produced by under-tested or unreliable AI Models before such models are used or deployed. The availability of these models must be gated by a consent pop-up or comparable mechanism.

The Advisory eliminates the Previous Advisory's requirement to obtain prior government approval, retaining only the obligation to explicitly inform users of the potential fallibility or unreliability of AI Models and their outputs. The removal of the prior-approval requirement will come as a relief to developers of such AI Models, as it reflects a practical regulatory approach based on accountability, transparency, and disclosure.

  4. Users must be made aware of the consequences of dealing with unlawful content on the platform, including suspension or termination of their access or usage rights and prosecution under "applicable law."

Platforms and intermediaries must warn users of the repercussions of handling unlawful content in their User Agreements and Terms of Use. Many intermediaries already comply with the periodic user-intimation requirement under the current IT Rules. We anticipate that intermediaries will notify their users of the potential legal repercussions in accordance with the Advisory.

  5. Any information that could be a deepfake or misinformation must carry a permanent unique metadata tag or identifier. This permanent unique metadata or identifier must be capable of identifying the source of the information across platforms, and must be configured to allow identification of any changes a user makes if the information is altered or modified.

All AI-generated content must be distinguishable from user content, even though both are subject to the same content-moderation standards. Any artificially produced, generated, or modified text, audio, visual, or audio-visual information must therefore carry embedded labels or metadata.

This permanent-metadata requirement is interwoven with the "first originator of information" provision, which permits the Government to issue directions for identifying the originator of information and was previously restricted to significant social media intermediaries under the IT Rules. Under the Advisory, however, the requirement now applies to platforms and intermediaries generally. Permanent "labels" must enable identification of the content as "synthetic," of the user or computer resource that generated the information, of the intermediary through whose software it was generated, and of the original source of the information.
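The Advisory does not prescribe any particular format for these permanent labels. Purely as an illustration, one way a platform might structure such a record is as a content-addressed provenance label, where each version of a piece of content hashes its bytes and links back to the version it was derived from; all names and fields below are hypothetical, not drawn from the Advisory.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ProvenanceLabel:
    """Illustrative 'permanent unique metadata' record for one content version."""
    content_hash: str            # SHA-256 of the content bytes
    synthetic: bool              # flags the content as AI-generated
    generator: str               # user or computer resource that generated it
    intermediary: str            # platform/software through which it was generated
    parent_hash: Optional[str]   # hash of the version this was derived from, if any

def label_content(data: bytes, *, synthetic: bool, generator: str,
                  intermediary: str,
                  parent: Optional[ProvenanceLabel] = None) -> ProvenanceLabel:
    """Create a label; chaining parent_hash lets later edits be traced to the source."""
    return ProvenanceLabel(
        content_hash=hashlib.sha256(data).hexdigest(),
        synthetic=synthetic,
        generator=generator,
        intermediary=intermediary,
        parent_hash=parent.content_hash if parent else None,
    )

# An AI-generated item, followed by a user's modification of it: the edited
# version's parent_hash points back at the original, satisfying the
# "identify changes made by the user" idea in a traceable way.
original = label_content(b"a generated image", synthetic=True,
                         generator="model-x", intermediary="platform-y")
edited = label_content(b"a generated image, cropped", synthetic=True,
                       generator="user-123", intermediary="platform-y",
                       parent=original)
```

Real deployments would more likely rely on an established provenance standard (e.g. cryptographically signed manifests) than on a bare hash chain, but the sketch shows how source, intermediary, and modification history can all hang off one identifier.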

  6. Violations of the IT Act and the rules made under it could result in penalties and prosecution of the platform, the intermediary, and its users.

While it is clear that non-compliance with the IT Rules (and the due-diligence requirements) could expose "intermediaries" to liability for content created by third parties, it is unclear how platforms that are not regarded as intermediaries, and users other than such intermediaries, would be held liable for breaches of the IT Rules.

The Advisory mandates that all intermediaries ensure compliance with immediate effect, that is, from March 15, 2024, without any further obligation to file or submit Action Taken-cum-Status reports to MEITY.
