Fairness In AI Systems Ensures A Sustainable Future, Tech Players Step In

February 5, 2022
Last year, Meta AI released Casual Conversations, an open-source dataset of 45,186 videos of participants having unscripted conversations. The purpose was to help AI researchers find meaningful signals for assessing the fairness of their computer vision and audio models across gender, age, apparent skin tone, and ambient-lighting subgroups.

Taking a step further, the team has now released human transcripts from the Casual Conversations dataset to encourage the community to develop strategies for reducing performance disparities, such as gaps in word error rate (WER), in automatic speech recognition (ASR) systems.

So far, only a few studies have examined how well speech recognition models perform for different individuals. The addition of human speech transcriptions opens the door for researchers to use the shared dataset to measure the performance gaps of speech recognition systems across different groups of people.
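WER is the standard metric here: the word-level edit distance between a reference transcript and the system's hypothesis, divided by the reference length. A minimal sketch of measuring it per subgroup follows; the function names and the `(group, reference, hypothesis)` record shape are illustrative choices, not Meta's implementation:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / len(ref)

def wer_by_group(samples):
    """Average WER per subgroup; samples are (group, reference, hypothesis)."""
    by_group = {}
    for group, ref, hyp in samples:
        by_group.setdefault(group, []).append(wer(ref, hyp))
    return {g: sum(scores) / len(scores) for g, scores in by_group.items()}
```

Comparing the per-group averages returned by `wer_by_group` is exactly the kind of gap measurement the transcripts are meant to enable: a system with a low overall WER can still perform markedly worse for one subgroup.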

“AI systems need data and benchmarks in order to measure performance, and the research community simply hasn’t had adequate ways to assess fairness concerns for speech recognition systems,” the team noted in its blog post.

Fairness beyond speech technology

It is well known that Joy Buolamwini’s research helped persuade tech giants IBM, Amazon, and Microsoft to put a hold on their facial recognition technology. She is the founder of the Algorithmic Justice League (AJL), an organisation working to mitigate AI bias and harm. Earlier, bias against women in Apple’s credit-card algorithm came into the limelight when Apple co-founder Steve Wozniak was given a credit limit ten times higher than his wife’s, despite the couple sharing all assets and accounts.

It would not be an exaggeration to say that the fairness of AI algorithms is a burgeoning field of study, driven by the requirement that decisions be free of bias and discrimination. Fairness also applies to AI-based decision-making tools: the European White Paper on AI provides a framework under which AI and algorithmic decision-making must be carefully considered.

For the sake of simplicity, consider a hypothetical case: an AI model deployed at a bank predicts an individual's loan eligibility based on their risk of default. Among the notable points in the European Union's White Paper on AI: a person should not be subjected to fully automated decision-making in the first place, they have a right to an explanation of how the model reaches its conclusion, and the decision must be non-discriminatory.
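The hypothetical bank scenario can be sketched as a deliberately transparent scorer. Every field name, threshold, and reason string below is invented for illustration; the point is the shape of the design, which returns the reasons behind each decision and routes anything short of a clear approval to a human reviewer, in the spirit of the White Paper's explanation and human-oversight points:

```python
def assess_loan(applicant):
    """Toy transparent loan scorer: returns (decision, reasons) so that
    every outcome can be explained to the applicant in plain terms.
    Protected attributes (gender, age, etc.) are deliberately not inputs."""
    reasons = []
    passed = 0
    if applicant["income"] >= 50_000:
        passed += 1
    else:
        reasons.append("income below 50,000 threshold")
    if applicant["debt_ratio"] <= 0.4:
        passed += 1
    else:
        reasons.append("debt-to-income ratio above 0.4")
    if applicant["missed_payments"] == 0:
        passed += 1
    else:
        reasons.append(f"{applicant['missed_payments']} missed payments on record")
    # Only unambiguous cases are decided automatically; everything else
    # goes to a person, so no one faces a purely automated refusal.
    decision = "approve" if passed == 3 else "refer to human reviewer"
    return decision, reasons
```

A real credit model would be statistical rather than rule-based, but the same contract applies: surface the factors behind the outcome and keep a human in the loop for adverse decisions.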

In short, this calls for AI models that are fair (unbiased), interpretable (explainable to end users), and transparent, by design. Leading by example, several tech players have in-house research teams, products, and tools pointed in this direction. For example:

  • Meta AI uses Fairness Flow, a diagnostic tool that enables their teams to analyse how some types of AI models and labels perform across different groups. 
  • IBM’s AI Fairness 360 (AIF360) is a comprehensive open-source toolkit of metrics for detecting and mitigating undesired bias in datasets and machine learning models, as well as cutting-edge techniques to counteract such bias.
  • Google’s What-If Tool lets practitioners probe a model’s performance on a dataset; the company also offers ‘Responsible AI with Google Cloud’ and ‘Responsible AI with TensorFlow’ courses and has laid out general best practices for AI.
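One common fairness check that toolkits of this kind report is disparate impact: the ratio of favourable-outcome rates between an unprivileged and a privileged group. The sketch below is self-contained and does not use the AIF360 API; the 0.8 cutoff is the informal "four-fifths rule" sometimes used as a screening threshold:

```python
def approval_rates(records):
    """Per-group approval rate; records are (group, approved) pairs."""
    counts = {}
    for group, approved in records:
        total, yes = counts.get(group, (0, 0))
        counts[group] = (total + 1, yes + int(approved))
    return {g: yes / total for g, (total, yes) in counts.items()}

def disparate_impact(records, privileged, unprivileged):
    """Ratio of unprivileged to privileged approval rates.
    Values below ~0.8 are commonly flagged for review."""
    rates = approval_rates(records)
    return rates[unprivileged] / rates[privileged]
```

For instance, if one group is approved 80% of the time and another 40%, the ratio is 0.5, well under the 0.8 screening threshold, which is the kind of gap these diagnostic tools are built to surface before a model ships.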

“Algorithms will continue to reflect and reinforce preconceptions that hold society and business back unless a determined effort is made”

Unreliable findings from biased AI models can harm reputations, severely impact end users, and ultimately erode people's trust in AI systems.

Although most biases emerge during the training of AI models, many unintentional biases surface over time, requiring developers to monitor their AI systems in real time. It is also vital to test models in real-world settings so that they perform well in the environments they are designed for.

Source: indiaai.gov.in
