Google AI’s V-MoE, A New Architecture For AI And Computer Vision

February 4, 2022
AI

Over the years, deep learning has delivered outstanding results across a wide array of tasks, including image classification and machine translation. Deep-learning models require little human intervention while performing these tasks, which has helped address challenges around time, accuracy, and other cognitive demands. However, training large models on huge datasets requires substantial computation, and achieving strong generalisation without compromising robustness makes large model sizes inevitable. Training such huge models accurately within limited resource budgets has therefore become essential. Conditional computation, which takes an input-oriented strategy by activating different parts of the model for different inputs rather than running the entire network on every piece of data, is one way to tackle these costs.
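The idea of conditional computation can be sketched in a few lines. This is a toy illustration, not any production architecture: the layer sizes, the single-expert router, and the function names are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense layer applies one big weight matrix to every input.
W_dense = rng.normal(size=(8, 8))

# Conditional alternative: several smaller "experts"; a router
# activates only one of them per input, so most weights stay idle.
experts = [rng.normal(size=(8, 8)) for _ in range(4)]
router_w = rng.normal(size=(8, 4))

def conditional_forward(x):
    scores = x @ router_w            # one routing score per expert
    chosen = int(np.argmax(scores))  # activate only the best-scoring expert
    return experts[chosen] @ x, chosen

x = rng.normal(size=8)
y, chosen = conditional_forward(x)
print(chosen, y.shape)
```

Different inputs activate different experts, so total parameters can grow with the number of experts while the computation per input stays constant.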

Against this backdrop, Google AI has come up with a new vision architecture built on a sparse mixture of experts, called Vision Mixture of Experts (V-MoE). The architecture has been used to train the largest vision model to date and demonstrates state-of-the-art accuracy on ImageNet. The major highlight of this model is that it can match that quality with around 50% fewer resources. Google has also open-sourced the code to train the model and has provided several pre-trained models.

How does V-MoE work?

Since their emergence, Vision Transformers (ViT) have become the dominant architecture for vision tasks. A ViT splits an image into patches, embeds each patch linearly, and adds position embeddings; the resulting vectors, called tokens, are fed to a transformer encoder. In V-MoE, a learnable router layer assigns each token to experts and decides how those experts should be weighted. Different tokens of the same image may be routed to different experts. Each token is routed to at most K of E experts, where K and E are predetermined. Thus computation per token stays constant while the size of the model scales.
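The routing step can be sketched in plain NumPy. This is a toy illustration of top-K gating under assumed shapes (16 tokens, E = 8 experts, K = 2), not Google's JAX implementation; the function name `route_tokens` is invented for the example.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def route_tokens(tokens, router_weights, k=2):
    """For each token, pick the top-k of E experts and their gating weights."""
    gates = softmax(tokens @ router_weights)           # (num_tokens, E)
    topk = np.argsort(gates, axis=-1)[:, ::-1][:, :k]  # indices of k best experts
    weights = np.take_along_axis(gates, topk, axis=-1) # their gate values
    return topk, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 32))     # 16 patch tokens, embedding dim 32
router_w = rng.normal(size=(32, 8))    # router projecting onto E = 8 experts
experts_idx, gate_w = route_tokens(tokens, router_w, k=2)
print(experts_idx.shape, gate_w.shape)  # (16, 2) (16, 2)
```

Each token's output would then be the gate-weighted sum of its K chosen experts' outputs, which is why only a small fraction of the network runs per token.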

In its experiments, the Google team pre-trained the model on JFT-300M, a dataset containing a large number of images. The models were then transferred to downstream tasks such as ImageNet. Two setups were used for transfer:

  • Fine-tuning the entire model on the available examples
  • Freezing the pre-trained network and tuning only a new head
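The two transfer setups amount to a choice of which parameters are updated during downstream training. A minimal sketch, with hypothetical shapes and a made-up 10-class task:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend pre-trained backbone and a fresh task head (hypothetical shapes).
backbone_w = rng.normal(size=(32, 16))
head_w = np.zeros((16, 10))  # new head for a 10-class downstream task

def trainable_params(fine_tune_backbone):
    """Setup 1: tune everything. Setup 2: freeze the backbone, tune only the head."""
    params = {"head": head_w}
    if fine_tune_backbone:
        params["backbone"] = backbone_w
    return params

print(sorted(trainable_params(True)))   # ['backbone', 'head']
print(sorted(trainable_params(False)))  # ['head']
```

The head-only setup is much cheaper since gradients never flow through the large backbone.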

In both cases, the sparse model outperformed its dense counterpart or achieved similar results much faster. To test the limits of the vision model, they trained a 15-billion-parameter model with 24 MoE layers. Trained on an extended version of JFT-300M, this model achieved 90.35% test accuracy on ImageNet. According to Google, it was the largest vision model to date, as far as they knew.

Due to hardware constraints, the models use a pre-defined buffer capacity for each expert. Once an expert's buffer is "full", further tokens assigned to it are dropped and cannot be processed. To overcome this constraint, the models are trained to sort tokens by an importance score, so that only the less important tokens are dropped. This led to higher-quality and more efficient predictions from the model.
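The capacity-plus-importance mechanism can be sketched as follows. This is an illustrative simplification, not Google's implementation: the function name, the scalar importance scores, and the single-expert assignment are all assumptions.

```python
import numpy as np

def assign_with_capacity(expert_choice, importance, num_experts, capacity):
    """Keep at most `capacity` tokens per expert, preferring important ones.

    Processing tokens in decreasing importance order means that when an
    expert's buffer fills up, only the least important tokens get dropped.
    """
    order = np.argsort(-importance)        # most important tokens first
    counts = np.zeros(num_experts, dtype=int)
    kept = []
    for t in order:
        e = expert_choice[t]
        if counts[e] < capacity:           # room left in this expert's buffer?
            counts[e] += 1
            kept.append(t)
    return sorted(kept)                    # dropped = everything else

rng = np.random.default_rng(0)
choice = rng.integers(0, 4, size=12)       # each token's chosen expert
score = rng.random(12)                     # routing/importance scores
kept = assign_with_capacity(choice, score, num_experts=4, capacity=2)
print(len(kept))  # at most 4 experts * capacity 2 = 8 tokens survive
```

Dropping by importance rather than by arrival order is what keeps prediction quality high even when many tokens are discarded.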

Final Thoughts

There is much yet to be discovered in the V-MoE space. According to Google researchers, large-scale conditional computation is only at its beginning, and V-MoE could be a big step forward for computer vision. Alongside V-MoE, they have also developed Batch Prioritized Routing (BPR), which lets the model prioritize and process only the most important tokens. Such sparse models could help in data-rich settings such as large-scale video modelling.

Source: indiaai.gov.in
