AI Next
US Rivals, According To Microsoft, Are Starting To Employ Generative AI In Offensive Cyber Operations

February 16, 2024

Microsoft announced Wednesday that it had detected and disrupted instances of U.S. adversaries, chiefly North Korea, Iran, Russia, and China, using or attempting to use generative artificial intelligence developed by the company and its business partner to mount or research offensive cyber operations.

The Redmond, Washington-based company said in a blog post that the techniques it and its partner OpenAI observed were neither “particularly novel nor unique” but represent a growing threat. The post also sheds light on how U.S. geopolitical rivals have been using large-language models to expand their ability to breach networks and conduct influence operations.

Microsoft said the “attacks” it uncovered all involved large-language models the partners own, and that even though they amounted to “early-stage, incremental moves,” it was important to expose them publicly.

Cybersecurity firms have long used machine learning on defense, mainly to detect anomalous behavior in networks. But offensive hackers and criminals use it too, and the introduction of large-language models, led by OpenAI’s ChatGPT, has escalated that game of cat and mouse.

Microsoft, which has invested billions of dollars in OpenAI, also published a report on Wednesday noting that generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. That compounds an existing threat to democracy: misinformation, in a year when over 50 countries will hold elections.

Microsoft offered a few examples, and said that in each case all generative AI accounts and assets of the named groups had been disabled:

— The North Korean cyberespionage group Kimsuky has used the models to research foreign think tanks that study the country, and to generate content likely to be used in spear-phishing hacking campaigns.

— Iran’s Revolutionary Guard has used large-language models to assist with social engineering, to troubleshoot software errors, and even to study how intruders might evade detection in a compromised network. That includes generating phishing emails, “one of which purports to be from an international development agency and another of which aims to entice well-known feminists to visit a feminism website created by the attacker.” The AI speeds up and improves email production.

— The Russian GRU military intelligence unit known as Fancy Bear has used the models to research satellite and radar technologies that may relate to the war in Ukraine.

— The Chinese cyberespionage group Aquatic Panda, which targets a broad range of industries, universities, and governments from France to Malaysia, has interacted with the models “in ways that suggest a limited exploration of how LLMs can augment their technical operations.”

— The Chinese group Maverick Panda, which has targeted U.S. defense contractors among other sectors for more than a decade, had interactions with large-language models suggesting it was evaluating their effectiveness as a source of information “on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”

In a separate blog post Wednesday, OpenAI said the techniques it detected were consistent with earlier assessments that found its current chatbot, based on the GPT-4 model, offers “only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.”

“There are two epoch-defining threats and challenges,” Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, told Congress in April of last year: one is China, and the other is artificial intelligence. The United States, Easterly said at the time, needs to ensure AI is built with security in mind.

The public release of ChatGPT in November 2022, and subsequent releases by rivals such as Google and Meta, have drawn criticism for being recklessly hasty, with security largely an afterthought in their development.

“Of course bad actors are using large-language models—that decision was made when Pandora’s box was opened,” said Amit Yoran, CEO of the cybersecurity firm Tenable.

Some cybersecurity professionals complain that Microsoft would act more responsibly by focusing on making large-language models more secure, rather than creating and selling tools to remedy flaws in them.

“Why not develop more secure black-box LLM foundation models rather than marketing countermeasures for an issue they are contributing to?” asked Gary McGraw, a veteran computer-security expert and co-founder of the Berryville Institute of Machine Learning.

While the use of AI and large-language models may not pose an immediately visible threat, says NYU professor and former AT&T chief security officer Edward Amoroso, they “will eventually become one of the most powerful weapons in every nation-state military’s offense.”
