Current directory: /home3/bjinbymy/public_html/indianext/wp-content/mu-plugins Chatbots Can Be Weaponized — How To Defend Against These Attacks - AI Next
Indianext
No Result
View All Result
Subscribe
  • News
    • Project Watch
    • Policy
  • AI Next
  • People
    • Interviews
    • Profiles
  • Companies
  • Make In India
    • Solutions
    • State News
  • About Us
    • Editors Corner
    • Mission
    • Contact Us
    • Work Culture
  • Events
  • Guest post
  • News
    • Project Watch
    • Policy
  • AI Next
  • People
    • Interviews
    • Profiles
  • Companies
  • Make In India
    • Solutions
    • State News
  • About Us
    • Editors Corner
    • Mission
    • Contact Us
    • Work Culture
  • Events
  • Guest post
No Result
View All Result
Latest News on AI, Healthcare & Energy updates in India
No Result
View All Result
Home AI Next

Chatbots Can Be Weaponized — How To Defend Against These Attacks

May 20, 2022
chatbots

The interface between humans and machines has significantly improved over the past few years. With advancements in chatbot technology, that interaction has taken on a more natural flow where systems can now discern intentionality and extract nuances that can be more readily turned into actions to perform or the selection of targeted information to return to the user. For businesses, this translates into improved customer service and more efficient use of people resources within the company. These factors have driven the mass adoption of chatbot technology across a wide array of businesses and industries.

However, as we’ve seen countless times before, the speed of chatbot adoption has outpaced the expansion of cybersecurity programs to protect against threats introduced by these technologies. Traditional cybersecurity attacks designed to steal sensitive information and gain persistent access to corporate networks have new life within the added layers provided by chatbot systems.

The Problem

In many installations, a “bolt-on” approach has facilitated the rapid adoption of chatbot technology where chatbot layers are added outside of the traditional compute environments that the company manages (i.e., specialized containers, new cloud computing environments, third-party SaaS delivery). This sets the stage for chatbots to be on the fringe of many security programs but is further complicated by the new surface this technology creates.

In the world of chatbots, natural language processing is dependent on configuration files and memory slots that define and house language components that include intents, entities, rules and storyboard creations. The various patterns that these constructs create are applied to user interaction via a machine learning-based model that allows for greater flexibility in how requests are structured but also has the byproduct of increasing solution complexity and attack surface.

Potential Weaponization

Identifying how these extra layers of complexities may be exploited is key to building strong protective and detective controls to reduce the cyber risk of these emerging technologies before the rise of targeted attacks. Anticipating the coming weaponization of chatbot technology may take many forms, but the exploitation of user interaction to skim information and the use of chatbots as a command-and-control channel stand out as the highest probability for the initial wave of attacks.

As noted earlier, chatbot configuration files completely drive the chatbot-to-customer dialog, making it easy for an attacker with a foothold in the environment to inject new fields into a valid conversation. One possible skimming scenario involves expanding a typical order status lookup dialog beyond requesting the required customer name and order number to also request the credit card detail and ZIP code used to place the order. Skimmed information could be stored in a memory slot within the chatbot, meaning it could reside only in chatbot application memory until an attacker exfiltrates all collected information through the chat channel by supplying predetermined “code words” designed to trigger the dump of collected data. To a typical user, there would be no change to functionality. The fact that collected data is never stored on disk and is exfiltrated using obfuscated data through the chat channel may mean the organization wouldn’t detect it for some time.

In most cases, the content of chatbot-user conversations falls outside the scope of cybersecurity tools designed to detect malicious interaction and data exfiltration. This sets the stage for potential command-and-control capabilities to be delivered through the chat dialog. As in the previous scenario, predefined codeword(s) could be used to trigger remote access trojan (RAT) functionality to access underlying systems, execute arbitrary commands and transfer files through the chat dialog. Chat-based RATs (ChatRATs) are another likely scenario that we will see in coming chatbot-centered cyberattacks.

A Strong Defense

Defending against these and other chatbot-related attacks requires an expansion of the cybersecurity program that adds preventative and detective controls at the environmental and application layers. This expansion would likely translate into extending the boundaries of cybersecurity practices such as security hardening, access management, protective technology, data loss prevention and incident detection to chatbot-hosting environments and services.

Protecting the application layer starts with ensuring all aspects of chatbot development, configuration and maintenance are fully integrated into the organization’s secure software delivery lifecycle (SDLC). This practice should include robust access controls to development environments, secure coding/configuration analysis, detection of malicious code injection, tight change control for production deployment and integrity monitoring of production applications. The integrity of chatbot configuration files is especially important since these files extend past simple service integration and access configurations to include defining all aspects of user interaction and language constructs, becoming primary targets for attackers wanting to hijack or add malicious augmentation to user interactions.

The final piece of a robust chatbot cybersecurity program involves the protection of the core machine learning engine that drives the chatbot. These engines are built by consuming dialog elements established in configuration files and processing against a set of training examples to build a conversational artificial intelligence model. While multiple AI technologies may be used for this process, artificial neural networks (ANN) are by far the most popular and powerful algorithms used to deliver this capability. The downside from a security perspective is that the intelligent processing of the contents of a model file, where knowledge is represented in mathematical connections between simulated neurons, is very challenging. Detection of a model Trojanized through the injection of malicious configuration/training content or model substitution would be very difficult to detect using content inspection. The best preventative and detective control for addressing this risk is through integrity monitoring at a file level linked to a change management system that can validate authorized model changes versus those introduced through nefarious intent.

Chatbots are likely to continue to evolve and revolutionize the way humans and machines interact. On a similar trajectory, the cyber risks and targeting of these technologies will also continue to rise. A proactive approach that instills core cybersecurity practices can help ensure a smooth adoption that avoids significant, costly cybersecurity incidents.

Source: forbes.com

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Editors Corner

How can Artificial Intelligence tools be a blessing for recruiters?

Will Artificial Intelligence ever match human intelligence?

Artificial Intelligence: Features of peer-to-peer networking

What not to share or ask on Chatgpt?

How can Machine Learning help in detecting and eliminating poverty?

How can Artificial Intelligence help in treating Autism?

Speech Recognition and its Wonders in your corporate life

Most groundbreaking Artificial Intelligence-based gadgets to vouch for in 2023

Recommended News

AI Next

Google: AI From All Perspectives

Alphabet subsidiary Google may have been slower than OpenAI to make its AI capabilities publicly available in the past, but...

by India Next
May 31, 2024
AI Next

US And UK Doctors Think Pfizer Is Setting The Standard For AI And Machine Learning In Drug Discovery

New research from Bryter, which involved over 200 doctors from the US and the UK, including neurologists, hematologists, and oncologists,...

by India Next
May 31, 2024
Solutions

An Agreement Is Signed By MEA, MeitY, And CSC To Offer E-Migration Services Via Shared Service Centers

Three government agencies joined forces to form a synergy in order to deliver eMigrate services through Common Services Centers (CSCs)...

by India Next
May 31, 2024
AI Next

PR Handbook For AI Startups: How To Avoid Traps And Succeed In A Crowded Field

The advent of artificial intelligence has significantly changed the landscape of entrepreneurship. The figures say it all. Global AI startups...

by India Next
May 31, 2024

Related Posts

Google
AI Next

Google: AI From All Perspectives

May 31, 2024
Pfizer
AI Next

US And UK Doctors Think Pfizer Is Setting The Standard For AI And Machine Learning In Drug Discovery

May 31, 2024
Artificial-Intelligence
AI Next

PR Handbook For AI Startups: How To Avoid Traps And Succeed In A Crowded Field

May 31, 2024
openai
AI Next

OpenAI Creates An AI Safety Committee Following Significant Departures

May 31, 2024
Load More
Next Post
ml

How Machine Learning Is Transforming Healthcare In India

IndiaNext Logo
IndiaNext Brings you latest news on artificial intelligence, Healthcare & Energy sector from all top sources in India and across the world.

Recent Posts

Google: AI From All Perspectives

US And UK Doctors Think Pfizer Is Setting The Standard For AI And Machine Learning In Drug Discovery

An Agreement Is Signed By MEA, MeitY, And CSC To Offer E-Migration Services Via Shared Service Centers

PR Handbook For AI Startups: How To Avoid Traps And Succeed In A Crowded Field

OpenAI Creates An AI Safety Committee Following Significant Departures

Tags

  • AI
  • EV
  • Mental WellBeing
  • Clean Energy
  • TeleMedicine
  • Healthcare
  • Electric Vehicles
  • Artificial Intelligence
  • Chatbots
  • Data Science
  • Electric Vehicles
  • Energy Storage
  • Machine Learning
  • Renewable Energy
  • Green Energy
  • Solar Energy
  • Solar Power

Follow us

  • Facebook
  • Linkedin
  • Twitter
© India Next. All Rights Reserved.     |     Privacy Policy      |      Web Design & Digital Marketing by Heeren Tanna
No Result
View All Result
  • About Us
  • Activate
  • Activity
  • Advisory Council
  • Archive
  • Career Page
  • Companies
  • Contact Us
  • cryptodemo
  • Energy next
  • Energy Next Archive
  • Home
  • Interviews
  • Make in India
  • Market
  • Members
  • Mission
  • News
  • News Update
  • People
  • Policy
  • Privacy Policy
  • Register
  • Reports
  • Subscription Page
  • Technology
  • Top 10
  • Videos
  • White Papers
  • Work Culture
  • Write For Us

Welcome Back!

Login to your account below

Forgotten Password?

Retrieve your password

Please enter your username or email address to reset your password.

Log In

Add New Playlist

IndiaNext Logo

Join Our Newsletter

Get daily access to news updates

no spam, we hate it more than you!