New Chatbot Can Spot Cyberattacks Before They Start

From data breaches to widespread systemic shutdowns, cyberattacks like the 2024 attack on the Fulton County (Georgia) government now occur as regularly as natural disasters and cause just as much destruction. And, like severe weather, they can be predicted, thanks to a new artificial intelligence (AI) tool that analyzes social media to determine who could cause the next big cyberattack.

Researchers in Georgia Tech’s Scheller College of Business, together with colleagues at the University of the District of Columbia (UDC) in Washington, D.C., developed a chatbot that analyzed sentiment on popular social media sites such as X (formerly known as Twitter) to identify potential cyberthreats. The chatbot tweeted information to engage Twitter users who either tweeted about news events or holidays, or retweeted cyberattack news. It interacted with 100,000 users over a three-month period. Sentiment analysis, which gauges users’ feelings, attitudes, and moods, was performed on the human responses to the bot’s tweets.
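To make the approach concrete, the sketch below shows the kind of per-message scoring involved. It is a hypothetical illustration, not the research team’s code: it assumes Python with NLTK’s VADER analyzer (a lexicon commonly used for short social media text) and uses made-up replies.

    # Hypothetical sketch only; not the researchers' code or data.
    # Scores the sentiment of replies to a bot's posts with NLTK's VADER,
    # a lexicon designed for short social media text (pip install nltk).
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
    sia = SentimentIntensityAnalyzer()

    # Made-up replies to a chatbot's post about cyberattack news.
    replies = [
        "Glad the county got its systems back online so quickly.",
        "Another breach? These agencies deserve whatever they get.",
    ]

    for text in replies:
        scores = sia.polarity_scores(text)  # returns neg, neu, pos, compound
        # A strongly negative compound score flags the reply for closer review.
        status = "review" if scores["compound"] <= -0.5 else "ok"
        print(f"{status:>6}  {scores['compound']:+.2f}  {text}")

In a fuller system, per-message scores like these would be aggregated across many interactions before drawing any conclusions about a user’s attitudes or intent.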

Applying sentiment analysis to human-chatbot interactions is not new. Globally, companies use chatbots to gauge customers’ reactions to brands and products. During the COVID-19 pandemic, governments and health organizations employed chatbots to determine public attitudes toward vaccinations, preventive measures, and mask wearing. However, identifying potential cyberthreats via sentiment analysis represents a unique and complicated application.

“When you examine sentiment analysis on a chatbot through a cybersecurity lens, you are looking for potential hackers,” said Scheller Professor John McIntyre, who is also the executive director of the Center for International Business Education and Research. “Catching hackers using sentiment analysis is challenging, but predictive models can be built to find them.

“AI can target a particular population to understand its expressions of approval, disapproval, or even intent to harm, attack, or misuse the technology.”

A team led by McIntyre and UDC Associate Professors Amit Arora and Anshu Arora conducted the research. They set out to see whether cybersecurity threats could be detected through social media, but the study is only the beginning of a potentially fruitful approach to cyberthreat prevention. McIntyre believes the study could expand to analyzing sentiment in other languages and even on other platforms.

“As we move toward a world in which we'll rely more and more on communication technologies and social media, there will be an increasing number of threats,” he said. “We must know how to counter such threats.”

Funded by the Department of Defense’s Applied Research Lab for Intelligence and Security

Arora A, Arora A, McIntyre J. Developing Chatbots for Cyber Security: Assessing Threats through Sentiment Analysis on Social Media. Sustainability. 2023; 15(17):13178. https://doi.org/10.3390/su151713178

News Contact

Tess Malone, Senior Research Writer/Editor

tess.malone@gatech.edu