Researchers Use Machine Learning to Fight COVID-19 Disinformation

Disinformation about COVID-19 spreads almost as fast as the disease itself.

To ensure Americans can find the most accurate information, College of Computing researchers are creating machine-learning (ML) and data science tools to help fact-checkers be more efficient.

The Disinformation Dilemma

Although access to high-quality news is important at any time, the ever-changing nature of COVID-19 makes it even more vital that people can find vetted information. Many Americans get their news from social media, where rumors can spread as readily as memes.

“Rumors, hoaxes, fake cures, bioweapon claims, and disinformation campaigns about COVID-19 are prevalent on social media,” said School of Computational Science and Engineering Assistant Professor Srijan Kumar. “These induce anger, anxiety, and stress in readers, and in many cases, have even led to fatalities, such as hydroxychloroquine overdose.”

Newsroom fact-checkers are at the forefront of the fight against false information, but manually verifying every claim is time-consuming at best and nearly impossible in the age of COVID-19. So Kumar and School of Computer Science Professor Mustaque Ahamad are building data-driven, secure solutions for fact-checking.

A Next-Generation Solution

Kumar and Ahamad are a well-matched team. In his past cybersecurity research, Ahamad has worked with professional fact-checkers to determine what they need to do their jobs at news organizations. Kumar, for his part, has been building ML and data-driven tools to detect disinformation. Combating COVID-19 disinformation was a natural project for the pair.

“Together, we started collaborating to build the next generation of data-driven and security-minded solutions for effective fact checking,” Kumar said.

Their solution is to detect disinformation early, before it even reaches the fact-checkers. With this in mind, they plan to develop ML techniques to remove deliberately misleading information from the news.

Their ML models will be able to distinguish true from false information using only a few training data points.
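
As a rough illustration of that idea, the sketch below shows how a text classifier can be trained on just a handful of labeled claims using off-the-shelf tools. It is a hypothetical example, not the researchers' actual system; the claims, labels, and scikit-learn pipeline are assumptions made for demonstration only.

# Hypothetical sketch: fitting a claim classifier on only a few labeled examples.
# The claims, labels, and model choice are illustrative, not from the project.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

claims = [
    "Drinking bleach cures COVID-19",           # labeled false
    "5G towers spread the coronavirus",         # labeled false
    "Washing hands reduces transmission risk",  # labeled accurate
    "Vaccines are tested in clinical trials",   # labeled accurate
]
labels = [1, 1, 0, 0]  # 1 = false claim, 0 = accurate claim

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(claims, labels)

# Estimated probability that a new, unseen claim is false.
print(model.predict_proba(["Garlic soup prevents coronavirus infection"])[0][1])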

“Our models will triage the cases that are most likely to be false in order of their impact on the readers,” Kumar said.
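
One way to picture that triage step, as a hedged sketch rather than the team's actual pipeline, is a queue that ranks claims by the product of how likely they are to be false and how widely they are being seen. The probability values and share counts below are hypothetical placeholders.

# Hypothetical triage queue: rank claims by (estimated probability of falsehood) x (reach proxy).
# The probabilities and share counts are made-up placeholders.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    p_false: float  # model's estimated probability the claim is false
    shares: int     # assumed impact proxy: how often the claim has been shared

def triage(claims):
    # Highest expected impact first, so fact-checkers see the riskiest claims early.
    return sorted(claims, key=lambda c: c.p_false * c.shares, reverse=True)

queue = triage([
    Claim("Hydroxychloroquine is a proven cure", p_false=0.92, shares=50_000),
    Claim("Masks reduce droplet spread", p_false=0.05, shares=80_000),
    Claim("The virus was engineered as a weapon", p_false=0.88, shares=20_000),
])
for c in queue:
    print(f"{c.p_false * c.shares:10.0f}  {c.text}")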

The models will also be customizable to the individual fact-checker’s topical, geographical, and language preferences. As the project develops, Kumar and Ahamad will collaborate with professional fact-checkers to ensure the models are effective throughout the research.

“Our framework will bring together a one-stop shop for a group of fact-checkers to collaboratively identify false information,” Kumar said. “This information can then be shipped to appropriate stakeholders, so that readers can be appropriately alerted when they view it and the hoaxes can be removed from social media circulation.”

For more coverage of Georgia Tech’s response to the coronavirus pandemic, please visit our Responding to COVID-19 page.

News Contact

Tess Malone, Communications Officer

tess.malone@cc.gatech.edu