Family Loss Brings About Medical Breakthrough

Hong Yeo

Hong Yeo shows off the latest version of his wearable sleep monitoring device.

The call from his mom is still vivid 20 years later. Moments this big and this devastating can define lives, and for Hong Yeo, today a Georgia Tech mechanical engineer, this call certainly did. Yeo was a 21-year-old in college studying car design when his mom called to tell him his father had died in his sleep. A heart attack claimed the life of the 49-year-old high school English teacher who had no history of heart trouble and no signs of his growing health threat. For the family, it was a crushing blow that altered each of their paths.

“It was an uncertain time for all of us,” said Yeo. “This loss changed my focus.”

For Yeo, thoughts and dreams of designing cars for Hyundai in Korea turned instead toward medicine. The shock of his father going from no signs of illness to gone forever developed into a quest for medical answers that might keep other families from experiencing the pain and loss his family did — or at least make such losses less likely.

Yeo’s own research and schooling in college pointed to a major problem in understanding sleep and how our bodies’ systems perform during it: a lack of data. He became determined to invent a way to give doctors better information, allowing them to spot a problem like his father’s before it became life-threatening.

His answer: a wearable sleep data system. Now very close to being commercially available, Yeo’s device comes after years of work on the materials and electronics for a comfortable, easy-to-wear mask that can gather sleep data over multiple days or even weeks, allowing doctors to catch sporadic heart problems or other issues. Unlike the bulky strap-and-cord devices currently available for at-home heart monitoring, it offers ease of use and comfort, requiring little to no change to users’ bedtime routines. This means researchers can collect data from sleep patterns that are as close to normal sleep as possible.

“Most of the time now, gathering sleep data means the patient must come to a lab or hospital for sleep monitoring. Of course, it’s less comfortable than home, and the devices patients must wear make it even less so. Also, the process is expensive, so it’s rare to get multiple nights of data,” says Audrey Duarte, University of Texas human memory researcher.  

Duarte has been working with Yeo on this system for more than 10 years. She says there are so many mental and physical health outcomes tied to sleep that good, long-term data has the potential to have tremendous impact.

“The results we’ve seen are incredibly encouraging, related to many things — from heart issues to areas I study more closely like memory and Alzheimer’s,” said Duarte.

Yeo’s device may not have caught the arrhythmia that caused his father’s heart attack, but nights or weeks of data would have made effective medical intervention much more likely.  

Inspired by his own family’s loss, Yeo’s life’s work has become a tool of hope for others.  

News Contact

Blair.Meeks@gatech.edu

Georgia CTSA Maternal Health Webinar


Carrie Cwiak, MD, MPH
Professor
Gynecology & Obstetrics and Epidemiology
Emory University

REGISTER HERE

Join us for a webinar to stimulate dialogue around challenges and opportunities in recruiting pregnant patients for research in the current Georgia legislative climate. 

Chatbots Are Poor Multilingual Healthcare Consultants, Study Finds

The Web Conference 2024

Georgia Tech researchers say non-English speakers shouldn’t rely on chatbots like ChatGPT to provide valuable healthcare advice. 

A team of researchers from the College of Computing at Georgia Tech has developed a framework for assessing the capabilities of large language models (LLMs).

Ph.D. students Mohit Chandra and Yiqiao (Ahren) Jin are the co-lead authors of the paper “Better to Ask in English: Cross-Lingual Evaluation of Large Language Models for Healthcare Queries.”

Their paper’s findings reveal a gap in LLMs’ ability to answer health-related questions in languages other than English. Chandra and Jin point out the limitations of LLMs for users and developers but also highlight their potential.

Their XLingEval framework cautions non-English speakers against using chatbots as alternatives to doctors for advice. However, models can improve by deepening the data pool with multilingual source material, such as their proposed XLingHealth benchmark.

“For users, our research supports what ChatGPT’s website already states: chatbots make a lot of mistakes, so we should not rely on them for critical decision-making or for information that requires high accuracy,” Jin said.   

“Since we observed this language disparity in their performance, LLM developers should focus on improving accuracy, correctness, consistency, and reliability in other languages,” Jin said. 

Using XLingEval, the researchers found chatbots are less accurate in Spanish, Chinese, and Hindi compared to English. By focusing on correctness, consistency, and verifiability, they discovered: 

  • Correctness decreased by 18% when the same questions were asked in Spanish, Chinese, and Hindi. 
  • Answers in non-English languages were 29% less consistent than their English counterparts. 
  • Non-English responses were 13% less verifiable overall. 
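The consistency metric above compares a model's answers to the same question across languages. The paper's actual method is more sophisticated, but a minimal illustrative sketch — using token-overlap (Jaccard) similarity as a stand-in for semantic similarity, with answers assumed to be translated back to English first — might look like this:

```python
# Hypothetical sketch of a cross-lingual consistency check in the spirit of
# XLingEval. Jaccard token overlap stands in for the semantic similarity a
# real framework would use; all example answers are invented.

def jaccard_similarity(a: str, b: str) -> float:
    """Token-overlap similarity between two answers, from 0.0 to 1.0."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def consistency_score(answers: list[str]) -> float:
    """Mean pairwise similarity across answers to the same question."""
    pairs = [(i, j) for i in range(len(answers))
             for j in range(i + 1, len(answers))]
    if not pairs:
        return 1.0
    return sum(jaccard_similarity(answers[i], answers[j])
               for i, j in pairs) / len(pairs)

# Answers to one health question, already translated back to English.
english = "drink fluids and rest for a mild fever"
spanish_back = "rest and drink fluids for a mild fever"
hindi_back = "take antibiotics immediately"

print(round(consistency_score([english, spanish_back, hindi_back]), 2))  # → 0.33
```

A low score flags a question on which the model gives materially different advice depending on the language it is asked in.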

XLingHealth contains question-answer pairs that chatbots can reference, which the group hopes will spark improvement within LLMs.  

The HealthQA dataset uses specialized healthcare articles from the popular healthcare website Patient. It includes 1,134 health-related question-answer pairs as excerpts from original articles.  

LiveQA is a second dataset containing 246 question-answer pairs constructed from frequently asked questions (FAQ) platforms associated with the U.S. National Institutes of Health (NIH).  

For drug-related questions, the group built a MedicationQA component. This dataset contains 690 questions extracted from anonymous consumer queries submitted to MedlinePlus. The answers are sourced from medical references, such as MedlinePlus and DailyMed.   
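The three datasets described above could be combined into a single benchmark along these lines; this is an illustrative sketch, and the field names are invented rather than taken from the paper's actual schema:

```python
# Hypothetical structure for an XLingHealth-style benchmark assembled from
# the HealthQA, LiveQA, and MedicationQA datasets described in the article.
from dataclasses import dataclass

@dataclass
class QAPair:
    dataset: str   # "HealthQA", "LiveQA", or "MedicationQA"
    question: str
    answer: str
    source: str    # e.g., "Patient", "NIH FAQs", "MedlinePlus/DailyMed"

def summarize(pairs: list) -> dict:
    """Count question-answer pairs per source dataset."""
    counts: dict = {}
    for p in pairs:
        counts[p.dataset] = counts.get(p.dataset, 0) + 1
    return counts

# Toy entries; the real benchmark holds 1,134 + 246 + 690 = 2,070 pairs.
benchmark = [
    QAPair("HealthQA", "What causes migraines?", "(answer text)", "Patient"),
    QAPair("LiveQA", "Is the flu vaccine safe?", "(answer text)", "NIH FAQs"),
    QAPair("MedicationQA", "How should I store insulin?", "(answer text)", "MedlinePlus"),
]
print(summarize(benchmark))
```

Keeping the source dataset on each pair makes it easy to break evaluation results down by question type, as the study does.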

In their tests, the researchers asked ChatGPT-3.5 and MedAlpaca more than 2,000 medical questions. MedAlpaca is a healthcare question-answer chatbot trained on medical literature. Yet more than 67% of its responses to non-English questions were irrelevant or contradictory.  

“We see far worse performance in the case of MedAlpaca than ChatGPT,” Chandra said. 

“The majority of the data for MedAlpaca is in English, so it struggled to answer queries in non-English languages. GPT also struggled, but it performed much better than MedAlpaca because it had some sort of training data in other languages.” 

Ph.D. student Gaurav Verma and postdoctoral researcher Yibo Hu co-authored the paper. 

Jin and Verma study under Srijan Kumar, an assistant professor in the School of Computational Science and Engineering, and Hu is a postdoc in Kumar’s lab. Chandra is advised by Munmun De Choudhury, an associate professor in the School of Interactive Computing. 
 
The team will present their paper at The Web Conference, occurring May 13-17 in Singapore. The annual conference focuses on the future direction of the internet. The group’s presentation is a fitting match for the conference’s location.  

English and Chinese are the most common languages in Singapore. The group tested Spanish, Chinese, and Hindi because they are the world’s most spoken languages after English. Personal curiosity and background played a part in inspiring the study. 

“ChatGPT was very popular when it launched in 2022, especially for us computer science students who are always exploring new technology,” said Jin. “Non-native English speakers, like Mohit and I, noticed early on that chatbots underperformed in our native languages.” 

School of Interactive Computing communications officer Nathan Deen and School of Computational Science and Engineering communications officer Bryant Wine contributed to this report.

Mohit Chandra and Yiqiao (Ahren) Jin
The Web Conference 2024
News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu

Nathan Deen, Communications Officer
ndeen6@cc.gatech.edu

Teaching AI to Collaborate, not Merely Create, Through Dance

A Kennesaw State University dance student and the LuminAI-powered avatar dance together.


Two children are playing with a set of toys, each playing alone. That kind of play involves a somewhat limited set of interactions between the child and the toy. But what happens when the two children play together using the same toys?

“The actions are similar, but the choices and outcomes are very different because of the dynamic changes they’re making with the other person,” says Brian Magerko, Regents’ Professor in Georgia Tech’s School of Literature, Media, and Communication. “It’s a thing that humans do all the time, and computers don’t do with us at all.”

Welcome to the next frontier of artificial intelligence (AI) — not just generating but collaborating in real-time.

Magerko and his colleagues, Georgia Tech research scientist Milka Trajkova and Kennesaw State University Associate Professor of Dance Andrea Knowlton, are putting a collaborative AI system they’ve developed to the ultimate test: the world’s first collaborative AI dance performance.

Dance Partner

LuminAI is an interactive system that allows participants to engage in collaborative movement improvisation with an AI virtual dance partner projected on a nearby screen or wall. LuminAI analyzes participant movements and improvises responses informed by memories of past interactions with people. In other words, LuminAI learns how to dance by dancing with us.
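That memory-driven improvisation loop can be sketched in miniature. The following is a simplified illustration, not LuminAI's actual implementation: the agent stores pairs of observed movements and its responses, then answers a new movement with the response linked to the most similar remembered one. Real pose data would be sequences of joint angles; here a movement is just a small feature vector.

```python
# Simplified, hypothetical sketch of memory-based movement improvisation in
# the spirit of LuminAI: respond to a new movement with the response paired
# to the nearest remembered movement.
import math

class ImprovAgent:
    def __init__(self):
        self.memory = []  # list of (movement_vector, response_label) pairs

    def remember(self, movement, response):
        """Store one observed movement and the response given to it."""
        self.memory.append((movement, response))

    def respond(self, movement):
        """Return the response linked to the nearest remembered movement."""
        if not self.memory:
            return "mirror"  # fallback: simply mirror the partner
        return min(self.memory, key=lambda m: math.dist(m[0], movement))[1]

agent = ImprovAgent()
agent.remember([0.9, 0.1], "spin")    # fast, low movement -> spin
agent.remember([0.1, 0.8], "reach")   # slow, high movement -> reach
print(agent.respond([0.85, 0.2]))     # closest to the first memory → spin
```

Because the memory grows with every interaction, the agent's repertoire — like a human dancer's — is shaped by everyone it has danced with.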

The National Science Foundation-supported project began about 12 years ago in a lab and became an art installation and public demo. LuminAI has since moved into a different phase as a creative collaborator and education tool in a dance studio.

“We’re looking at the role LuminAI can play in dance education. As far as we’re aware, this is the first implemented version of an AI dancer in a dance studio,” says Trajkova, who was a professional ballet dancer before becoming a research scientist on the project.

To prepare LuminAI to collaborate with dancers, the research team started by studying pairs of improvisational dancers.

Performers on stage during a LuminAI performance.

“We’re trying to understand how non-verbal, collaborative creativity occurs,” Knowlton says. “We start by trying to understand influencing factors that are perceived as contributing to improvisational success between two artists. Through that understanding, we applied those criteria to an AI system so it can have a similar experience with co-creative success.”

“We’re working on a creative arc,” adds Trajkova. “So instead of the AI agent just generating movements in response to the last thing that happened, we’re working to track and understand the dynamics of creative ideas across time as a continuous flow, rather than isolated instances of reaction.”

Students from Knowlton’s improvisational dance class at Kennesaw State spent two months of their spring semester working routinely with the LuminAI dancer and recording their impressions and experiences. One purpose the team discovered is that LuminAI serves as a third view for dancers, letting them try out ideas with the system before trying them with a partner.

The classroom experiment will culminate in a public performance on May 3 at Kennesaw State’s Marietta Dance Theater featuring the students performing with the LuminAI dancer. As far as the research team is aware, the event is the world’s first collaborative AI dance performance.

While not all the dancers embraced having an AI collaborator, some of those who were skeptical at first left the experience more open to the possibility of collaborating with AI, Knowlton says. Regardless of their feelings toward working with AI, Knowlton says she believes the dancers gained valuable skills in working with specialized technology, especially as dance performances evolve to include more interactive media.

Refined Movement

So, what’s next for LuminAI? The project’s findings suggest at least two paths forward. The first is continued exploration of how AI systems can be taught to cooperate and collaborate more like humans.

“With the advent of generative AI these past few years, it’s been really clear how great a need there is for this sort of social cognition,” says Magerko. “One of the things we’re going to be getting off the ground is sense-making with large language models. How do you collaborate with an AI system – rather than just making text or images, they’ll be able to make with us.”

The second involves the body movements LuminAI has been cataloging and analyzing over the years. Dance exemplifies highly refined motor skills, often exhibiting a level of detail surpassing that found in various athletic disciplines or physical therapy. While the tools designed to capture these intricate movements—through cameras and AI—are still nascent, the potential for harnessing this granular data is significant, Trajkova says.

Performers on stage during a LuminAI performance.

That exploration begins on May 30 with a two-day summit at Georgia Tech to discuss applications for transforming performance athletics, with interdisciplinary participants in dance, computer vision, biomechanics, psychology, and human-computer interaction from Georgia Tech, Emory, KSU, Harvard, the Royal Ballet in London, and the Australian Ballet.

“It’s about understanding AI’s role in augmenting training and promoting wellness, as well as diving deep into decoding the artistry of human movements. How can we extract insights about the quality of athletes’ movements so we can help develop and enhance their own unique nuances?” Trajkova says.

Technology Licensing Lunch and Learn-Intellectual Property Litigation

Intellectual Property Litigation

Join Technology Licensing for a Lunch and Learn Presentation:

Tuesday, May 21, 2024
11 a.m. - 1 p.m.
Location: GTAPS Classroom

Presentation given by Puja Lea, Partner, Troutman Pepper. Puja is a seasoned patent litigator and GT grad! 

IPaT Hosts High School Computer Science Teachers

Georgia high school computer science teachers participating in the Georgia Tech Rural Computer Science Initiative


On March 25-26, the Institute for People and Technology (IPaT) hosted the spring gathering of rural Georgia high school computer science teachers participating in a state-funded program to help high schoolers learn computer programming.

The Georgia Tech Rural Computer Science Initiative offers co-teaching lessons prepared by Georgia Tech professors. The program offers virtual classes in computer science to help develop career pathways by exposing high school students to critical areas such as coding, cybersecurity, artificial intelligence, sensors, and data visualization. The program is funded by the Georgia General Assembly.

The initiative, launched in 2022, includes 16 school districts and 19 high schools and has taught 1,329 students. The program is expected to continue growing in 2024, when the number of participating districts will rise to 24.

The program is run by Lizanne DeStefano, director of Georgia Tech’s Center for Education Integrating Science, Mathematics and Computing (CEISMC), and Leigh McCook, director with the Georgia Tech Research Institute (GTRI). There are now thirteen Georgia Tech employees supporting the program across CEISMC, GTRI, and IPaT.

The meeting was designed to gather feedback and envision future directions to make the program even more successful.

2024 BioE Day

Presentations from the 2023 BioE Award Winners, featured BioE Alum Seminars, and a Rapid Fire Thesis Competition. Lunch served (while supplies last!).

Georgia Tech Microsoft CloudHub Partnership Explores Electric Vehicle Adoption

Omar Asensio is Associate Professor at Georgia Institute of Technology and Climate Fellow, Harvard Business School


With new vehicle models being developed by major brands and a growing supply chain, the electric vehicle (EV) revolution seems well underway. But as consumer purchases of EVs have slowed, carmakers have backtracked on planned EV manufacturing investments. A major roadblock to wider EV adoption remains the lack of a fully realized charging infrastructure. With just under 51,000 public charging stations nationwide and sizable gaps between urban and rural areas, this inconsistency is a major driver of buyer hesitance.

 

How do we understand, at a large scale, ways to make it easier for consumers to have confidence in public infrastructure? That is a major issue holding back electrification for many consumer segments.


- Omar Asensio, Associate Professor at Georgia Institute of Technology and Climate Fellow, Harvard Business School | Director, Data Science & Policy Lab

Omar Asensio, associate professor in the School of Public Policy and director of the Data Science and Policy Lab at the Georgia Institute of Technology, and his team have been working to solve this trust issue using the Microsoft CloudHub partnership resources. Asensio is also currently a visiting fellow with the Institute for the Study of Business in Global Society at the Harvard Business School.

The CloudHub partnership gave the Asensio team access to Microsoft’s Azure OpenAI to sift through vast amounts of data collected from different sources and identify relevant connections. The team needed to know whether AI could recognize negative purchaser sentiment within a community whose internal lingo differs from that of the general consumer population. Early results yielded little. The team then trained the AI on example data collected from EV enthusiasts, reaching a sentiment classification accuracy that now exceeds that of human experts and of data parsed from government-funded surveys.
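The fix described above amounts to showing the model labeled examples of community jargon before asking it to classify. A minimal sketch of that prompt construction follows; the example reviews, labels, and wording are invented for illustration (the study itself used Azure OpenAI models):

```python
# Hypothetical sketch of few-shot prompt construction for EV-review sentiment
# classification. Labeled examples of EV-community lingo are prepended so a
# general-purpose LLM can classify domain-specific complaints correctly.

FEW_SHOT_EXAMPLES = [
    # "ICEd" is EV slang for a charging spot blocked by a gas (ICE) car --
    # negative sentiment a generic classifier could easily miss.
    ("Got ICEd again at this location.", "negative"),
    ("Handshake failed twice, then it finally charged.", "negative"),
    ("Pulled 150 kW the whole session, no issues.", "positive"),
]

def build_prompt(review: str) -> str:
    """Assemble a few-shot classification prompt to send to an LLM."""
    lines = ["Classify the EV charging review as positive or negative.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {review}\nSentiment:")
    return "\n".join(lines)

print(build_prompt("Both stalls ICEd, drove off."))
```

The completed prompt ends at "Sentiment:", so the model's next token supplies the label; the same prompt template can then be run over every review in the corpus.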

The use of trained AI promises to expedite industry response to consumer sentiment at a much lower cost than previously possible. “What we’re doing with Azure is a lot more scalable,” Asensio said. “We hit a button, and within five to 10 minutes, we had classified all the U.S. data. Then I had my students look at performance in Europe, with urban and non-urban areas. Most recently, we aggregated evidence of stations across East and Southeast Asia, and we used machine learning to translate the data in 72 detected languages.”

 

We are excited to see how access to compute and AI models is accelerating research and having an impact on important societal issues. Omar's research sheds new light on the gaps in electric vehicle infrastructure and AI enables them to effectively scale their analysis not only in the U.S. but globally.

- Elizabeth Bruce, Director, Technology for Fundamental Rights, Microsoft

Asensio's pioneering work illustrates the interdisciplinary nature of today’s research environment, from machine learning models that predict problems to tools that help improve EV infrastructure. The team next plans to apply the technique to additional datasets to address equity concerns and reduce the number of “charging deserts.” The findings could lead to policies that support EV adoption in infrastructure-lacking regions, enabling a true automotive electrification revolution and long-term environmental sustainability in the U.S.

- Christa M. Ernst

Source Paper: Reliability of electric vehicle charging infrastructure: A cross-lingual deep learning approach - ScienceDirect

News Contact

Christa M. Ernst
Research Communications Program Manager
Topic Expertise: Robotics | Data Sciences| Semiconductor Design & Fab
Research @ the Georgia Institute of Technology
christa.ernst@research.gatech.edu

L[ux] Lab Hosts Medical Device Usability Study

Cassidy Wang interacting with a physician who is testing Ethos' needle guidance system.


Ethos Medical recently made use of the College of Design’s L[ux] Lab to conduct a usability study of its needle guidance system prototype. Founded by Georgia Tech students (now alumni), Ethos Medical won the 2019 Georgia Tech InVenture Prize for their first-of-its-kind medical device.

Using ultrasound imaging technology coupled with a custom-built guidance tool, they invented a guidance system to help physicians navigate needles into the spine accurately and safely. In 2020, they were awarded a Phase I grant from the National Science Foundation’s Small Business Innovation Research program, followed by a Phase II grant in 2021.

Ethos Medical’s co-founders Cassidy Wang, CEO, and Lucas Muller, CTO, personally oversaw the study held in the Technology Square Research Building lab space, working with physicians from local hospitals to better understand the human factors of their novel device.

The study was designed and moderated by Maureen Carroll and Stephen Jones of Creature, an award-winning industrial design firm based in Atlanta.

“Creature and our engineering partner, Enginuity Works, are working to improve the design, human factors, and usability of the system. By using the L[ux] Lab and bringing in emergency room doctors, we can observe physicians using the system and evaluate how well our system integrates with their work process,” said Carroll, founder of Creature.

Several Georgia Tech students from the SimTigrate Design Lab were also present, gaining hands-on experience with the planning and execution of such a study.

One of the study’s goals is to assess how emergency room clinicians may adapt their existing workflow for performing lumbar punctures to one that incorporates the new needle guidance system, while considering realistic procedural and safety constraints. A second goal is to evaluate clinicians’ ability to accomplish specific tasks that require interaction with the system’s user interfaces, and to identify interfaces and interactions they perceive as unintuitive or difficult to perform.

The L[ux] Lab, part of the SimTigrate Design Lab space, is an interdisciplinary research lab using evidence-based design to improve the medical experience for patients and providers. SimTigrate – combining concepts of simulation and integration – grew out of the Healthy Environments Research Group which involved Georgia Tech and Emory University with the goal of improving healthcare outcomes. The lab is affiliated with the Georgia Tech College of Design and is led by Jennifer DuBose, executive director of the SimTigrate Design Lab and principal research associate in the College of Design.

“We’re fortunate that the L[ux] Lab’s simulated clinical environment is so conducive to medical device usability testing, and we’re grateful for all the support shown by Jennifer and the rest of the folks at SimTigrate,” said Wang, CEO of Ethos Medical. “We’ve already begun making improvements to address the friction points discovered during the clinicians’ hands-on interactions. We’re also seeing that many of these practitioners are excited about the capabilities our device brings to the point of care, both for lumbar punctures and beyond!”

Lucas Muller

Lucas Muller plays the role of patient as a clinician tests the needle guidance system while others observe.

NSF Award to Launch Study of How Older Adults Interact With Robots

Matthew Gombolay

Matthew Gombolay

With the number of older adults in the U.S. population rising and straining the systems in place to take care of them, Matthew Gombolay sees a solution — robots.

Gombolay received a National Science Foundation (NSF) CAREER Award for research that could make assistive robots the standard of care for older adults. The award is the most prestigious the NSF offers to early-career faculty.

“When people age, they deserve to age with dignity and not just be locked away,” said Gombolay, an assistant professor in Georgia Tech’s School of Interactive Computing. “If you don’t have enough resources or access to home nurses or adult children who have extra time to take care of you, what’s going to happen?”

Gombolay will receive nearly $600,000 to collect the largest dataset of its kind on how older adults interact and communicate with assistive robots. He will then use that data to create algorithms that can be deployed in assistive robots to understand the needs of older adults.

