Creativity and Innovation on Display at Spring 2025 TechMade Symposium

The George W. Woodruff School of Mechanical Engineering's Spring 2025 TechMade Symposium: Elevating Georgia Tech's Maker Culture brought together students, faculty, and staff to explore research and activities in campus makerspaces and to discuss strategies for elevating maker culture across Georgia Tech.

TechMade is an initiative spanning the colleges of engineering, business, and design. Supported by the colleges' deans, it gives students hands-on exposure to product realization, from design to manufacturing, regardless of their major. The goal is to unify the widespread design and creation opportunities on campus while building a collaborative design community for students and researchers across the Institute.

The event featured lightning presentations from several speakers, including Amit Jariwala, Director of Design and Innovation in the Woodruff School; Julie Linsey, professor in the Woodruff School; Mohsen Moghaddam, Gary C. Butler Family Associate Professor in the H. Milton Stewart School of Industrial and Systems Engineering and the Woodruff School; Noah Posner, research scientist in the School of Industrial Design; Abigale Stangl, assistant professor in the School of Industrial Design; and Tim Trent, research technologist II in the Institute for People and Technology (IPaT).

Noah Posner and Tim Trent are faculty members in IPaT.

READ THE FULL ARTICLE HERE >>

INNS Executive Director Search Vision Talk: Candidate 3

Three finalists have been chosen for the role of Executive Director of the Institute for Neuroscience, Neurotechnology, and Society (INNS). Each finalist will meet with Georgia Tech faculty, staff, and IRI leadership and give a seminar on their vision for the INNS.

Finalist 3: Michelle LaPlaca
Date: June 9, 2025
Time: 11 a.m. – Noon
Location: Callaway Manufacturing Research Building (GT Manufacturing Institute)
813 Ferst Drive NW, Atlanta, GA 30332, Seminar Room 114

INNS Executive Director Search Vision Talk: Candidate 2

Three finalists have been chosen for the role of Executive Director of the Institute for Neuroscience, Neurotechnology, and Society (INNS). Each finalist will meet with Georgia Tech faculty, staff, and IRI leadership and give a seminar on their vision for the INNS.

Finalist 2: Chris Rozell
Date: June 3, 2025
Time: 11 a.m. – Noon
Location: Callaway Manufacturing Research Building (GT Manufacturing Institute)
813 Ferst Drive NW, Atlanta, GA 30332, Seminar Room 114

INNS Executive Director Search Vision Talk: Candidate 1

Three finalists have been chosen for the role of Executive Director of the Institute for Neuroscience, Neurotechnology, and Society (INNS). Each finalist will meet with Georgia Tech faculty, staff, and IRI leadership and give a seminar on their vision for the INNS.

Finalist 1: Lewis Wheaton
Date: May 28, 2025
Time: 11 a.m. – Noon
Location: Callaway Manufacturing Research Building (GT Manufacturing Institute)
813 Ferst Drive NW, Atlanta, GA 30332, Seminar Room 114

Researchers Say Stress “Sweet Spot” Can Improve Remote Operator's Performance


Military drone pilots, disaster search and rescue teams, and astronauts stationed on the International Space Station are often required to remotely control robots while maintaining their concentration for hours at a time.

Georgia Tech roboticists are working to identify the most stressful periods human teleoperators experience while performing tasks remotely. A new study provides insights into when a teleoperator needs to operate at a high level of focus and which parts of a task can be delegated to robot automation.

School of Interactive Computing Associate Professor Matthew Gombolay calls it the “sweet spot” of human ingenuity and robotic precision. Gombolay and students from his CORE Robotics Lab conducted the study, which measures stress and workload in human teleoperators. Gombolay is also a faculty member of Georgia Tech's Institute for People and Technology.

Gombolay said the research can inform military officials on how to strategically implement task automation and maximize human teleoperator performance.

Humans continue to hand more tasks over to robots, but Gombolay said some functions will still require human input and oversight for the foreseeable future.

Specific applications, such as space exploration, commercial and military aviation, disaster relief, and search and rescue, pose substantial safety concerns. Astronauts stationed on the International Space Station, for example, manually control robots that bring in supplies, move cargo, and make structural repairs.

“It’s brutal from a psychological perspective,” Gombolay said.

The question often asked about automating a task in these fields is, at what point can a robot be trusted more than a human?

A recent paper by Gombolay and his current and former students — Sam Yi Ting, Erin Hedlund-Botti, and Manisha Natarajan — sheds new light on the debate. The paper was published in IEEE Robotics and Automation Letters and will be presented at the International Conference on Robotics and Automation in Atlanta.

The NASA-funded study can identify which aspects of tedious, time-consuming tasks can be automated and which require human supervision. If roboticists can pinpoint the elements of a task that cause the least stress, they can automate these components and enable humans to oversee the more challenging aspects.

“If we’re talking about repetitive tasks, robots do better with that, so if you can automate it, you should,” said Ting, a former grad student and lead author of the paper. “I don’t think humans enjoy doing repetitive tasks. We can move toward a better future with automation.”

Military officials, for example, could measure the stress of remote drone pilots and know which times during a pilot’s shift require the highest level of attention.

“We can get a sense of how stressed you are and create models of how divided your attention is and the performance rate of the tasks you’re doing,” Gombolay said.

“It can be a low-stress or high-stress situation depending on the stakes and what’s going on with you personally. Are you well-caffeinated? Well-rested? Is there stress from home you’re bringing with you to the workplace? The goal is to predict how good your task performance will be. If it indicates it might be poor, we may need to outsource work to other people or create a safe space for the operator to destress.”

The Stress Test

For their study, the researchers cut a small river-shaped path into a sheet of medium-density fiberboard. The exercise required the 24 participants to use a remote robotic arm to navigate the path from one end to the other without touching the edges.

The experiment grew more challenging as new stress conditions and workload requirements were introduced. The changing conditions required the test participants to multitask to complete the assignment.

Gombolay said the study supports the Yerkes-Dodson Law, which holds that performance improves with moderate levels of stress but declines when stress becomes too high.

The experiment showed that operators felt overwhelmed and performed poorly when multitasking was introduced. Too much stress led to poor performance, but a moderate amount of stress induced more engagement and enhanced teleoperator focus. 

Ting said finding that ideal stress zone can lead to a higher performance rating. 

“You would think the more stressed you are, the more your performance decreases,” Ting said. “Most people didn’t react that way. As stress increased, performance increased, but when you increased workload and gave them more to do, that’s when you started seeing deteriorating performance.”

Gombolay said no stress can be just as detrimental as too much stress. Performing a task without stress tends to cause teleoperators to become disinterested, especially if it is repetitive and time-consuming.

“No stress led to complacency,” Gombolay said. “They weren’t as engaged in completing the task.

“If your excitement is too low, you get so bored you can’t muster the cognitive energy to reason about robot operation problems.”

The Human Factor

Roboticists have made significant leaps in recent years to remove teleoperators from the equation. Still, Gombolay said it’s too early to tell whether robots can be trusted with any task that a human can perform.

“We’re a long way from full autonomy,” he said. “There’s a lot that robots still can’t do without a human operator. Search and rescue operations, if a building collapses, we don’t have much training data for robots to go through rubble by themselves to rescue people. There are ethical needs for humans to be able to supervise or take direct control of robots.”

AI Chatbots Aren’t Experts on Psych Med Reactions — Yet

The study was led by computer science Ph.D. student Mohit Chandra (pictured) and Munmun De Choudhury, J.Z. Liang Associate Professor in the School of Interactive Computing.

Asking artificial intelligence for advice can be tempting. Powered by large language models (LLMs), AI chatbots are available 24/7, are often free to use, and draw on troves of data to answer questions. Now, people with mental health conditions are asking AI for advice when experiencing potential side effects of psychiatric medicines — a decidedly higher-risk situation than asking it to summarize a report. 

One question puzzling the AI research community is how AI performs when asked about mental health emergencies. Globally, including in the U.S., there is a significant gap in mental health treatment, with many individuals having limited to no access to mental healthcare. It’s no surprise that people have started turning to AI chatbots with urgent health-related questions.

Now, researchers at the Georgia Institute of Technology have developed a new framework to evaluate how well AI chatbots can detect potential adverse drug reactions in chat conversations, and how closely their advice aligns with human experts. The study was led by Munmun De Choudhury, J.Z. Liang Associate Professor in the School of Interactive Computing, and Mohit Chandra, a third-year computer science Ph.D. student. De Choudhury is also a faculty member in the Georgia Tech Institute for People and Technology.

“People use AI chatbots for anything and everything,” said Chandra, the study’s first author. “When people have limited access to healthcare providers, they are increasingly likely to turn to AI agents to make sense of what’s happening to them and what they can do to address their problem. We were curious how these tools would fare, given that mental health scenarios can be very subjective and nuanced.”

De Choudhury, Chandra, and their colleagues introduced their new framework at the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics on April 29, 2025.

Putting AI to the Test

Going into their research, De Choudhury and Chandra wanted to answer two main questions: First, can AI chatbots accurately detect whether someone is having side effects or adverse reactions to medication? Second, if they can accurately detect these scenarios, can AI agents then recommend good strategies or action plans to mitigate or reduce harm? 

The researchers collaborated with a team of psychiatrists and psychiatry students to establish clinically accurate answers from a human perspective and used those to analyze AI responses.

To build their dataset, they went to the internet’s public square, Reddit, where many have gone for years to ask questions about medication and side effects. 

They evaluated nine LLMs, including general-purpose models (such as GPT-4o and Llama-3.1) and specialized models trained on medical data. Using the evaluation criteria provided by the psychiatrists, they computed how precise the LLMs were in detecting adverse reactions and correctly categorizing the types of adverse reactions caused by psychiatric medications.

Additionally, they prompted LLMs to generate answers to queries posted on Reddit and compared the alignment of LLM answers with those provided by the clinicians over four criteria: (1) emotion and tone expressed, (2) answer readability, (3) proposed harm-reduction strategies, and (4) actionability of the proposed strategies.

The research team found that LLMs stumble when comprehending the nuances of an adverse drug reaction and distinguishing different types of side effects. They also discovered that while LLMs sounded like human psychiatrists in their tones and emotions — such as being helpful and polite — they had difficulty providing truly actionable advice aligned with the experts.

Better Bots, Better Outcomes

The team’s findings could help AI developers build safer, more effective chatbots. Chandra’s ultimate goals are to inform policymakers of the importance of accurate chatbots and help researchers and developers improve LLMs by making their advice more actionable and personalized. 

Chandra notes that improving AI for psychiatric and mental health concerns would be particularly life-changing for communities that lack access to mental healthcare.

“When you look at populations with little or no access to mental healthcare, these models are incredible tools for people to use in their daily lives,” Chandra said. “They are always available, they can explain complex things in your native language, and they become a great option to go to for your queries.

“When the AI gives you incorrect information by mistake, it could have serious implications in real life,” Chandra added. “Studies like this are important, because they help reveal the shortcomings of LLMs and identify where we can improve.”

Citation: Lived Experience Not Found: LLMs Struggle to Align with Experts on Addressing Adverse Drug Reactions from Psychiatric Medication Use (Chandra et al., NAACL 2025).

Funding: National Science Foundation (NSF), American Foundation for Suicide Prevention (AFSP), Microsoft Accelerate Foundation Models Research grant program. The findings, interpretations, and conclusions of this paper are those of the authors and do not represent the official views of NSF, AFSP, or Microsoft.

Munmun De Choudhury, J.Z. Liang Associate Professor in the School of Interactive Computing

News Contact

Catherine Barzler, Senior Research Writer/Editor
Institute Communications
catherine.barzler@gatech.edu


AR/VR Researchers Bring Immersive Experience to News Stories

Pictured: Tao Lu, a Ph.D. student in the School of Interactive Computing, with Assistant Professor Yalong Yang

It may not be long before augmented reality/virtual reality (AR/VR) headsets lead readers to keep their phones in their pockets when they want to read The New York Times or The Washington Post.

Data visualization and AR/VR researchers at Georgia Tech are exploring how users can interact with news stories through AR/VR headsets and are determining which stories are best suited for virtual presentation.

Tao Lu, a Ph.D. student in the School of Interactive Computing, Assistant Professor Yalong Yang, and Associate Professor Alex Endert led a recent study they say is among the first to explore user preferences in virtually designed news stories. Yang and Endert are also faculty members in the Institute for People and Technology at Georgia Tech.

The researchers will present a paper they authored based on the study at the 2025 Conference on Human Factors in Computing Systems this week in Yokohama, Japan.

Digital platforms have elevated explanatory journalism, which provides greater context for a subject through data, images, and in-depth analysis. These platforms also allow stories to be more visually appealing through graphic design and animation.

Lu said AR/VR can further elevate explanatory journalism through 3D, interactive spatial environments. He added that media organizations should think about how the stories they produce will appear in AR/VR as much as they think about how they will appear on mobile devices.

“We’re giving users another option to experience the story and for designers and developers to show their stories in another modality,” Lu said.

“A screen-based story on a smartphone is easy to use and cost-effective. However, some stories are better presented in AR/VR, which will become more popular as technology gets cheaper. AR/VR can provide 3D spatial information that would be hard to understand on a phone or desktop screen.”

Read more about this research here >>

Summer 2025: Final Exams

Final exams for the 2025 summer session. 

Exploring Diabetes Care Challenges in India

Pictured are faculty members from IIT Madras, Emory University, Georgia Tech's Institute for People and Technology, and other members of the diabetes expert group.

Georgia Tech researchers help identify the top 10 most pressing challenges to improving diabetes care in India.

With more than 200 million people living with or at high risk for diabetes, India is often referred to as the diabetes capital of the world. The complex challenges faced by people living with the disease point to the need for a diverse range of technological solutions.

So, engineers and clinicians from both India and the U.S., including Georgia Tech researchers, met recently at the Indian Institute of Technology Madras (IIT Madras) in Chennai to identify 10 priority diabetes-related challenges faced by both patients and caregivers in India — challenges that technology could solve in the next decade. The event was organized by IIT Madras’ Shankar Center of Excellence in Diabetes Research (SCoEDR), Emory Global Diabetes Research Center (EGDRC), and Georgia Tech’s Institute for People and Technology (IPaT).

The goal of developing the top 10 list was to incorporate insights from diabetes patients, healthcare professionals, and supportive family members to guide engineers and technologists in identifying key challenges that disproportionately affect people with diabetes and their caregivers. The approach aims to accelerate innovation and entrepreneurship, reducing the time needed to create affordable technological solutions that can help alleviate the burden of diabetes.

Anubama Rajan, co-head of SCoEDR, assistant professor at IIT Madras, and a member of the expert group, said that “clearly defining the problems faced by patients, their caregivers, and doctors is among the most crucial steps in developing technological solutions.”

The Top 10 Problems for Diabetes in India can now be found at stopncd.org. Jithin Sam Varghese, co-director of the EGDRC Diabetes Translational Accelerator and a member of the expert group, encourages anyone interested in developing solutions to work together.

“There is a great need for engineers and doctors to collaborate at the very initial stages of product development to clearly define the problem a technology aims to solve,” says Varghese. “By fostering these early partnerships, we can accelerate the development of impactful solutions.” 

As a first step in generating solutions, three of the problems identified — inaccessible diabetes education; delayed detection of asymptomatic diabetic foot disease; and the lack of affordable, protective diabetic footwear — were chosen as problem statements for the DiaTech 10X – Diabetes in India Hackathon. The hackathon, which ended April 13 and had over 170 participants from India and the U.S., invited students to collaborate on innovative solutions for diabetes care. The winning teams proposed artificial intelligence-enabled solutions for diagnosing and monitoring diabetic foot disease using noninvasive approaches.

StopNCD.org strives to bridge the gap between problems, research, and real-world translation of solutions, ensuring that the most innovative solutions reach the communities that need them.

“This diabetes top 10 challenge and DiaTech 10X India hackathon were a perfect opportunity to combine the world-class expertise of Emory and IIT Madras with IPaT’s people-centered approach to technical innovations,” noted Michael Best, executive director of IPaT. “This initiative represents our shared commitment to global health and wellbeing, from Atlanta to India and beyond.”