Accompaniment, Design, and Research
Carl DiSalvo, Professor, School of Interactive Computing, Georgia Tech
The Algorithm Will See You Now — But Only If You’re the Perfect Patient
Sep 02, 2025 —
An illustration representing a doctor working with an AI-powered health device.
In the morning, before you even open your eyes, your wearable device has already checked your vitals. By the time you brush your teeth, it has scanned your sleep patterns, flagged a slight irregularity, and adjusted your health plan. As you take your first sip of coffee, it’s already predicted your risks for the week ahead.
Georgia Tech researchers warn that this version of AI healthcare imagines a patient who is "affluent, able-bodied, tech-savvy, and always available." Those who don’t fit that mold, they argue, risk becoming invisible in the healthcare system.
The Ideal Future
In their study, published in the Proceedings of the ACM Conference on Human Factors in Computing Systems, the researchers analyzed 21 AI-driven health tools, ranging from fertility apps and wearable devices to diagnostic platforms and chatbots. They used sociological theory to understand the vision of the future these tools promote — and the patients they leave out.
“These systems envision care that is seamless, automatic, and always on,” said Catherine Wieczorek, a Ph.D. student in human-centered computing in the School of Interactive Computing and lead author of the study. “But they also flatten the messy realities of illness, disability, and socioeconomic complexity.”
Four Futures, One Narrow Lens
During their analysis, the researchers discovered four recurring narratives in AI-powered healthcare:
- Care that never sleeps. Devices track your heart rate, glucose levels, and fertility signals — all in real time. You are always being watched, because that’s framed as “care.”
- Efficiency as empathy. AI is faster, more objective, and more accurate. Unlike humans, it doesn’t get tired or biased. This pitch downplays the value of human judgment and connection.
- Prevention as perfection. A world where illness is avoided through early detection if you have the right sensors, the right app, and the right lifestyle.
- The optimized body. You’re not just healthy, you’re high-performing. The tech isn’t just treating you; it’s upgrading you.
“It’s like healthcare is becoming a productivity tool,” Wieczorek said. “You’re not just a patient anymore. You’re a project.”
Not Just a Tool, But a Teammate
This study also points to a critical transformation in which AI is no longer just a diagnostic tool; it’s a decision-maker. Described by the researchers as “both an agent and a gatekeeper,” AI now plays an active role in how care is delivered.
In some cases, AI systems are even named and personified, like Chloe, an IVF decision-support tool. “Chloe equips clinicians with the power of AI to work better and faster,” its promotional materials state. By framing AI this way — as a collaborator rather than just software — these systems subtly redefine who, or what, gets to be treated.
“When you give AI names, personalities, or decision-making roles, you’re doing more than programming. You’re shifting accountability and agency. That has consequences,” said Shaowen Bardzell, chair of Georgia Tech’s School of Interactive Computing and co-author of the study.
“It blurs the boundaries,” Wieczorek noted. “When AI takes on these roles, it’s reshaping how decisions are made and who holds authority in care.”
Calculated Care
Many AI tools promise early detection, hyper-efficiency, and optimized outcomes. But the study found that these systems risk sidelining patients with chronic illness, disabilities, or complex medical needs — the very people who rely most on healthcare.
“These technologies are selling worldviews,” Wieczorek explained. “They’re quietly defining who healthcare is for, and who it isn’t.”
By prioritizing predictive algorithms and automation, AI can strip away the context and humanity that real-world care requires.
“Algorithms don’t see nuance. It’s difficult for a model to understand how a patient might be juggling multiple diagnoses or understand what it means to manage illness, while also navigating other important concerns like financial insecurity or caregiving. They are predetermined inputs and outputs,” Wieczorek said. “While these systems claim to streamline care, they are also encoding assumptions about who matters and how care should work. And when those assumptions go unchallenged, the most vulnerable patients are often the ones left out.”
AI for ALL
The researchers argue that future AI systems must be developed in collaboration with those who don’t fit the vision of a “perfect patient.”
“Innovation without ethics risks reinforcing existing inequalities. It’s about better tech and better outcomes for real people,” Bardzell said. “We’re not anti-innovation. But technological progress isn’t just about what we can do. It’s about what we should do — and for whom.”
Wieczorek and Bardzell aren’t trying to stop AI from entering healthcare. They’re asking AI developers to understand who they’re really serving.
Funding:
This work was supported by the National Science Foundation (Grant #2418059).
Michelle Azriel, Sr. Writer-Editor
mazriel3@gatech.edu
Georgia Tech’s Jill Watson Outperforms ChatGPT in Real Classrooms
Sep 02, 2025 —
A new version of Georgia Tech’s virtual teaching assistant, Jill Watson, has demonstrated that artificial intelligence can significantly improve the online classroom experience. Developed by the Design Intelligence Laboratory (DILab) and the U.S. National Science Foundation AI Institute for Adult Learning and Online Education (AI-ALOE), the latest version of Jill Watson integrates OpenAI’s ChatGPT and is outperforming OpenAI’s own assistant in real-world educational settings.
Jill Watson not only answers student questions with high accuracy but also improves teaching presence and correlates with better academic performance. Researchers believe this is the first documented instance of a chatbot enhancing teaching presence in online learning for adult students.
How Jill Watson Shaped Intelligent Teaching Assistants
First introduced in 2016 using IBM’s Watson platform, Jill Watson was the first AI-powered teaching assistant deployed in real classes. It began by responding to student questions on discussion forums like Piazza using course syllabi and a curated knowledge base of past Q&As. Widely covered by major media outlets including The Chronicle of Higher Education, The Wall Street Journal, and The New York Times, the original Jill pioneered new territory in AI-supported learning.
Subsequent iterations addressed early biases in the training data and transitioned to more flexible platforms like Google’s BERT in 2019, allowing Jill to work across learning management systems such as EdStem and Canvas. With the rise of generative AI, the latest version now uses ChatGPT to engage in extended, context-rich dialogue with students using information drawn directly from courseware, textbooks, video transcripts, and more.
Future of Personalized, AI-Powered Learning
Designed around the Community of Inquiry (CoI) framework, Jill Watson aims to enhance “teaching presence,” one of three key factors in effective online learning, alongside cognitive and social presence. Teaching presence includes both the design of course materials and facilitation of instruction. Jill supports this by providing accurate, personalized answers while reinforcing the structure and goals of the course.
The system architecture includes a preprocessed knowledge base, a MongoDB-powered memory for storing conversation history, and a pipeline that classifies questions, retrieves contextually relevant content, and moderates responses. Jill is built to avoid generating harmful content and only responds when sufficient verified course material is available.
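The article doesn’t reproduce the system’s code, but a minimal sketch of the pipeline it describes (classify or match the question, retrieve verified course content, generate a grounded answer, and decline when retrieval comes up empty) might look like the following. The keyword retriever, refusal rule, and canned generation step are illustrative assumptions, not Jill Watson’s actual implementation.

```python
# Minimal, runnable sketch of a retrieval-grounded Q&A pipeline of the kind
# the article describes. All names and heuristics here are assumptions.

def retrieve(question: str, knowledge_base: list[str], top_k: int = 3) -> list[str]:
    """Tiny keyword-overlap retriever standing in for the preprocessed knowledge base."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(doc.lower().split())), doc) for doc in knowledge_base]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0][:top_k]

def answer_question(question: str, knowledge_base: list[str], memory: list[dict]) -> str:
    passages = retrieve(question, knowledge_base)
    if not passages:
        # Respond only when sufficient verified course material is available.
        return "I don't have course material for that; please ask your instructor."
    # A real system would call an LLM here, conditioned on the passages and on
    # conversation history (the article mentions a MongoDB-backed memory), then
    # moderate and verify the draft before releasing it.
    draft = f"Based on the course materials: {passages[0]}"
    memory.append({"question": question, "answer": draft})  # stand-in for MongoDB
    return draft

course_docs = [
    "Minimax search explores the game tree assuming optimal play by both players.",
    "Assignment 3 is due Friday at 11:59 pm Eastern time.",
]
memory: list[dict] = []
print(answer_question("When is assignment 3 due?", course_docs, memory))
```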
Field-Tested in Georgia and Beyond
In Fall 2023, Jill Watson was deployed in Georgia Tech’s Online Master of Science in Computer Science (OMSCS) artificial intelligence course, serving over 600 students, and in an English course at Wiregrass Georgia Technical College (WGTC), part of the Technical College System of Georgia (TCSG).
A controlled A/B experiment in the OMSCS course allowed researchers to compare outcomes between students with and without access to Jill Watson, even though all students could use ChatGPT. The findings are striking:
- Jill Watson’s accuracy on synthetic test sets ranged from 75% to 97%, depending on the content source. It consistently outperformed OpenAI’s Assistant, which scored around 30%.
- Students with access to Jill Watson showed stronger perceptions of teaching presence, particularly in course design and organization, as well as higher social presence.
- Academic performance also improved slightly: students with Jill saw more A grades (66% vs. 62%) and fewer C grades (3% vs. 7%).
A Smarter, Safer Chatbot
While Jill Watson uses ChatGPT for natural language generation, it restricts outputs to validated course material and verifies each response using textual entailment. According to a study by Taneja et al. (2024), Jill Watson answers correctly 78.7% of the time, with only 2.7% of its errors categorized as harmful and 54.0% as confusing. In contrast, OpenAI’s Assistant answers correctly just 30.7% of the time, with harmful failures occurring 14.4% of the time and confusing failures rising to 69.2%. Jill Watson also has a lower retrieval failure rate: 43.2%, compared with 68.3% for the OpenAI Assistant.
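As a rough illustration of that entailment check, an off-the-shelf natural language inference model can test whether a draft answer is supported by a retrieved passage before the answer is released. The checkpoint, input format, and threshold below are assumptions chosen for illustration, not the team’s published setup.

```python
# Hypothetical entailment gate using an off-the-shelf NLI model; this
# illustrates the idea, not Jill Watson's actual verifier.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def is_grounded(answer: str, passages: list[str], threshold: float = 0.8) -> bool:
    """Release the answer only if some retrieved passage entails it."""
    for premise in passages:
        result = nli({"text": premise, "text_pair": answer})[0]
        if result["label"] == "ENTAILMENT" and result["score"] >= threshold:
            return True
    return False

passage = "Assignment 3 is due Friday at 11:59 pm Eastern time."
print(is_grounded("The assignment is due on Friday night.", [passage]))
```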
What’s Next for Jill
The team plans to expand testing across introductory computing courses at Georgia Tech and technical colleges. They also aim to explore Jill Watson’s potential to improve cognitive presence, particularly critical thinking and concept application. Although quantitative results for cognitive presence are still inconclusive, anecdotal feedback from students has been positive. One OMSCS student wrote:
“The Jill Watson upgrade is a leap forward. With persistent prompting I managed to coax it from explicit knowledge to tacit knowledge. Kudos to the team!”
The researchers also expect Jill to reduce instructional workload by handling routine questions and enabling more focus on complex student needs.
Additionally, AI-ALOE is collaborating with the publishing company John Wiley & Sons, Inc., to develop a Jill Watson virtual teaching assistant for one of their courses, with the instructor and university chosen by Wiley. If successful, this initiative could potentially scale to hundreds or even thousands of classes across the country and around the world, transforming the way students interact with course content and receive support.
A Georgia Tech-Led Collaboration
The Jill Watson project is supported by Georgia Tech, the U.S. National Science Foundation’s AI-ALOE Institute (Grants #2112523 and #2247790), and the Bill & Melinda Gates Foundation.
Core team members are Saptrishi Basu, Jihou Chen, Jake Finnegan, Isaac Lo, JunSoo Park, Ahamad Shapiro, and Karan Taneja, under the direction of professors Ashok Goel and Sandeep Kakar. The team works under Beyond Question LLC, an AI-based educational technology startup.
Breon Martin
Building Intelligent Systems to Detect Cardiopulmonary Emergencies
SPEAKER: Jake Sunshine, MD, Associate Professor, University of Washington School of Medicine; Research Scientist at Google
IPaT and GTRI Seed Funding Awarded to Four Projects
Aug 29, 2025 — Atlanta, GA
The Institute for People and Technology at Georgia Tech (IPaT) and the Georgia Tech Research Institute (GTRI) co-sponsored more than $55,000 in seed grant awards to four research projects. These newly awarded 2025-2026 IPaT/GTRI grants provide seed funding for new research collaborations and support new forms of internal and external research community engagement.
Congratulations to these four winning project teams:
1) Proposal title: Building a Research to Impact Collaborative on AI and Global Health
Research overview: Research and practice at the intersection of AI and global health have grown rapidly in the last few years, yet most of these efforts are fragmented and disconnected. There is a pressing need for spaces that facilitate knowledge-sharing and resource coordination. We are thus launching a global, interdisciplinary Research to Impact Collaborative (RIC) on AI and global health that will: 1) support knowledge-sharing across research and practice, 2) facilitate student learning, and 3) accelerate cross-sector collaborations. To catalyze the RIC, we will conduct a year-long virtual seminar series and in-person workshops that bring together researchers, practitioners, and students. This initiative will position Georgia Tech as a leader in AI and global health, build a lasting collaborative, and lay the foundation for interdisciplinary collaborations and future funding.
Team members: Naveena Karusala, Neha Kumar, and Munmun De Choudhury at the School of Interactive Computing; Kai Wang at the School of Computational Science and Engineering; Gari Clifford at the Department of Biomedical Engineering. Additional members: Azra Ismail (Emory University), Anupriya Tuli and Madeline Balaam (KTH), Pushpendra Singh (IIIT-Delhi), Melissa Densmore (University of Cape Town), Naomi Yamashita (Kyoto University), Neha Madhiwalla (ARMMAN), Shirley Yan and Anubhav Arora (Noora Health)
2) Proposal title: Are Data Centers the New Landfills?
Research overview: Data centers are growing rapidly in the U.S., nowhere more notably than in Georgia, particularly in the Atlanta metropolitan region (Berger, 2025). This expansion continues as policymakers and the data center industry position data centers as a source of innovation in artificial intelligence (AI), national security, and economic growth. Data center energy use has nearly tripled in the last decade to 4.4% of U.S. electricity use and may triple again over the next decade (Shehabi et al., 2024), driven by increasing demand for data-intensive technologies and applications, like AI, and by a data center-friendly policy climate in Georgia (see Georgia HB1291). Like landfills, data centers are often sited in ways that impose local external costs, affecting important aspects of everyday life such as water security, energy prices, taxes, jobs, housing, and air quality. In Georgia, one proposed data center would consume approximately 6 million gallons of water per day, a volume equivalent to filling nine Olympic-sized swimming pools (Mecke, 2025). Furthermore, the tax revenue Georgia generates from data centers is estimated to be far less than the cost of incentives provided to the industry (e.g., subsidies for equipment), resulting in a negative state fiscal impact of $18 million in 2021 (Hardee et al., 2022). This proposed IPaT research grant investigates the trade-offs of constructing data centers, weighing the economic benefits against the external impacts on local Atlanta communities. In doing so, we aim to inform the next generation of responsible and ethical data centers and to empower communities exposed to their externalities. Scholars argue that community experiences of data centers rarely feature alongside the industry’s dominant promises of economic growth and technological innovation (Zander, 2024). Highlighting these alternative experiences, we will propose policy and data tools to better site and deploy data centers and to broaden public discussion of how they are built, maintained, and shape the lives of their neighbors.
Team members: Cindy Lin and Josiah Hester, School of Interactive Computing; Allen Hyde, School of History and Sociology; Joe Bozeman III, School of Civil Engineering; Elora Raymond, School of City and Regional Planning; Anthony Harding, School of Public Policy and Jung Ho Lewe, School of Aerospace Engineering.
3) Proposal title: The Sound of Motion: Transforming Artistic Body Movement into Music for Motor Therapy
Research overview: This research proposal aims to initiate a new collaborative project across the Colleges of Sciences, Computing, and Liberal Arts to design and develop a novel platform that enables augmented artistic expression through body movements as instruments. When a person moves their trunk, legs, arms, or a handheld object (e.g., a wizarding wand), the platform will transform their movement trajectories into the associated sounds of musical instruments (i.e., sonification; a toy illustration appears after the project list below). Turning movement trajectories into sounds will enable people with motor disabilities (e.g., Parkinson’s disease, stroke) to express their artistry with their less-impaired body parts. Additionally, developing augmented artistic exercises as a new rehabilitation paradigm may stimulate previously untapped neuromotor strategies and facilitate motor recovery. Furthermore, the quality of artistic movement can be objectively assessed through this platform. Experts in human motor control (Shinohara), sonification and human-AI interaction (Walker), and human-computer interaction in the performing arts (Trajkova) will combine their complementary expertise to design and develop such a multimodal system, demonstrating proof of concept. This interdisciplinary R&D will benefit older adults and individuals with motor impairments by introducing new, enjoyable, engaging, and rewarding forms of artistic expression and exercise that enhance well-being. Such activities can boost the release of neurotransmitters that facilitate neural plasticity (e.g., dopamine), ultimately leading to improved motor function.
Team members: Minoru Shinohara, College of Sciences; Bruce Walker, College of Computing; Milka Trajkova, Ivan Allen College of Liberal Arts; Joshua Posen, College of Engineering.
4) Proposal title: Generating Space-making Companion Robot Behaviors through Large Language Models (LLMs) for Morally Ambiguous Situations
Research overview: As robots increasingly operate in public spaces and urban life, they can easily be caught in morally ambiguous situations, which are often dynamic, complex, and unpredictable, presenting novel factors and agencies that can quickly exceed the scope of any projected (or pre-programmed) human-robot interaction. LLMs are well-suited to interpreting specific scenarios and producing logically coherent responses, which makes them ideal for contexts where pre-programming robot behavior is impractical. In this project, we investigate whether and how LLMs can generate appropriate behaviors for a space-making robot reading companion in morally ambiguous situations.
Team members: Yixiao Wang, School of Industrial Design; Tyler Cook, Carter School of Public Policy; Shreyas C Kousik, School of Mechanical Engineering.
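The Sound of Motion proposal above doesn’t specify its sonification mapping, but as a toy illustration, vertical positions along a movement trajectory could be mapped to pitch and rendered as a sequence of tones. The pitch range, timing, and output format below are assumptions for illustration, not the team’s planned platform.

```python
# Toy sonification sketch: map a movement trajectory's vertical position to
# pitch over a two-octave range. Purely illustrative assumptions throughout.
import numpy as np
from scipy.io import wavfile

def sonify(y_positions: np.ndarray, sample_rate: int = 44100,
           seconds_per_point: float = 0.15) -> np.ndarray:
    # Normalize positions to 0..1, then map to pitches from A3 (220 Hz) up two octaves.
    y = (y_positions - y_positions.min()) / (np.ptp(y_positions) + 1e-9)
    freqs = 220.0 * (2.0 ** (2.0 * y))
    t = np.linspace(0.0, seconds_per_point,
                    int(sample_rate * seconds_per_point), endpoint=False)
    tones = [0.3 * np.sin(2 * np.pi * f * t) for f in freqs]
    return np.concatenate(tones).astype(np.float32)

# Example: a slow raise-and-lower arm gesture rendered to a WAV file.
trajectory = np.sin(np.linspace(0.0, np.pi, 24))
wavfile.write("gesture.wav", 44100, sonify(trajectory))
```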
Walter Rich
When AI Blurs Reality: The Rise of Hyperreal Digital Culture
Aug 28, 2025 —
Bigfoot vlogs are an example of AI-generated content that has gained attention for its use of hyperrealistic storytelling and digital personas in online media.
From Bigfoot vlogs to algorithmically created personas, hyperrealistic AI content is redefining what it means to be a digital creator. AI influencers are entirely virtual personas created using generative AI tools that simulate human features, voices, and behaviors. They post lifestyle content, interact with followers, and even secure brand endorsements, all without existing in the physical world. As these technologies grow more widely available and their results more believable, specialists caution that we are entering a new age in which the line separating fiction from reality is increasingly blurred.
The Rise of Synthetic Creativity
Experts at Georgia Tech say the surge in AI hyperrealism — content that mimics human emotion, speech, and appearance with uncanny precision — is both a technological marvel and a societal challenge.
“AI does not have emotions as we understand them in humans, but it knows how to mimic emotional speech,” said Mark Riedl, professor in the School of Interactive Computing. “Once we understand that AI is mimicking us, it is easy to understand how they can create believable outputs that sound authentic.”
Riedl points to the democratization of video creation as a major shift. “AI video generation tools and the ability to bypass traditional content channels and post directly to social media have opened up the floodgates,” he said.
Recent examples include synthetic influencers such as Nobody Sausage, a digitally animated character that has attracted over 30 million followers across multiple social media platforms through short-form dance videos and brand collaborations. On platforms like Character.AI, users engage with millions of virtual personas designed to simulate conversation and personality traits. These AI-generated figures are reshaping how audiences interact with content, marketing, and identity across Instagram, TikTok, and other social media channels.
Mental Health and the Reality Gap
Munmun De Choudhury, professor in the School of Interactive Computing, warns that hyperreal AI content can distort users’ perception of reality, especially among vulnerable populations.
“This distortion can fuel anxiety, exacerbate body image and self-comparison issues, and contribute to a broader erosion of epistemic trust — our basic belief in what others present as true,” she said.
Her research shows that social media already blurs the line between authentic self-expression and performative identity. Hyperreal AI content — from deepfakes to emotionally resonant synthetic personas — further complicates users’ ability to evaluate what is real or trustworthy. Adolescents and those facing mental health challenges may be especially susceptible.
“Individuals experiencing stress or social isolation may be more prone to believe deepfakes,” De Choudhury explained. “Such content often reinforces existing beliefs or fills gaps in social connection.”
Such AI content challenges our understanding of authenticity, trust, and digital identity. It also raises questions about consent, misinformation, and the psychological effects of interacting with synthetic personas. Gen Z users, she notes, often judge AI content by emotional resonance rather than factual accuracy, while older users may struggle to detect synthetic cues altogether.
Platforms, Persuasion, and Misinformation
Riedl emphasizes that AI storytelling tools can be used to sway public opinion through “narrative transportation,” a psychological phenomenon in which audiences become immersed in a story and are less likely to question its truth.
“Storytelling is a means of persuasive communication,” he said. “Our brains are attuned to stories in a way that can bypass critical thinking.”
Recent incidents highlight the changing landscape. Deepfakes of public figures such as Taylor Swift and Tom Hanks have surged in 2025, with over 179 incidents in the first four months of the year alone — surpassing all of 2024. These deepfakes range from humorous impersonations to fraudulent and explicit content, raising ethical and legal concerns about identity misuse and misinformation. Riedl notes that video misinformation has historically been harder to produce but is now easier and more likely to be tailored to niche audiences.
Social media companies face mounting pressure to take action. De Choudhury argues that labeling AI-generated content is necessary but insufficient. “Platforms must invest in user-centered design, digital literacy interventions, and transparency about how algorithms surface such content,” she said.
The stakes are especially high in mental health communities, where authenticity and lived experience are critical. “Users often feel overwhelmed or deceived when they encounter synthetic content without clear cues of its artificial origin,” she added.
Governance in a Globalized AI Era
Milton Mueller, professor in the Jimmy and Rosalynn Carter School of Public Policy, argues that regulation may be ineffective or even counterproductive in a decentralized digital ecosystem.
“Generative AI is part of a globalized and distributed digital ecosystem,” Mueller said. “So, which regulatory authority are you talking about, and how does it gain the leverage needed to control the outputs?”
While the EU’s AI Act mandates labeling and imposes steep fines, U.S. efforts remain fragmented. The Federal Communications Commission has made AI-generated voices in robocalls illegal, with violators facing fines, and several states are pushing for watermarking requirements and criminal penalties for political deepfakes. But experts warn that First Amendment protections complicate enforcement.
Mueller cautions that governments are already using AI as a geopolitical tool, which could undermine global cooperation and lead to strategic escalation. “Instead of freely trading data and establishing common rules, governments are asserting digital sovereignty,” he said.
He advocates for addressing AI-generated misinformation through decentralized governance, public debate, and media literacy, rather than centralized regulation or automated controls, emphasizing that content moderation should be guided by open processes and existing legal remedies applied after the fact.
As AI-generated content becomes more sophisticated and widespread, researchers say the challenge lies not only in technological safeguards but in how society adapts. Experts at Georgia Tech emphasize the need for transparency, interdisciplinary collaboration, and public engagement. The future of hyperreal media, they say, will depend on how well platforms, policymakers, and users navigate its risks and possibilities.
Georgia Tech Plugged Him In. Now He’s Wired for Problem-Solving
Aug 28, 2025 — Atlanta, GA
Scott Gilliland, senior research scientist at Georgia Tech’s Institute for People and Technology
Scott Gilliland’s winding path led to breakthroughs in wearable tech that solve challenges for people with Parkinson’s and help us understand dolphin communication.
A research team in the Atlantic Ocean listens to dolphins, testing technology that may one day decode their communication system. Thousands of miles away, a Parkinson’s patient may speak more clearly, thanks to a device that helps them overcome speech challenges caused by the condition. One sounds like science fiction; the other is a transformative medical breakthrough. Yet both are rooted in the same field of research: ubiquitous computing.
Scott Gilliland, a senior research scientist at Georgia Tech’s Institute for People and Technology (IPaT), has played a key role in developing these technologies. IPaT connects researchers across disciplines to turn innovative ideas into practical applications. It’s a natural fit for Gilliland, whose work blends human-centered design with embedded systems, which are small computers built into everyday devices to perform specific tasks.
As a researcher, he often partners with colleagues in the College of Computing, where he also earned his bachelor’s and master’s degrees. His work in ubiquitous computing and wearable systems is quietly reshaping how we interact with the world.
“Ubiquitous computing” refers to technology that is embedded in everyday objects and environments — for example, clothing. It makes computing power accessible without being intrusive. Gilliland’s projects span different fields of study that aim for the same goal: real-world benefit through innovative, human-centered technology.
Exploring the Impacts of Environment of Care (EoC) on Nurses' Hand Hygiene Compliance
Speaker: Hui Cai, Professor, School of Architecture, Georgia Tech; Executive Director, SimTigrate Design Center
Sept. 11, 2025
12:00 p.m. Lunch; 12:30 p.m. talk starts
Location: Hodges Room, 3rd floor, Centergy One building in Technology Square
What Can Get Lost Within User Experience
Speaker: