Spring 2024 Speakers

GVU Lunch Lectures

Capitalism for Humans

Apr. 18, 2024
Beth Kolko, Ph.D.
Professor and Associate Department Chair in the Department of Human Centered Design & Engineering at the University of Washington

This is a talk about how to redesign capitalism, focusing on how startups are built, products are made, and customers are reached. I research how design and capitalism are intertwined, and how design perspectives can inform practices of entrepreneurship, commercialization, and tech transfer, practices that ultimately govern what new products and services are created and launched into the world, where they have the opportunity to affect how people live their lives.

Translational work in HCD and HCI often focuses on product development and how to create technologies that effectively meet the needs of the people who adopt them, generally through iterative, research-oriented practices. This talk explores how the frameworks of HCD and HCI can be used more expansively: to guide not just how to build products, but how to build entire companies that address the needs of individuals and communities while minimizing downstream harms. While "how to start a startup" courses, whether formal university offerings or Startup Weekend-style community events, address products and users in ways that borrow heavily from HCI and HCD, this talk will show how design and HCI can drive multiple aspects of startups, including the development of business models, employee policies, pricing strategy, and more, to help create companies that are less extractive and more beneficial to society at large.

In this talk, I will provide concrete examples of tools that can help translate social, environmental, ethical, and critical perspectives into technical designs. I will also talk about how business models, when paired with technological innovations, can be used as levers to increase the extent to which ethical considerations inform the foundation of a company. Finally, I will discuss how identifying user needs in a commercial context can be done within a morally and critically informed framework and then translated into technical specifications that guide product development work.

This work is based on my experience founding a medical device company, my work as a venture capitalist, and my past several years teaching a graduate course on “Building a Human Centered Venture.”   


Human-AI Interaction in Mental Health

Apr. 11, 2024
Hwajung Hong, Ph.D.
Associate Professor in the Department of Industrial Design at KAIST

As AI advances, so do human intelligence and productivity. Yet AI use has the potential to reduce users' capacity for deliberate decision-making, thereby diminishing their sense of agency. In this talk, I will discuss how AI's core features of prediction, conversation, and generation can be employed in mental health, a domain where agency is crucial, to support self-reflection and informed decision-making about health activities. My research team conducted a series of studies examining the design and impact of AI-driven systems that: 1) utilize stress prediction and explainability to empower users in managing stress; 2) employ conversational AI to assist users in reevaluating cognitive biases; and 3) leverage language generation models to foster self-reflection via cooperative diary writing. The goal of this talk is to provide insights and recommendations for designers, researchers, and practitioners who design AI technologies to be more sensitive to human concerns and behavior and to augment human agency within mental health interventions.


Autographic Design – the Matter of Data in a Self-Inscribing World


Co-sponsored by the Center for Interdisciplinary Media Arts

Apr. 4, 2024 (TALK CANCELLED)
Dietmar Offenhuber, Ph.D.
Associate Professor and Chair of Art+Design at Northeastern University

Data analysis and visualization are crucial tools in today’s society, and digital representations have steadily become the default for presenting claims about the state of the world. Yet, more and more often, we find that citizen scientists, environmental activists, and amateur forensic investigators are using analog methods to present evidence of pollution, climate change, and the spread of disinformation.  

In my talk, I will discuss autographic design, a non-representational framework of visualization based on the notion that data are material entities rather than abstract representations. Focusing on the materiality of data generation, autographic design aims to make the process of data generation legible and accountable. In the institutional politics of whose data is accepted as trustworthy, autographic design reverses representational rules: instead of adopting experts' methods and representations, it challenges those representations through sensory displays that emphasize traces, imprints, and self-inscriptions.

Research Seed Grants

Research and Engagement Grant Presentations

Mar. 28, 2024

The Georgia Tech Research Institute (GTRI) and the Institute for People and Technology (IPaT) support two separate types of grant proposals. Research Grants provide seed funding for new research collaborations, and Engagement Grants provide support for new forms of internal and external community engagement and collaboration. At this lecture, the five winning projects for 2023-2024 will present.


  • Artificial Intelligence Based Abstract Review Assistant (AIARA)
  • Toward Fairer Diagnosis and Care of Type 2 Diabetes: A Long-Term and Pipeline-Level View
  • ASTRO! - Manysourcing the Design and Behavior of Future Robotic Guide Dogs
  • Data-Driven Platform for Transforming Subjective Assessment into Objective Processes for Artistic Human Performance and Wellness
  • Voice+: Locating the Human Voice in a Technology-Driven World

Critical AI literacy with children: in pursuit of fair and inclusive technology futures

Mar. 14, 2024
Sumita Sharma, Ph.D.
Postdoc Researcher in HCI, University of Oulu

Children interact with artificial intelligence (AI) in various direct and indirect ways, yet there is limited research on the impacts of AI on children. Further, existing studies mainly focus on cultivating, nurturing, and nudging children towards technology use and design without promoting critical perspectives on AI. For instance, there is little discussion with children about the inherent biases and lack of diversity in the current design and development of AI, or critical examination of the ethical aspects of technology use and design, their inherent limitations, and the consequences of these for children and society at large. In this talk, I will present my work on critical AI literacy with young children, sharing lessons from hands-on workshops with children in Finland, India, and Japan.


2023 PhD Foley Scholar Award Finalists

Mar. 7, 2024

Karthik Seetharama Bhat, 2023 PhD Foley Scholar Award Winner
Human Centered Computing
Advisor: Neha Kumar 
Project Title: Envisioning Technology-Mediated Futures of Care Work

Arpit Narechania, 2023 PhD Foley Scholar Award Finalist
Computer Science
Advisor: Alex Endert
Project Title: Choropleth Maps: How They Can Trick You and What You Can Do About It

Sachin Pendse, 2023 PhD Foley Scholar Award Finalist
Human Centered Computing
Advisors: Munmun De Choudhury and Neha Kumar
Project Title: Computing for Mental Health Equity: Centering Identity and Power in Technology-Mediated Support

Allie Riggs, 2023 PhD Foley Scholar Award Finalist
Digital Media
Advisor: Anne Sullivan
Project Title: Designing with Ephemera: Queering Tangible Interaction in Archival Experiences


Exploring Dual Perspectives in Computer-mediated Empathy

Feb. 29, 2024
Sang Won Lee, Ph.D.
Assistant Professor in the Department of Computer Science at Virginia Tech

A common belief is that technology can play a pivotal role in enhancing individuals' capacity to empathize with others. While that may be true, it is worthwhile to adopt an alternative perspective, one that underscores the inherent duality of empathy and emphasizes its empowering aspect for the recipients of empathy. In this talk, I will focus on recent projects that explore how technologies can facilitate empathy. These approaches primarily focus on those who need to be empathized with, helping them express, reveal, and reflect on themselves. Through these works, I propose a new framework that offers various research topics relevant to enhancing computer-mediated empathy.


An Introduction to Healthcare AI

Feb. 22, 2024
Mark Braunstein, Ph.D.
Professor of the Practice Emeritus at Georgia Tech, Scientist at the Australian eHealth Research Centre

Healthcare and AI have an intertwined history dating back at least to the 1960s, when the first 'cognitive chatbot' acting as a psychotherapist was introduced at MIT. Today, of course, there is enormous interest in and excitement about the potential roles of the latest AI technologies in patient care, alongside a parallel concern about the risks. Will human physicians be replaced by intelligent agents? Short of that, how might such agents benefit patient care? What role will they play for patients? We'll explore these questions in a far-ranging talk that includes a number of real-world examples of how AI technologies are already being deployed, hopefully to the benefit of physicians and their patients.


Whose Responsibility? The Case for Responsible Data Practice

Feb. 15, 2024
Ding Wang, Ph.D.
Senior Researcher at Google Research

Diversity in datasets is a key component of building responsible AI/ML. Despite this recognition, we know little about the diversity among the annotators involved in data production. Additionally, despite being an indispensable part of AI, data annotation is often cast as simple, standardized, even low-skilled work. In this talk, I present a series of studies that aim to unpack the data annotation process, with an emphasis on the data workers who carry the weight of data production. These include interview studies uncovering both annotators' perspectives on their work and data requestors' approaches to the diversity and subjectivity that workers bring; an ethnographic investigation of work practices around annotation in data centers; and a mixed-methods study exploring the impact of worker demographic diversity on the data they annotate. While practitioners described nuanced understandings of annotator diversity, they rarely designed dataset production to account for diversity in the annotation process. This calls for more attention to a pervasive logic of representationalist thinking and counting that is intricately woven into the day-to-day work practices of annotation. By examining the structures in which annotation is done and diversity is seen, this talk aims to recover annotation and diversity from their reductive framings and to seek alternative approaches to knowing and doing annotation.


AGI is Coming… Is HCI Ready?

IC Distinguished Lecture

Feb. 1, 2024
Meredith Ringel Morris, Ph.D.
Director for Human-AI Interaction Research at Google DeepMind

We are at a transformational junction in computing, in the midst of an explosion in capabilities of foundational AI models that may soon match or exceed typical human abilities for a wide variety of cognitive tasks, a milestone often termed Artificial General Intelligence (AGI). Achieving AGI (or even closely approaching it) will transform computing, with ramifications permeating through all aspects of society. This is a critical moment not only for Machine Learning research, but also for the field of Human-Computer Interaction (HCI). 

In this talk, I will define what I mean (and what I do NOT mean) by "AGI," and describe my journey from AGI skeptic to believing we are within five to ten years of reaching this milestone. I will then discuss how this new era of computing necessitates a new sociotechnical research agenda on methods and interfaces for studying and interacting with AGI. For instance, how can we extend status quo design and prototyping methods for envisioning novel experiences at the limits of our current imaginations? What novel interaction modalities might AGI (or superintelligence) enable? How do we create interfaces for computing systems that may intentionally or unintentionally deceive an end-user? How do we bridge the "gulf of evaluation" when a system may arrive at an answer through methods that fundamentally differ from human mental models, or that may be too complex for an individual user to grasp? How do we evaluate technologies that may have unanticipated systemic side effects on society when released into the wild?

I will close by reflecting on the relationship between HCI and AI research. Typically, HCI and other sociotechnical domains are not considered as core to the ML research community as areas like model building. However, I argue that research on Human-AI Interaction and the societal impacts of AI is vital and central to this moment in computing history. HCI must not become a “second class citizen” to AI, but rather be recognized as fundamental to ensuring the path to AGI and beyond is a beneficial one. 


Trans Technologies

Jan. 25, 2024
Oliver Haimson, Ph.D.
Assistant Professor at University of Michigan School of Information

My forthcoming book Trans Technologies (MIT Press, 2025) examines the world of trans technologies: apps, health resources, games, art, AR/VR, and other types of technology designed to help address some of the challenges transgender people face in the world. My research team and I conducted in-depth interviews with more than 100 creators of existing trans technologies to understand the current landscape, highlight areas for future innovation, and build theory via community input around what it means for a technology to be a trans technology. This work illuminates the people who create trans technologies, the design processes that brought these technologies to life, and the ways trans people often rely on community and their own technological skills to meet their most basic needs and challenges. I will discuss how trans technology design processes are often deeply personal, focused on the technology creator's own needs and desires. Trans technology design can thus be empowering, because technology creators have the agency to create the tools they need to navigate the world. However, when trans communities are not involved in design processes, this can lead to overly individualistic design that speaks primarily to more privileged trans people's needs. Further, I will discuss some of my research group's ongoing participatory design work on trans technologies.


Emotion AI in the future of work

Jan. 18, 2024
Nazanin Andalibi, Ph.D.
Assistant Professor at the University of Michigan School of Information

Emotion AI, increasingly used in contexts ranging from the mundane (e.g., entertainment) to the high-stakes (e.g., education, healthcare, the workplace), refers to technologies that claim to algorithmically recognize, detect, predict, and infer emotions, emotional states, moods, and even mental health status using a wide range of input data. While emotion AI is critiqued for its validity, bias, and surveillance concerns, it continues to be patented, developed, and used without public debate, resistance, or regulation. In this talk, I highlight some of my research group's work focusing on the workplace to discuss: 1) how emotion AI technologies are conceived of by their inventors and what values are embedded in their design, and 2) the perspectives of the humans who produce the data that make emotion AI possible and whose experiences are shaped by these technologies: data subjects. I argue that emotion AI is not just technical but sociotechnical and political; it enacts and shifts power, and it can contribute to marginalization and harm despite claimed benefits. I advocate that we (and regulators) need to shift how technological inventions are evaluated.


Three Lessons Towards Ethical Tech 

IC Distinguished Lecture

Jan. 11, 2024
Casey Fiesler, Ph.D.
Associate Professor of Information Science at University of Colorado Boulder

Hardly a day passes without a new technology ethics scandal in the news, from privacy violations on social media to biased algorithms to controversial data collection practices. In computing practice and research, good intentions sometimes still lead to negative consequences. So what can we do as technologists, researchers, and educators? This talk describes three lessons from my research that inform ethical practices in studying, building, and teaching about technology: (1) remembering the humans present in data, towards ethical research practices; (2) unpacking ethical debt (as a parallel to technical debt) in technology design and research as the precursor to the kinds of unintended consequences that underlie many controversies; and (3) a broader perspective on computing education that puts thoughtful critique of technology in everyone's hands.