Spring 2024 Speakers

GVU Lunch Lectures
Sumita Sharma

Critical AI literacy with children: in pursuit of fair and inclusive technology futures
 

Mar. 14, 2024
Sumita Sharma, Ph.D.
Postdoc Researcher in HCI, University of Oulu

ABSTRACT
Children interact with Artificial Intelligence (AI) in various direct and indirect ways, yet there is limited research on the impacts of AI on children. Further, existing studies mainly focus on cultivating, nurturing, and nudging children towards technology use and design, without promoting critical perspectives on AI. For instance, there is little discussion with children about the limitations, inherent biases, and lack of diversity in the current design and development of AI, or critical examination of the ethical aspects of technology use and design and their consequences for children and society at large. In this talk, I will present my work on critical AI literacy with young children, sharing lessons from hands-on workshops with children in Finland, India, and Japan.

Karthik Seetharama Bhat

2023 PhD Foley Scholar Award Winner

 

Envisioning Technology-Mediated Futures of Care Work


Mar. 7, 2024
Karthik Seetharama Bhat 
Human Centered Computing
Advisor: Neha Kumar 

ABSTRACT
Caregiving is a universal activity that is receiving increasing attention among technologists and researchers in the wake of the COVID-19 pandemic. Emerging technologies like conversational AI, augmented and virtual reality, and smart homes have all been described as potentially revolutionary technologies in care work, intended to automate and transform the overall care experience for caregivers and care recipients. However, such promises have yet to translate into successful deployments, as these technological innovations come up against socioculturally situated traditions of care work that prioritize human connection and interaction. In this talk, I will share empirical studies looking into how formal care workers (in clinical settings) and informal care workers (in home settings) reconcile technology utilization in care work with sociocultural expectations and norms that dissuade it. I will then discuss possible technology-mediated futures of care work by positing how emerging technologies could best be designed for and integrated into activities of care in ways that unburden care workers while ensuring quality care.

Arpit Narechania

2023 PhD Foley Scholar Award Finalist

 

Choropleth Maps: How They Can Trick You and What You Can Do About It


Mar. 7, 2024
Arpit Narechania
Computer Science
Advisor: Alex Endert 

ABSTRACT
When creating choropleth maps, mapmakers often bin (or group) quantitative data values to help show that certain areas fall within a similar range of values. For instance, a mapmaker may divide counties into high, middle, and low life expectancy. Yet, different binning methods (e.g., natural breaks, quantile) yield different groupings, wherein the same data can be presented differently depending on how it is split into bins. This flexibility can sometimes be (mis)used by journalists to present (false) narratives or by fund managers to (inappropriately) seek additional funding. To mitigate these dangers, we built a new geospatial visualization tool, Exploropleth. This system lets users upload their own data and interact with the outputs of 18+ established data binning methods, and subsequently compare, customize, and export custom maps. Feedback from cartographers and geographic information system experts highlighted the system’s potential to educate students as well as mapmakers.

Sachin Pendse

2023 PhD Foley Scholar Award Finalist

 

Computing for Mental Health Equity: Centering Identity and Power in Technology-Mediated Support


Mar. 7, 2024
Sachin Pendse
Human Centered Computing
Advisors: Munmun De Choudhury and Neha Kumar 

ABSTRACT
Online platforms and AI-based tools increasingly play a core role in how people create meaning from experiences of distress and engage with care. For example, large language model chatbots, online support communities, and personalized resources from search engines may all help an individual to contextualize their experiences of distress and find life-saving support. Technology-mediated support is thus often framed as a powerful means to reduce widespread mental health disparities and close care gaps. However, my research has demonstrated that offline inequities are paralleled in online contexts, further making it difficult for marginalized people to access care. The ability to meet diverse needs with technology-mediated support requires a deep understanding of how social inequities and technology design may together impact lived experiences with distress and care. In this talk, I present my work leveraging computational and qualitative approaches to understand these sociotechnical inequities, across diverse geographic contexts and online platforms. Building on this research, I outline my research vision for how we may consider identity and power in mental health intervention and technology design, towards acceptable and effective care for all people.  

Allie Riggs

2023 PhD Foley Scholar Award Finalist

 

Designing with Ephemera: Queering Tangible Interaction in Archival Experiences


Mar. 7, 2024
Allie Riggs
Digital Media
Advisor: Anne Sullivan

ABSTRACT
Amidst widespread efforts to diminish queer existence in public society, access to stories of queer identities, communities, and histories is vital. In particular, understanding queer archives provides a lens through which we can critically reflect on what “counts” as recorded knowledge or data and how it holds weight in our conceptions of history. In queer archives scholarship, ephemera—material traces not traditionally collected by institutions—can provide powerful, affective links to gaps in the historical record. In this talk, I discuss my work in tangible interaction design with archival ephemera that speak to marginalized, queer histories. I ask how designing with ephemeral materials in tangible embodied experiences can prompt critical reflections on the past, inspiring alternative configurations of bodies, feelings, and histories. Further, I ask how designing with ephemeral materials contributes to queering human-computer interaction (Queer HCI), deepening our understandings of tangible embodied interaction, and inspiring alternative interpretations of history.

Sang Won Lee

Exploring Dual Perspectives in Computer-mediated Empathy
 

Feb. 29, 2024
Sang Won Lee, Ph.D.
Assistant Professor in the Department of Computer Science at Virginia Tech

ABSTRACT
A common belief is that technology can play a pivotal role in enhancing individuals' capacity to empathize with others. While this is true, it is worthwhile to adopt an alternative perspective that underscores the inherent duality of empathy and emphasizes empowering the recipients of empathy. In this talk, I will focus on recent projects that explore how technologies can facilitate empathy. These approaches primarily focus on those who seek to be empathized with, helping them express, reveal, and reflect on themselves. Through these works, I propose a new framework that offers various research topics relevant to enhancing computer-mediated empathy.

Mark Braunstein

An Introduction to Healthcare AI
 

Feb. 22, 2024
Mark Braunstein, Ph.D.
Professor of the Practice Emeritus at Georgia Tech, Scientist at the Australian eHealth Research Centre

ABSTRACT
Healthcare and AI have an intertwined history dating back at least to the 1960s, when the first 'cognitive chatbot' acting as a psychotherapist was introduced at MIT. Today, of course, there is enormous interest in and excitement about the potential roles of the latest AI technologies in patient care. There is a parallel concern about the risks. Will human physicians be replaced by intelligent agents? How might such agents benefit patient care short of that? What role will they play for patients? We'll explore these questions in a far-ranging talk that includes a number of real-world examples of how AI technologies are already being deployed in hopes of benefiting physicians and their patients.

Ding Wang

Whose Responsibility? The Case for Responsible Data Practice
 

Feb. 15, 2024
Ding Wang, Ph.D.
Senior Researcher at Google Research

ABSTRACT
Diversity in datasets is a key component of building responsible AI/ML. Despite this recognition, we know little about the diversity among the annotators involved in data production. Additionally, despite being an indispensable part of AI, data annotation work is often cast as simple, standardized, and even low-skilled work. In this talk, I present a series of studies that aim to unpack the data annotation process, with an emphasis on the data workers who carry the weight of data production. These include interview studies to uncover both the data annotators’ perspective on their work and the data requestors’ approach to the diversity and subjectivity the workers bring; an ethnographic investigation in data centers to study the work practices around data annotation; and a mixed-methods study to explore the impact of worker demographic diversity on the data they annotate. While practitioners described nuanced understandings of annotator diversity, they rarely designed dataset production to account for diversity in the annotation process. This calls for more attention to a pervasive logic of representationalist thinking and counting that is intricately woven into the day-to-day work practices of annotation. By examining the structures in which annotation is done and diversity is seen, this talk aims to recover annotation and diversity from their reductive framings and to seek alternative approaches to knowing and doing annotation.

Meredith Ringel Morris

AGI is Coming… Is HCI Ready?
 

IC Distinguished Lecture
 

Feb. 1, 2024
Meredith Ringel Morris, Ph.D.
Director for Human-AI Interaction Research at Google DeepMind

ABSTRACT
We are at a transformational junction in computing, in the midst of an explosion in capabilities of foundational AI models that may soon match or exceed typical human abilities for a wide variety of cognitive tasks, a milestone often termed Artificial General Intelligence (AGI). Achieving AGI (or even closely approaching it) will transform computing, with ramifications permeating through all aspects of society. This is a critical moment not only for Machine Learning research, but also for the field of Human-Computer Interaction (HCI). 

In this talk, I will define what I mean (and what I do NOT mean) by “AGI,” and describe my journey from AGI skeptic to believing we are within five to ten years of reaching this milestone. I will then discuss how this new era of computing necessitates a new sociotechnical research agenda on methods and interfaces for studying and interacting with AGI. For instance, how can we extend status quo design and prototyping methods for envisioning novel experiences at the limits of our current imaginations? What novel interaction modalities might AGI (or superintelligence) enable? How do we create interfaces for computing systems that may intentionally or unintentionally deceive an end-user? How do we bridge the “gulf of evaluation” when a system may arrive at an answer through methods that fundamentally differ from human mental models, or that may be too complex for an individual user to grasp? How do we evaluate technologies that may have unanticipated systemic side-effects on society when released into the wild?

I will close by reflecting on the relationship between HCI and AI research. Typically, HCI and other sociotechnical domains are not considered as core to the ML research community as areas like model building. However, I argue that research on Human-AI Interaction and the societal impacts of AI is vital and central to this moment in computing history. HCI must not become a “second class citizen” to AI, but rather be recognized as fundamental to ensuring the path to AGI and beyond is a beneficial one. 

Oliver Haimson

Trans Technologies
 

Jan. 25, 2024
Oliver Haimson, Ph.D.
Assistant Professor at University of Michigan School of Information

ABSTRACT
My forthcoming book Trans Technologies (MIT Press, 2025) examines the world of trans technologies: apps, health resources, games, art, AR/VR, and other types of technology designed to help address some of the challenges transgender people face in the world. My research team and I conducted in-depth interviews with more than 100 creators of existing trans technologies to understand the current landscape, highlight areas for future innovation, and build theory via community input around what it means for a technology to be a trans technology. This work illuminates the people who create trans technologies, the design processes that brought these technologies to life, and the ways trans people often rely on community and their own technological skills to meet their most basic needs and challenges. I will discuss how trans technology design processes are often deeply personal, and focus on the technology creator’s own needs and desires. Thus, trans technology design can be empowering because technology creators have agency to create tools they need to navigate the world. However, in some cases when trans communities are not involved in design processes, this can lead to overly individualistic design that speaks primarily to more privileged trans people’s needs. Further, I will discuss some of my research group’s ongoing participatory design work designing trans technologies.

Nazanin Andalibi

Emotion AI in the future of work
 

Jan. 18, 2024
Nazanin Andalibi, Ph.D.
Assistant Professor at the University of Michigan School of Information

ABSTRACT
Emotion AI, increasingly used in mundane (e.g., entertainment) to high-stakes (e.g., education, healthcare, workplace) contexts, refers to technologies that claim to algorithmically recognize, detect, predict, and infer emotions, emotional states, moods, and even mental health status using a wide range of input data. While emotion AI is critiqued for its validity, bias, and surveillance concerns, it continues to be patented, developed, and used without public debate, resistance, or regulation. In this talk, I highlight some of my research group's work focusing on the workplace to discuss: 1) how emotion AI technologies are conceived of by their inventors and what values are embedded in their design, and 2) the perspectives of the humans who produce the data that make emotion AI possible, and whose experiences are shaped by these technologies: data subjects. I argue that emotion AI is not just technical, it is sociotechnical, political, and enacts/shifts power – it can contribute to marginalization and harm despite claimed benefits. I advocate that we (and regulators) need to shift how technological inventions are evaluated. 

Casey Fiesler

Three Lessons Towards Ethical Tech 
 

IC Distinguished Lecture
 

Jan. 11, 2024
Casey Fiesler, Ph.D.
Associate Professor of Information Science at University of Colorado Boulder

ABSTRACT
Hardly a day passes without a new technology ethics scandal in the news — from privacy violations on social media to biased algorithms to controversial data collection practices. In computing practice and research, good intentions sometimes still lead to negative consequences. So what can we do as technologists, researchers, and educators? This talk describes three lessons from my research that inform ethical practices in studying, building, and teaching about technology: (1) remembering the humans present in data, towards ethical research practices; (2) unpacking ethical debt (as a parallel to technical debt) in technology design and research as the precursor to the types of unintended consequences that underlie many controversies; and (3) a broader perspective on computing education that puts thoughtful critique of technology in everyone’s hands.