AI 'Performers' Take Center Stage and Get Creative with People in Public Spaces

AI performers (clockwise): dancer, improv comedian, storyteller, music maker

Researchers at Georgia Tech are seeking to improve "artificial intelligence literacy" and give people opportunities to engage directly with AI systems to better understand the technology's capabilities.

AI-assisted tech is increasingly common, but actions by these autonomous programs are often hard to spot in people’s daily use of devices and online services.

Georgia Tech's Expressive Machinery Lab has developed exhibitions where the AI agents are front and center and people are able to create with them in public spaces. These AIs have included a dance partner, visual storyteller, music maker, and comedic improv performer.

“There are common misconceptions about what AI is, what it is capable of, and how it works,” said Brian Magerko, professor of digital media and director of the Expressive Machinery Lab. “AI systems in public spaces that can engage as active participants in co-creative activities have the potential to serve as avenues for AI literacy. We believe this work pushes these efforts forward considerably.”

The exhibitions involving live interactions between people and AIs – what the researchers call co-creative experiences – have taken place across the country since 2013 at academic conferences, art festivals, museums, and other venues.  

The multi-year endeavor has resulted in a design blueprint developed by the researchers that shows how to build AI experiences for public spaces where audiences or performers can create with an AI partner.

“Museums and other public spaces can serve as alternative venues for AI literacy initiatives, complementing formal education and broadening access to opportunities to interact with and learn about AI by both adults and children who may not have AI devices in their homes or schools,” said Duri Long, human-centered computing Ph.D. student at Georgia Tech and a researcher involved in the work.

Researchers encountered challenges unique to making "creative AIs," such as building systems that can engage people with different tastes, perform over sustained periods of time, and adapt to unpredictable human behavior.

For example, the AI dance partner, known as LuminAI and the oldest of the group, doesn't have fingers, so any naughty hand gestures aren't processed into the AI's dance routine.

“Our AI agents are unlike many other AIs, which usually have a specific task to accomplish,” Long said. “Our work involves open-ended co-creative AI installations where there is not a single clear goal or other reward function to optimize the AI’s behavior. Our AIs are meant to create or collaborate with a human counterpart, and that looks different every time.”

While AIs in general often have large databases of sensor data (images, temperature readings, etc.) to improve their understanding of the world, in creative areas such as dance, theater, and other performing arts there is limited data from which AIs can pull.

The researchers overcame this in part by having their AIs learn from human partners in real time and decide what might be a suitable action. Professional performers, who want a greater degree of control, could take turns with the AI partner for a more structured performance. Conversely, an AI that is part of a museum exhibit might guide participants on how to start an activity in order to engage people early on.
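The approach described above can be illustrated with a toy sketch. This is a hypothetical illustration, not the lab's actual code: the class name, methods, and gesture strings are all invented here to show the general idea of an agent that learns from its human partner in real time, responds with variations rather than optimizing a fixed reward, and invites idle participants to start.

```python
import random

class CoCreativeAgent:
    """Toy co-creative partner: remembers observed gestures and replies.

    Purely illustrative -- not the Expressive Machinery Lab's implementation.
    """

    def __init__(self):
        self.repertoire = []  # gestures learned from the human so far

    def observe(self, gesture):
        # Learn from the human partner in real time.
        self.repertoire.append(gesture)

    def respond(self):
        # There is no single reward function to optimize: the agent
        # picks a remembered gesture and varies it, so the performance
        # looks different every session.
        if not self.repertoire:
            return "invite"  # guide an idle participant to start the activity
        base = random.choice(self.repertoire)
        return base + "-variation"

agent = CoCreativeAgent()
print(agent.respond())      # "invite" -- prompts a visitor who hasn't moved yet
for move in ["sway", "spin", "wave"]:
    agent.observe(move)
    print(agent.respond())  # a variation on something the human already did
```

A real installation would replace the string gestures with sensor data (e.g., skeletal tracking) and a richer memory model, but the turn-taking loop stays the same shape.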

Social interaction was also important to consider and, counter to some technology trends, the researchers discovered that human-to-human interaction could increase as a result of AI involvement.

LuminAI, the dancing AI, prompted a couple to do the salsa, two friends to start a synchronized dance routine, and a group of teenagers to perform in a dance circle.

The comedic AI in the roster, called Robot Improv Circus, allows an audience to watch someone interacting in VR with the AI agent and provide feedback to the person by using voice prompts and gestures to trigger in-game reward systems. This led to several groups of friends encouraging each other to try different actions with the comedic AI.

The research was published in the Proceedings of the Creativity & Cognition Conference 2019. The paper, "Designing Co-Creative AI for Public Spaces," was co-authored by Duri Long, Mikhail Jacob, and Brian Magerko.

News Contact

Joshua Preston
Research Communications Manager
GVU Center and College of Computing