Research pairs human creativity with AI to develop unique collaborations
Your next dance partner might not be a person.
A new project from the Georgia Institute of Technology allows people to get jiggy with a computer-controlled dancer, which “watches” the person and improvises its own moves based on prior experiences. When the human responds, the computerized figure or “virtual character” reacts again, creating an impromptu dance couple based on artificial intelligence (AI).
The LuminAI project is housed inside a 15-foot-tall geodesic dome, designed and constructed by Georgia Tech digital media master’s student Jessica Anderson, and lined with custom-made projection panels for dome projection mapping.
The surfaces let people watch their own shadowy avatar as it struts alongside a virtual character named VAI, which learns how to dance by paying attention to which moves the current user (and everyone before them) performs, and when. The more dance moves it sees, the richer the computer's dance vocabulary becomes. It then uses this vocabulary as the basis for future improvisation.
“You hear music and follow the sound into a large geodesic dome, and you see a figure appear on the walls,” says Anderson.
The concept behind LuminAI is to install three instances of the Viewpoints AI system inside a geodesic dome, creating a 3D shadow theater space that can serve both as a performance technology and as a playful public installation.
Anderson designed and constructed the installation with the support of the Goat Farm Arts Center; Mikhail Jacob is the software lead, and Brian Magerko is the faculty advisor for the project. The diagrams below are Anderson's early concept illustrations of the interaction within the dome.
“When you move, you notice that the figure responds with its own gestures. It starts to mirror you, then learns your movements,” continues Anderson. “The more you play, the more it learns, and the more fun you have improvising dance with this virtual partner.”
According to Anderson, the installation asks questions about sociality and interaction, but it also intentionally asks questions about collaboration between the fields of technology and art.
LuminAI was featured in a dance-and-technology performance called Post, in which T. Lang Dance performed set choreography with avatars and virtual characters inside the dome.
Post is the fourth and final installment of Lang’s Post Up series, which focuses on the stark realities and situational complexities after an emotional reunion between long-lost souls.
The dome itself creates a shadow theater space for dance-reactive interaction between the dancers and the virtual characters.
Post is the culmination of a series of works that “investigates historical pasts and brings them to a historical present through working with designers,” according to T. Lang. “I was interested to find the way technology could possibly translate to spirituality.”
Post was a finalist in the Field Experiment competition, part of the Hambidge Art Auction and Gala at the Goat Farm Arts Center, where the collaborative project was unveiled for the first time.
“Co-creative artificial intelligence, or using AI as a creative collaborator, is rare,” said Brian Magerko, the Georgia Tech digital media associate professor who leads the project. “As computers become more ubiquitous, we must understand how they can co-exist with humans. Part of that is creating things together.”
The system uses Kinect devices to capture the person’s movement, which is then projected as a digitally enhanced silhouette on the dome’s screens. The computer analyzes the dance moves being performed and leans on its memory to choose its next move.
“This episodic memory is filled with experiences of how people have danced with it in the past,” said Mikhail Jacob, a computer science Ph.D. student and lead developer of the LuminAI technology. “For example, the computer learns to predict that when one person pumps their arms into the air, their partner is likely to do something similar. So on seeing that movement, the avatar might pump its arms sideways at the same pace or use that as the basis for its response.”
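The episodic memory Jacob describes can be pictured as a store of past gesture-and-response pairs that the avatar searches for something similar to what it just saw. The following is a minimal sketch of that idea, not the LuminAI codebase: gestures are simplified to short feature tuples, and the class name, data shapes, and similarity measure are all illustrative assumptions.

```python
class EpisodicMemory:
    """Toy episodic memory: remembers dance exchanges, recalls similar ones.

    Hypothetical sketch -- gestures are reduced to feature tuples like
    ("arms", "up", "fast"); the real system works on captured motion data.
    """

    def __init__(self):
        # Each episode records a gesture seen and the response that followed it.
        self.episodes = []  # list of (gesture, response) pairs

    def observe(self, gesture, response):
        """Store a new dance experience for future improvisation."""
        self.episodes.append((gesture, response))

    def recall(self, gesture):
        """Return the response paired with the most similar stored gesture."""
        if not self.episodes:
            return None

        def similarity(a, b):
            # Toy similarity: count of matching features, position by position.
            return sum(x == y for x, y in zip(a, b))

        best = max(self.episodes, key=lambda ep: similarity(ep[0], gesture))
        return best[1]


memory = EpisodicMemory()
# In a past session, an arm pump upward was answered with a sideways pump
# at the same tempo -- the kind of association Jacob describes.
memory.observe(("arms", "up", "fast"), ("arms", "sideways", "fast"))
memory.observe(("legs", "kick", "slow"), ("legs", "step", "slow"))

# A new user pumps their arms; the avatar recalls the matching-tempo response.
print(memory.recall(("arms", "up", "fast")))  # ('arms', 'sideways', 'fast')
```

The key design point is that nothing is hand-scripted: the avatar's repertoire is whatever its past partners have shown it.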
The team says this improvisation is one of the most important parts of the project.
The avatar recognizes patterns but doesn’t react the same way every time. That means the person must improvise too, which leads to greater creativity all around. All the while, the computer captures these new experiences and stores the information as a basis for future dance sessions.
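The two behaviors described above, varying the reaction rather than replaying the single best-remembered response, and folding each new exchange back into memory, can be sketched together. This is an illustrative guess at the loop, with hypothetical names and data shapes, not the project's actual algorithm.

```python
import random


def improvise(episodes, gesture, rng=None):
    """Pick a varied response to `gesture` from remembered (gesture, response)
    pairs, then store the new exchange so future sessions can draw on it.

    Hypothetical sketch: gestures are simple feature tuples.
    """
    if not episodes:
        return None
    rng = rng or random.Random()

    def similarity(a, b):
        # Toy similarity: count of matching features, position by position.
        return sum(x == y for x, y in zip(a, b))

    # Rank remembered episodes by similarity and keep a few top candidates,
    # so the avatar has room to surprise its partner.
    ranked = sorted(episodes, key=lambda ep: similarity(ep[0], gesture),
                    reverse=True)
    candidates = [response for _, response in ranked[:3]]
    response = rng.choice(candidates)  # vary the reaction between sessions
    episodes.append((gesture, response))  # capture the new experience
    return response


episodes = [
    (("arms", "up", "fast"), ("arms", "sideways", "fast")),
    (("arms", "up", "slow"), ("arms", "wave", "slow")),
    (("legs", "kick", "slow"), ("legs", "step", "slow")),
]
print(improvise(episodes, ("arms", "up", "fast")))
```

Because the chosen response is appended to `episodes`, every dance session leaves the memory a little larger, which is the feedback loop the team credits for the system's growing vocabulary.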
“Humans aren’t fully in the driver’s seat anymore. The process gives autonomy back to the computer,” said Jacob. “LuminAI forces a person to create something new — potentially something better — with their partner because they’re forced to take their (virtual) partner’s actions into consideration.”
The technology has implications beyond art. As Magerko explains, today’s AI mostly relies on instructions fed to it by humans, and programming a computer with every possible instruction is impossible.
“That’s because humans are so unpredictable,” says Magerko. “Let’s say a computer and a person are going to write a story together about a family conversation at a restaurant. The story could go in a typical fashion or veer wildly into novel territory. The computer won’t do well unless it has been programmed with all of the pieces of knowledge that the story could possibly contain. However, if it can learn that knowledge from people and prior experiences, its improvisation can become somewhat consistent and accurate and the AI learning new story content (or dance moves) becomes part of the user experience.”