How choreography can help robots come alive

Consider this scene from the 2014 film Ex Machina: A young nerd, Caleb, is in a dark room with a half-naked femmebot, Kyoko. Nathan, a brilliant, drunk roboticist, stumbles in and abruptly tells Caleb to dance with the bot Kyoko. To start things off, Nathan presses a panel on the wall: the bedroom lights suddenly shift to a menacing red, and Oliver Cheatham’s classic “Get Down Saturday Night” starts playing. Kyoko – who seems to have done this before – wordlessly begins to dance, and Nathan joins his robotic creation in an intricate, pelvic-thrusting choreography. The scene suggests that Nathan imbued his creation with disco functionality, but how did he choreograph the dance into Kyoko, and why?

Ex Machina may not answer these questions, but the scene points to an emerging area of robotics research: choreography. Broadly, choreography is the practice of making decisions about how bodies move through space and time. In the dance sense, choreography articulates movement patterns for a given context, generally optimizing for expressiveness rather than utility. To be in tune with the choreography of the world is to be aware of how people move and interact in complex, technology-filled environments. Choreo-roboticists (that is, roboticists who work choreographically) believe that incorporating dance-like gestures into machine behaviors will make robots seem less like industrial devices and, instead, more alive, more empathetic, and more attentive. This interdisciplinary intervention could make robots easier to live and work with – no small feat, given their proliferation in consumer, medical, and military contexts.

Although a concern with the movement of bodies is central to both dance and robotics, historically the two disciplines have rarely overlapped. On the one hand, the Western dance tradition has maintained a generally anti-intellectual streak that poses serious challenges for those interested in interdisciplinary research. George Balanchine, the acclaimed founder of the New York City Ballet, told his dancers, “Don’t think, dear, do.” As a result of this kind of culture, the stereotype of dancers as servile bodies, better seen than heard, unfortunately calcified long ago. Meanwhile, the field of computer science – and robotics by extension – demonstrates comparable, though distinct, body problems. As sociologists Simone Browne, Ruha Benjamin, and others have demonstrated, there is a long history of emerging technologies that reduce human bodies to mere objects of surveillance and speculation. The result has been the perpetuation of racist and pseudoscientific practices such as phrenology, mood-reading software, and AIs that claim to know whether you are gay from the look of your face. The body is a problem for computer scientists, and the field’s overwhelming response has been technical “solutions” that seek to read bodies without meaningful feedback from their owners. That is, an insistence that bodies be seen but not heard.

Despite this historical division, it may not be an exaggeration to consider roboticists choreographers of a specialized kind, and to think that integrating choreography and robotics could benefit both fields. The movement of robots is not usually studied in terms of meaning and intentionality, as it is for dancers, yet roboticists and choreographers share the same fundamental concerns: articulation, extension, strength, form, effort, and power. “Roboticists and choreographers aim to do the same thing: understand and convey subtle choices in movement within a given context,” writes Amy LaViers, certified movement analyst and founder of the Robotics, Automation, and Dance (RAD) Lab, in a recent National Science Foundation-funded article. When roboticists work choreographically to determine a robot’s behavior, they are making decisions about how human and nonhuman bodies move expressively in intimate proximity to one another. This differs from the utilitarian parameters that govern most robotics research, where optimization reigns supreme (does the robot do its job?) and where what a device’s movement means, or how it makes someone feel, is of no apparent consequence.

Madeline Gannon, founder of the research studio AtonAton, leads the field in her exploration of robot expressiveness. Her installation Manus, commissioned by the World Economic Forum, exemplifies the possibilities of choreo-robotics both in its brilliant choreographic consideration and in its feats of innovative mechanical engineering. The piece consists of 10 robot arms displayed behind a transparent panel, each fully articulated and brightly lit. The arms resemble the production design of techno-dystopian films like Ghost in the Shell. Robot arms of this kind are built for repetitive work and are usually deployed for utilitarian tasks, such as painting car chassis. Yet when Manus is activated, its robotic arms display none of the repetitive rhythms expected of the assembly line; instead they appear alive, each moving individually to interact animatedly with its surroundings. Depth sensors installed at the base of the robots’ platform track the movement of human observers through the space, measuring distances and responding iteratively. This tracking data is distributed throughout the robotic system, functioning as a shared vision for all the robots. When a passerby moves close enough to any robot arm, it will take a closer “look,” tilting its “head” in the direction of the stimulus and then leaning in to engage. Puppeteers have used such simple, subtle gestures for millennia to imbue objects with animus. Here, they have the cumulative effect of making Manus look curious and very much alive. These small choreographies give the appearance of personality and intelligence. They are the functional difference between a random row of industrial robots and the coordinated movements of intelligent pack behavior.
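The interaction loop described above – a shared pool of sensor readings driving each arm's proximity-triggered "gaze" – can be sketched in code. This is a hypothetical illustration, not AtonAton's actual implementation; the threshold values, the `gaze_command` function, and the 2D coordinate scheme are all assumptions made for clarity.

```python
import math

# Hypothetical sketch (not AtonAton's actual code): a shared pool of
# depth-sensor readings drives a proximity-triggered "gaze" for each arm.

GAZE_RADIUS = 1.5  # meters; assumed engagement threshold

def nearest_observer(arm_pos, observers):
    """Return the closest tracked observer and its distance from this arm."""
    best, best_d = None, float("inf")
    for obs in observers:  # observers come from sensors shared by all arms
        d = math.dist(arm_pos, obs)
        if d < best_d:
            best, best_d = obs, d
    return best, best_d

def gaze_command(arm_pos, observers):
    """Tilt toward a nearby observer, or idle if nobody is close enough."""
    target, d = nearest_observer(arm_pos, observers)
    if target is None or d > GAZE_RADIUS:
        return {"action": "idle"}
    # Tilt the arm's "head" toward the stimulus; lean in when very close.
    heading = math.atan2(target[1] - arm_pos[1], target[0] - arm_pos[0])
    return {"action": "gaze", "heading": heading,
            "lean_in": d < GAZE_RADIUS / 2}

# All ten arms consume the same tracking data, giving the group shared vision.
shared_observers = [(2.0, 0.5), (0.3, 0.2)]
for arm_pos in [(0.0, 0.0), (6.0, 0.0)]:
    print(arm_pos, gaze_command(arm_pos, shared_observers))
```

Because every arm reads from the same observer list, the group responds as a coordinated ensemble rather than as ten independent machines, which is what produces the "pack" effect the installation is known for.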
