the body in\verse
Alan Macy, Mark-David Hosale, Alysia Michelle James
The body in\verse is an online, interactive performance that combines biophysical sensing, emotive-state sonification and visualization, and generative poetry. The performance offers a deep dive from the world outside ourselves, dissociated as it is by mediated technology, into the interoceptive abyss of our emotive sea. Audience members are invited to participate in a focused conversation that becomes the basis for the activity that follows. The questions address the rise of a technological culture and how it has left us wanting, consciously or not, for identification with and awareness of an “essential rhythm” that we continue to lose track of as we live mostly in cities and the aboriginal environment recedes from view. The performance environment makes it possible to control the presentation of stimuli and monitor the performer’s physical reaction based on an interpretation of nuanced emotional state, blurring the line between real-time auditory and visual content and physical experience.
A biophysical sensing system measures the emotional affect of the performer and uses that data to drive the sound, abstract imagery, and a generative poetry algorithm. The performer’s emotional affect is assessed through arousal and valence measures derived from the correlation of her heart rate and heart rate variability. A second algorithm generates poetry using conversations with the audience as source material. The source material is organized algorithmically according to its sentiment (positive to negative) and mapped to the performer’s emotional affect as assessed from the biophysical measures.
What happens?
What the audience sees and hears...
How they participate...
We begin with a conversation.
The performer (Alysia Michelle James) is close to the screen, visible from the shoulders up, as in a typical teleconference call. As the audience joins the call, she begins with some ice-breaker questions such as:
“How are you today?”
“Where are you located?” etc.
Audience members are able to answer these questions through the chat. The questions slowly become more poignant as the performer builds upon their responses.
“Why are you here?”
If an audience member responds with, “I’m here for a meaningful experience,” the performer may respond with a question like, “What is meaningful?”
Other examples of questions in this category:
“What is the most important thing we could be thinking about right now?”
“Why be ‘new normal’?”
“Who were you before?”
The perspective then moves outward with questions such as: “Are you worried about the fragility of society?”
“Are you worried about the fragility of life?”
“What makes a culture?”
“What does society avoid feeling?”
Then we shift to inward thinking with the intent of inspiring contemplation and introspective thought: “Why do we feel?”
“What do you avoid feeling?”
“Can you feel other humans?”
“Can you feel your breathing?”
“Can you feel your heart?”
This more focused conversation becomes the foundation for the tone and direction of the rest of the performance, which comprises three elements:
- A biophysical sensing system that measures the biophysical state of the performer, and then uses that data to drive the sound, abstract imagery, and a generative poetry algorithm.
- An AI algorithm that generates poetry.
- A performance that combines movement, sound, abstract imagery, and text.
Biophysical Sensing
The science behind this project is based on work first presented by James Russell in “A Circumplex Model of Affect” (Russell 1980), which has been cited roughly 14,000 times. Since 1980, psychophysiologists have continued to evolve the theory regarding the assessment of affective state, and additional physiological variables have been incorporated into subsequent studies. These measures include the electroencephalogram, pulse plethysmogram, blood pressure, blood flow, vascular resistance, and ventilation.
Recent research in bioinformatics suggests that it is possible to assess the real-time emotional state of an individual using a special class of sensors that track human characteristics such as heart rate, muscular movement, eye movement, skin temperature, and breathing (Picard 2002), characteristics that contribute to an individual’s emotional valence (range of affect, from pleasant to unpleasant) and arousal (range of activation, from deactivated to activated) (Cacioppo 2000; Scherer 2005; Chanel et al. 2006; Stickel et al. 2009; Nicolaou et al. 2011; Koelstra et al. 2012).
Valence and arousal chart from James Russell, 1980, “A Circumplex Model of Affect.”
In this work we use a real-time electrocardiogram, captured via a Biopac BioNomadix BN-ECG2 wireless data acquisition and analysis platform (https://www.biopac.com/product/bionomadix-2ch-ecg-amplifier/), to establish core valence and arousal measurements and find a baseline affective-state assessment of the performer. We ran the output of this amplifier straight into a sound interface capable of measuring down to DC (zero frequency). Valence measures are indexed by the performer’s heart rate variability (HRV) and establish the horizontal model axis, ranging from displeasure to pleasure. Arousal measures are indexed by the performer’s heart rate (expressed in beats per minute) and establish the vertical model axis, ranging from low energy (calmness, boredom) to high energy (excitement, alarm). The correlation of valence and arousal determines the assessment of emotive state: high valence and high arousal correlate to excitement, low valence and high arousal to anger, low valence and low arousal to depression, and high valence and low arousal to serenity.
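To make the quadrant mapping concrete, the following minimal sketch (in Python) shows one way the assessment could be computed. The use of RMSSD as the HRV index, the baseline calibration, and the simple thresholding are illustrative assumptions, not the exact processing used in the performance.

```python
# A minimal sketch (not the production pipeline) of mapping heart rate and
# heart rate variability onto the four quadrants of Russell's circumplex.
# Baseline values and the thresholding scheme are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Baseline:
    heart_rate_bpm: float  # resting heart rate from a calibration period
    hrv_rmssd_ms: float    # resting HRV (RMSSD) from the same period

def assess_affect(hr_bpm: float, hrv_rmssd_ms: float, base: Baseline) -> str:
    """Return a coarse affect label from arousal (heart rate, vertical axis)
    and valence (heart rate variability, horizontal axis) relative to rest."""
    high_arousal = hr_bpm > base.heart_rate_bpm
    high_valence = hrv_rmssd_ms > base.hrv_rmssd_ms
    if high_valence and high_arousal:
        return "excited"
    if not high_valence and high_arousal:
        return "angry"
    if not high_valence and not high_arousal:
        return "depressed"
    return "serene"

# Example: performer calibrated at 65 bpm and 45 ms RMSSD
print(assess_affect(88.0, 52.0, Baseline(65.0, 45.0)))  # -> "excited"
```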
It is through works such as the body in\verse that the valence and arousal data of a performer can be used to develop co-collaborative applications that increase our somatic awareness and mediate a bi-directional emotive connection between a performer and an audience, other performers, and interactive computational systems.

- Cacioppo, John T., Gary G. Berntson, Jeff T. Larsen, Kirsten M. Poehlmann, and Tiffany A. Ito. "The psychophysiology of emotion." In Handbook of Emotions, 2nd ed., edited by M. Lewis and J. M. Haviland-Jones, 173-191. New York: Guilford Press, 2000.
- Chanel, Guillaume, Julien Kronegg, Didier Grandjean, and Thierry Pun. "Emotion assessment: Arousal evaluation using EEG's and peripheral physiological signals." In Multimedia Content Representation, Classification and Security, 530-537. Berlin: Springer, 2006.
- Koelstra, Sander, Christian Mühl, Mohammad Soleymani, Jong-Seok Lee, Ashkan Yazdani, Touradj Ebrahimi, Thierry Pun, Anton Nijholt, and Ioannis Patras. "DEAP: A database for emotion analysis using physiological signals." IEEE Transactions on Affective Computing 3, no. 1 (2012): 18-31.
- Nicolaou, Mihalis A., Hatice Gunes, and Maja Pantic. "Continuous prediction of spontaneous affect from multiple cues and modalities in valence-arousal space." IEEE Transactions on Affective Computing 2, no. 2 (2011): 92-105.
- Picard, Rosalind W. "Affective medicine: Technology with emotional intelligence." Studies in Health Technology and Informatics (2002): 69-84.
- Russell, James A. "A circumplex model of affect." Journal of Personality and Social Psychology 39, no. 6 (1980): 1161-1178.
- Stickel, Christian, Martin Ebner, Silke Steinbach-Nordmann, Gig Searle, and Andreas Holzinger. "Emotion detection: Application of the valence arousal space for rapid biological usability testing to enhance universal access." In Universal Access in Human-Computer Interaction. Addressing Diversity, 615-624. Berlin: Springer, 2009.

Generative Poetry
The generative poetry algorithm is based on a layering of text-analysis techniques and is driven by the emotional affect assessment derived from the biophysical measures described above. Its basis is an artificial intelligence algorithm, in the spirit of online poem generators (e.g., https://www.writerswrite.com/poetry/poem-generators/), that uses keywords mapped to the biophysical state of the performer to generate poetic text.
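As an illustration of the sentiment layer, the sketch below ranks audience responses from negative to positive and selects the line closest to the performer’s current valence. NLTK’s VADER analyzer stands in here for whatever sentiment model the piece actually uses; the normalization of valence to a -1..+1 range is likewise an illustrative assumption.

```python
# A minimal sketch of organizing audience source material by sentiment and
# matching it to the performer's valence. VADER is a stand-in assumption.
# Requires: pip install nltk, then nltk.download("vader_lexicon").

from nltk.sentiment import SentimentIntensityAnalyzer

def rank_by_sentiment(lines: list[str]) -> list[tuple[float, str]]:
    """Score each audience response from -1 (negative) to +1 (positive)
    and return the lines sorted from most negative to most positive."""
    analyzer = SentimentIntensityAnalyzer()
    scored = [(analyzer.polarity_scores(line)["compound"], line) for line in lines]
    return sorted(scored)

def select_for_valence(ranked: list[tuple[float, str]], valence: float) -> str:
    """Pick the line whose sentiment is closest to the performer's
    current valence (normalized to the same -1..+1 range)."""
    return min(ranked, key=lambda pair: abs(pair[0] - valence))[1]
```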
Keywords are drawn from audience responses in the conversation at the beginning of the performance using a speech-to-text algorithm, then used as prompts for AI-generated poetry built with TensorFlow and GPT-2. The prompt provides the beginning of the poem and the AI completes it. For example, if an audience member states, “doing something I love,” the AI might respond, “doing something I love – which in my life I only get – from myself – and from it – out of the blue.” This is combined with a prewritten poem that is first spoken by the performer and then by the computer, but cut up in the spirit of William S. Burroughs’ cut-up technique. The cut-up prewritten poem and the AI-generated poem are combined in a call and response between two computer voices. This approach to generating text is inspired by the combinatorial literature techniques employed by OuLiPo (Ouvroir de littérature potentielle; roughly translated: “workshop of potential literature”). Through this system we can create a complex and rich system for generating a perceptually endless territory of poetic results.
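The sketch below shows the shape of the two text sources. The piece uses TensorFlow and GPT-2; here the Hugging Face transformers pipeline is substituted as one convenient way to run GPT-2, paired with a simple word-level cut-up of the prewritten poem. The model choice, sampling parameters, and fragment size are all illustrative assumptions.

```python
# A minimal sketch of the two voices: a GPT-2 completion of an audience
# prompt, and a Burroughs-style cut-up of the prewritten poem. The
# transformers pipeline here is a stand-in for the TensorFlow setup.

import random
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def complete_prompt(prompt: str) -> str:
    """Let GPT-2 continue an audience phrase into a short poetic line."""
    out = generator(prompt, max_length=40, do_sample=True, temperature=0.9)
    return out[0]["generated_text"]

def cut_up(poem: str, fragment_len: int = 3) -> str:
    """Shuffle fixed-size word fragments of the prewritten poem,
    in the spirit of Burroughs' cut-up technique."""
    words = poem.split()
    fragments = [words[i:i + fragment_len] for i in range(0, len(words), fragment_len)]
    random.shuffle(fragments)
    return " ".join(word for frag in fragments for word in frag)

print(complete_prompt("doing something I love"))
```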
Bringing it all together
As the questions become more contemplative, the audience responses are captured and presented on screen. The emotions the performer is experiencing during the performance are also presented on screen via an interface on the lower right side of the screen.

Up until this point the performance has focused solely on the performer’s face. When the last question is answered, she begins to move, slowly creating space between herself and the camera, then begins a dance performance and recites the prewritten poem:
Enter the mind and meet the truth that lives there
Don’t accept the limitations society has accepted
Break free of a paradigm
Extend yourself, develop a new sense
Embodied knowing, human connectivity
New forms of association
New ways of seeing
New ways of being
Recognize the world beyond language
Visualize the invisible
Embody the future
Make the world
Once the poem has been recited, the generative, computer-spoken poetry begins to fade in. From this point there is a feedback loop between the generative poetry algorithm and the performer, mediated by the biophysical sensing interface.
The performer’s movements and actions are influenced by the AI’s poetry, which is in turn influenced by the emotive analysis of her physiological state. Her responses are made in movement, dance, and breath while the BioNomadix continues to collect data, generating poetry that reflects the movements and heightened arousal of her body.
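The loop below sketches how these pieces could be tied together, reusing the hypothetical helpers from the earlier sketches (assess_affect, rank_by_sentiment, select_for_valence, complete_prompt, cut_up). The read_ecg and speak callables are placeholders for the acquisition hardware and the computer voices, and the affect-to-valence mapping and pacing are illustrative assumptions.

```python
# A minimal sketch of the performer/AI feedback loop, reusing the
# hypothetical helpers sketched earlier. Hardware I/O is stubbed out.

import time

# Illustrative mapping of quadrant labels onto a -1..+1 valence range.
AFFECT_TO_VALENCE = {"excited": 0.8, "serene": 0.5, "angry": -0.5, "depressed": -0.8}

def performance_loop(audience_lines, prewritten_poem, baseline, read_ecg, speak):
    """One pass per poetic utterance: sense the performer, pick audience
    material matching her affect, and alternate AI and cut-up voices."""
    ranked = rank_by_sentiment(audience_lines)
    while True:
        hr, hrv = read_ecg()                       # live HR (bpm) and HRV (ms)
        affect = assess_affect(hr, hrv, baseline)  # circumplex quadrant label
        prompt = select_for_valence(ranked, AFFECT_TO_VALENCE[affect])
        speak(complete_prompt(prompt))             # AI voice
        speak(cut_up(prewritten_poem))             # cut-up voice responds
        time.sleep(10)                             # pacing between utterances
```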
The dynamic performance showcases many emotions throughout. The movement portion then ends much as it began: a slow approach to the camera, with extra emphasis on breathing as the performer catches her breath. With her eyes looking directly into the camera, she attempts to calm her breath and heart rate with audible breathing, inviting the audience to participate in the rhythm of the breath. She asks again, “How are you feeling?”
The performance ends with a Q&A during which all three artists will respond to questions and offer information about the formation of this project.
Artists' Bios

Alan Macy
Alan Macy (www.alanmacy.com) is currently the R&D Director and a cofounder of Biopac Systems, a biomedical company. Macy is also the founder of the Santa Barbara Center of Art, Science and Technology (www.sbcast.org), a live/work residency and arts laboratory. He designs data collection and analysis systems, used by life science researchers, that help identify the meaning of signals produced by life processes. He has 35+ years of product development experience in human physiological monitoring. His recent research explores ideas of human nervous system extension and the associated influences upon perception. As an applied science artist, he specializes in the creation of cybernated art, interactive sculpture, and environments.
Mark-David Hosale
Mark-David Hosale (www.mdhosale.com, www.ndstudiolab.com) is a computational artist and composer who has lectured and taught internationally at institutions in Denmark, the Netherlands, Norway, Canada, and the United States. He is an Associate Professor in Computational Arts in the School of the Arts, Media, Performance, and Design at York University, Toronto, Ontario, Canada. His solo and collaborative work has been exhibited internationally at venues such as the SIGGRAPH Art Gallery (2005), the International Symposium on Electronic Art (ISEA2006), the BlikOpener Festival, Delft, the Netherlands (2010), the Dutch Electronic Art Festival (DEAF2012), the Biennale of Sydney (2012), Toronto’s Nuit Blanche (2012), Art Souterrain, Montréal (2013), a Collateral Event of the Venice Biennale (2015), and Currents New Media (2017), among others. Mark-David’s work explores the boundaries between the virtual and the physical world. His practice is varied, spanning from performance (music and theatre) to public and gallery-based art, and is often built on collaborations with architects, scientists, and other artists. Prominent ongoing collaborations include the IceCube South Pole Neutrino Observatory, with Rob Allison and Jim Madsen, and Performance, Art and Cyber-Interoceptive Systems (PACIS), with Erika Batdorf, Kate Digby, and Alan Macy.
Alysia Michelle James
Alysia Michelle James is a composer, movement artist, and aerialist. After completing a Bachelor of Arts in Music Composition at the College of Creative Studies at UC Santa Barbara, she pursued a niche career as a composer-aerialist and has performed internationally to original and collaborative music, most notably with William Close and the Earth Harp Collective as both an instrumentalist and an aerialist. James completed her Master of Music in Composition in 2020 at California State University, Long Beach, where she experimented with microtones and researched the connection between music and dance. She currently teaches “Music For Dance” at UC Santa Barbara.