Musical Skin: A Dynamic Interface for Musical Performance

Heng Jiang 1, Teng-Wen Chang 2, Cha-Lin Liu 3

1,2 Graduate School of Computational Design, National Yunlin University of Science and Technology
3 Department of Multimedia and Animation Art, National Taiwan University of Arts

{1 g9834706, 2 tengwen}@yuntech.edu.tw, 3 [email protected]

Abstract. Compared to pop music, the audience for classical music has decreased dramatically. One reason may be that communication between classical music and its audience depends on aural expression such as timbre, rhythm and melody in the performance; the fine details of classical music, as well as the emotion implied among the notes, remain implicit to the audience. We therefore apply a new medium called dynamic skin to build an interface between performers and audiences. This interface, called "Musical Skin", is implemented with a dynamic skin design process using the results of gesture analysis of performer and audience. The two-skin system of Musical Skin is implemented with virtual visualization, actuators and sensible spaces, and the implementation is tested through scenarios and interviews.

Keywords: Dynamic skin, Sensing technology, Musical performance, Scenario

1  Introduction

In human-computer interaction, we often focus on how computers amplify human activities. Among those activities, music performance accounts for a small share because of its complex and emotional expression. Until now, only a few conferences or research efforts, notably NIME, have aimed at exploring musical expression with computers. Yet musical expression provides a strong interface that can touch and move an audience's heart. With state-of-the-art HCI technology, addressing this issue should yield an effective interface for bridging performer and audience.

1.1  Musical Expression as an Interface Between Performer and Audience

Compared to pop music, the audience for classical music has decreased dramatically. One reason may be that communication between classical music and its audience depends on aural expression such as timbre, rhythm and melody in the performance. The fine details of classical music, as well as the emotion implied among the notes, remain implicit to the audience. A good performer is therefore the key to a music performance: one who can play not only the notes but also the emotion among them. Given the distance to the stage, the audience can only access the music through its result, not its process. A dynamic interface that senses the inner feeling of the performer and responds to the reaction of the audience is therefore an interesting and valid HCI research topic.

1.2  New Media Interface: Dynamic Skin

To implement a dynamic interface between performer and audience, we adapt the concept of dynamic skin from architectural research. Dynamic skin began as a façade system that reacts to the environment surrounding a building. Figure 1 shows a dynamic tensile structure: a unit can be built from this movable tensile structure, and with an added microcontroller it can be programmed to perform corresponding variations [1]. Responsive façades have been attracting more and more attention, and the invention of sensors and actuators has fulfilled designers' requirements [2].

Fig. 1. Dynamic structure.

For example, FLARE (2009) is an architectural dynamic-skin unit system developed by WHITEvoid in Berlin, Germany. It can clad a building's surface, each unit consisting of one metal sheet driven by a cylinder. A computer manages the system and its sensors to communicate with conditions outside and inside through ambient light, lengthening and shortening each FLARE unit with its cylinder to produce visual effects, as shown in figure 2 [3].

Fig. 2. FLARE-façade.

Exploring the interface aspect of skin, Hyposurface offers another way to look at the performance of dynamic skin. Aegis Hyposurface is a responsive architectural dynamic skin in which the angle and position of the thin metal sheets on its surface are driven by individual mechanical structures. It conveys a wave-like experience: the dynamic skin gives feedback through variations in the position and form of the whole array of metal sheets, creating visual effects on a curved surface, as shown in figure 3 [4].

Fig. 3. Aegis Hyposurface.

The structure of Hyposurface uses units to constitute a wall and build a dynamic skin over a large area. After receiving signals, the dynamic skin can change in diverse ways, from single points to the whole surface. Conceptual analysis of this case shows that the structure of a dynamic skin can generate display patterns or outlines to achieve wave effects.

1.3  An Augmented Interface Based on Dynamic Skin is Needed for Supporting the Communication Between Performer and Audience

With dynamic skin technology, including sensors and actuators, a new type of interface is no longer limited by its physical form but is capable of sensing the performer and the reactive feedback of the audience. The interaction technique of this dynamic interface is based on the input and output of a computational unit; that is, it is accomplished by the signal-transmission mechanism between human and interface [5], as illustrated in figure 4 below:

Fig. 4. The mechanism of signal transmission.

In this research, we aim to explore an augmented interface based on dynamic skin technology to support the communication between performer and audience.

2  The Problem

To explore the computational feasibility of an augmented interface using dynamic skin technology, the research problem is divided into two parts: how to convert sound-based expression (music) into physical expression (dynamic skin), and how to ensure two-way communication between performer and audience through the skin developed. Adapting the basic technologies of dynamic skin, a sensible structure and an interaction design process, this research takes two approaches: (1) a sensor-based approach for gathering information from both performer and audience, and (2) an interaction system design process to model the interactive communication between performer and audience.

In the first approach, augmenting musical expression in the dynamic skin, sensor technology collects data from both performer and audience during the performance. The collected data are analyzed and encoded, then transformed into a visual performance shown on two skins: one onstage and the other on the façade. The second approach, interaction design, models the interaction between performer and audience along with the music itself. Using an interaction design process, each interactive path (music-skin-audience, performer-skin-audience, music-skin-performer) is designed and analyzed.

3  Analysis

Physical changes in performers and audience are reflected in their behaviors, such as skin temperature, heartbeat, voice, and body movements. Since these onstage or offstage behavioral changes can be measured and analyzed, we can use this body information to interact with the skin. Responsive technology can be divided into sensing and actuation. Sensing technology transforms a space into a sensible space: such spaces can convey physical information (chemical, biological, and so on) into reusable electronic devices and output signals. Sensing technology blurs the relationship between physical and virtual information and leads us to an irregular but rich concept of innovative interfaces [6]. Given the approaches above, a correct mapping between behaviors and performance-related information is needed. We therefore gather mapping information in three categories: performer, audience, and dynamic skin structure design.

3.1  Performer-based

In this stage, we collect the mapping from music to emotional expression, as well as the subtle physical changes of performers on stage (within the limits of sensor technology). Interviews and behavioral observation are conducted to understand which sensors can collect the desired information, and questionnaires on emotional expression are used to analyze the possible mapping.

3.2  Audience-based

On the audience side, how the audience behaves and reacts is the mapping targeted for study. Since classical music is usually performed in a concert hall, the audience outside the hall is the challenge at this stage. Two steps are observed and recorded: audience behavior during the performance in the concert hall, and audience reaction after the performance. In addition, informal interviews are conducted to explore the mapping and to further develop the interaction process for this research.

3.3  Dynamic Skin Structure

With the mapping and interaction process analyzed in the previous stages, the structure of the dynamic skin can be designed and simulated before implementation, in three steps. First, the dynamic structure is adjusted to contain the components developed in the second step. Second, the mappings are transformed into the design of the required sensors and connecting wires for triggering the interaction and gathering information from performer or audience. Third, the network connection is explored, since two skins are designed. The mapping mechanism is shown in table 1.

Table 1. Mapping used in the prototype: Musical Skin.

Sensing unit | Sensor                              | Item        | Computing unit                | Signal         | Feedback
Music        | Microphone                          | Music       | Sonic processing unit         | Volume         | Onstage and façade: unit vertical rotation
Music        | Microphone                          | Music       | Sonic processing unit         | Pitch          | Onstage and façade: unit horizontal rotation
Performers   | Polar Wearlink®31 heartbeat sensor  | Heartbeat   | Polar heartbeat receiver HRMI | ECG            | Onstage and façade: light color (red ↔ white)
Performers   | Temperature sensor                  | Temperature | Processing unit               | Digital signal | Onstage and façade: light brightness (bright ↔ dark)
Performers   | Optical sensor                      | Movement    | Processing unit               | Digital signal | Onstage and façade: skin wave
Audience     | Voice sensor                        | Voice       | Processing unit               | Analog signal  | Onstage: flash light; façade: ambient light
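For illustration, the rule base in table 1 can be stated compactly in code. The following is a minimal Python sketch of the mapping; the action names are hypothetical labels invented here, not identifiers from the prototype.

```python
# Rule base transcribed from Table 1: (sensing unit, signal) -> feedback per skin.
# All action names are hypothetical labels for illustration only.
MAPPING = {
    ("music", "volume"):          {"onstage": "unit_vertical_rotation",   "facade": "unit_vertical_rotation"},
    ("music", "pitch"):           {"onstage": "unit_horizontal_rotation", "facade": "unit_horizontal_rotation"},
    ("performer", "heartbeat"):   {"onstage": "light_color_red_white",    "facade": "light_color_red_white"},
    ("performer", "temperature"): {"onstage": "light_brightness",         "facade": "light_brightness"},
    ("performer", "movement"):    {"onstage": "skin_wave",                "facade": "skin_wave"},
    ("audience", "voice"):        {"onstage": "flash_light",              "facade": "ambient_light"},
}

def feedback_for(unit: str, signal: str, skin: str) -> str:
    """Return the feedback action for a sensed signal on a given skin."""
    return MAPPING[(unit, signal)][skin]

# Example: an audience voice signal drives ambient light on the facade skin.
assert feedback_for("audience", "voice", "facade") == "ambient_light"
```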

4  The Solution: Musical Skin

With the analysis of performer, audience, and dynamic skin structure, we implement a prototype called "Musical Skin" with a double-skin system: one skin on stage and another outside the concert hall, namely the onstage skin and the façade skin. The onstage skin collects information from the performer and the music, and then outputs feedback from the audience. The façade skin, in turn, collects information from the audience and outputs the performance via the physical skin and the music speakers. The system diagram of Musical Skin is shown in figure 5.

Fig. 5. System diagram of Musical Skin.
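To make the double-skin flow in figure 5 concrete, here is a minimal Python skeleton, assuming hypothetical class and method names (the paper does not specify its software structure): performer and music signals drive both skins, while audience signals drive the façade and are fed back on stage.

```python
class Skin:
    """Stand-in for one dynamic skin; actuate() would drive servos and LEDs."""
    def __init__(self, name: str):
        self.name = name

    def actuate(self, action: str) -> None:
        print(f"{self.name} skin: {action}")  # placeholder for real actuation

def route(source: str, signal: str, onstage: Skin, facade: Skin) -> None:
    # Performer/music information appears on both skins; audience reactions
    # are shown on the facade and mirrored back to the performer on stage.
    if source in ("performer", "music"):
        onstage.actuate(f"render {signal}")
        facade.actuate(f"render {signal}")
    elif source == "audience":
        facade.actuate(f"render {signal}")
        onstage.actuate(f"feedback {signal}")

onstage, facade = Skin("onstage"), Skin("facade")
route("music", "volume", onstage, facade)
route("audience", "applause", onstage, facade)
```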

4.1  System Design

Musical Skin comprises three parts: (a) sensors collecting the signals, (b) a computing unit processing the collected signals, and (c) output to the dynamic skin for performance. Different inputs generate different responses from the skin. The whole diagram of signal and information flow is shown in figure 5. The signal process is exemplified here by two signals, heartbeat and sound, following the input/computing/output stages:

1. Input: heartbeat and sound signals (the volume and rhythm of the music) from the performer and the music trigger the system. Once signals are received, the system collects the sound signals from the music until the performance is done. For heartbeat, the performer wears a wireless heartbeat sensor; the system records the first 5 minutes of a calm condition as baseline data, and uses this baseline to detect variations in heartbeat during the performance.

2. Computing: when a sound signal is received, the computing unit applies a Fast Fourier Transform (FFT) to analyze and encode the signal, then sends the result to the controller (an Arduino) of the dynamic skin to actuate the skin's performance. Similarly, the heartbeat signal from the performer passes through the analyzer and encoder before being sent to the skin to actuate the color change of the RGB LEDs.

3. Output: once signals are encoded and sent to the actuators, the different skin units respond to the different signals received. The movements as well as the lights are all part of the skin units, which act and perform as if the skin were the collection and reflection of all signals received from performer, audience, and music. The onstage skin reacts, gives feedback to the performer, and plays as part of the performance; the façade skin works with the speakers and the music as part of the visualization and musical expression of the performance, giving feedback to and collecting information from the audience.
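As a sketch of the computing stage just described (not the authors' actual code), the Python fragment below extracts volume and dominant pitch from one audio frame with an FFT, measures heart-rate deviation from the 5-minute calm baseline, and packs the results into a byte frame for the Arduino. The sample rate and the 3-byte encoding are assumptions made for illustration.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz; assumed audio capture rate

def analyze_frame(samples: np.ndarray) -> tuple[float, float]:
    """Return (volume, pitch in Hz) for one audio frame using an FFT."""
    volume = float(np.sqrt(np.mean(samples ** 2)))      # RMS volume
    windowed = samples * np.hanning(len(samples))       # window to reduce leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    pitch = float(freqs[np.argmax(spectrum[1:]) + 1])   # strongest bin, skipping DC
    return volume, pitch

def heartbeat_deviation(baseline_bpm: float, live_bpm: float) -> float:
    """Relative deviation from the calm baseline; drives the red/white LED color."""
    return (live_bpm - baseline_bpm) / baseline_bpm

def encode(volume: float, pitch: float, hb_dev: float) -> bytes:
    """Hypothetical 3-byte frame for the Arduino controller (each value 0-255)."""
    clip = lambda x: max(0, min(255, int(x)))
    return bytes([clip(volume * 255),           # assumes samples normalized to [-1, 1]
                  clip(pitch / 2000.0 * 255),   # assumes pitch of interest below 2 kHz
                  clip((hb_dev + 1.0) * 127)])  # calm heartbeat centered at 127
```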

4.2  Sensor Components

Sensor components fall into three groups: performer, music, and audience, each with technology suited to its activities. (1) Performer: the sensors for performers capture physical information while performing on stage; they are (a) a heartbeat sensor, (b) a temperature sensor, and (c) an optical sensor. Each sensor is wireless so that it does not disturb the performance. (2) Music: the sensor for music is a sonic sensor, using a microphone to collect the sound and a computing unit as a synthesizer to analyze and encode the signals. (3) Audience: a voice sensor senses the volume of the audience's response, and an optical sensor captures audience gestures.
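For the heartbeat path, the Polar WearLink belt transmits wirelessly to the HRMI receiver, which the computing unit can poll over a serial port. The sketch below uses pyserial; the "G1" command and the reply format follow our reading of the HRMI serial protocol and should be treated as assumptions to check against the datasheet.

```python
import serial  # pyserial

def read_heart_rate(port: str = "/dev/ttyUSB0") -> int | None:
    """Poll the HRMI receiver for one averaged heart-rate value (bpm)."""
    with serial.Serial(port, 9600, timeout=1) as ser:
        ser.write(b"G1\r")  # request one heart-rate value (assumed command)
        reply = ser.readline().decode(errors="ignore").split()
        return int(reply[-1]) if reply else None  # last field assumed to be bpm
```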

 

 

Fig. 6. Unit and unit movement: horizontal and vertical rotation.

4.3  Motion Components

The dynamic skin is composed of a set of motion components (as shown in figure 6). Each unit is a pyramid 17.5 cm wide, 17.5 cm long, and 14 cm high; 4 x 4 units form a cluster for the dynamic interface (as shown in figure 7).

Fig. 7. Cluster for dynamic interface.

The cluster receives signals from the computing unit and maps them to different skin performances according to the rule base. The power source of the dynamic skin is servo motors: each unit is controlled by two servos located at its back. The controller used in this prototype is an Arduino, which drives the two servos for vertical and horizontal rotation of each unit (as shown in figure 8).

Fig. 8. Double servo motor control status of horizontal and vertical rotation.
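Since each unit is posed by two servos behind an Arduino, the computing unit only needs to stream angle pairs. The sketch below assumes a hypothetical one-line-per-unit serial protocol ("unit,horizontal,vertical"); the actual firmware protocol is not described in the paper.

```python
import serial  # pyserial

def send_unit_pose(ser: serial.Serial, unit: int, horizontal: int, vertical: int) -> None:
    """Send one unit's two servo angles (0-180 degrees) to the Arduino.
    The "unit,h,v" line format is a hypothetical protocol for illustration."""
    horizontal = max(0, min(180, horizontal))
    vertical = max(0, min(180, vertical))
    ser.write(f"{unit},{horizontal},{vertical}\n".encode())

# Example: pose a 4 x 4 cluster into an alternating wave pattern.
if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:
        for row in range(4):
            for col in range(4):
                angle = 90 + 30 * ((row + col) % 2)  # alternate units across the cluster
                send_unit_pose(ser, row * 4 + col, angle, 180 - angle)
```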

5  Scenario

When the music starts, the system captures the heartbeat, temperature, and posture of the performer and senses the music being played. The skin begins to perform at that moment: units swing up, down, right, and left according to the volume and the variation of the tune, reinforcing our feeling for the music's tempo and tension. When the system senses the performer's heartbeat quickening in a spirited part of the music, the light on the skin shifts from white to red to present the performer's soaring emotion. The skin system also senses the performer's body movement and waves in response, and the red light becomes brighter to mark the emotional summit of the music. After the performance finishes, the satisfied audience rises to applaud; the system senses the applause and glittering light appears on the wall of the hall. Seeing the twinkle, the performer realizes the show was a success and, encouraged by everyone, is willing to give more shows. People outside the performance hall stop to appreciate the wonderful performance of the musical skin.

6  Conclusion

Testing and evaluation of the structure design showed that the structure makes the skin interface act less visibly than intended, and that there are delays in the servos driving the performance skin. This is because the servos are mechanical devices that cannot communicate with the skin directly, so the structure of the unit must be improved.

6.1  Findings and Contribution

From observing music appreciation, we identified the problem of inaction in musical performance. To solve it, our method is to design a dynamic skin that corresponds to the music-playing scenario. Based on observations and interviews on both sides, we arranged adaptive sensing technology on the skin so that the interface senses and gives feedback. The goal is to support three channels of communication (performer and audience, performer and dynamic skin, audience and dynamic skin) and to combine them to create a new user experience. In this study we learned how to take the dynamic skin design process as a foundation; the reasoning behind Musical Skin further advances the design of the skin's structure and movement and yields a design process for musical skins.

6.2  Future Works

The current Musical Skin interface is still at the testing stage and may not yet convey the emotion the performer wants to communicate to the audience. We will therefore continue to improve the sensing units and the structure and appearance of the interface to achieve better communication between performer and audience.

References

1. Sterk, T.: Using Actuated Tensegrity Structures to Produce a Responsive Architecture. In: ACADIA, pp. 85-93 (Year)
2. Hu, C., Fox, M.: Starting From The Micro: A Pedagogical Approach to Designing Responsive Architecture (2003)
3. WHITEvoid interactive art & design, http://www.flare-facade.com/
4. Goulthorpe, M.: AEGIS HYPOSURFACE©. Routledge (2001)
5. O'Sullivan, D., Igoe, T.: Physical Computing: Sensing and Controlling the Physical World with Computers. Thomson Course Technology (2004)
6. Tsai, C.-C., Liu, C.-L.C., Chang, T.-W.: An Interactive Responsive Skin for Music Performers. In: Beilharz, K., Johnston, A., Ferguson, S., Chen, A.Y.-C. (eds.) Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010), pp. 399-402. University of Technology Sydney, Sydney, Australia (2010)