SOUND MIRROR


SCI6338: Intro to Computational Design I, Fall 2019
Advisor: Jose Luis García del Castillo y López
Teammates: Beilei Ren & Haoyu Zhao


Project Overview

We listen to others all the time, but when was the last time you listened to the sound of your inner self? “Sound Mirror” is a design experiment that visualizes the sound we create, in real time, as parametric geometries, inviting a closer look at the inner emotions of the participants. Most of the time our faces appear peaceful and calm, with our emotions hidden deep below. While talking or playing music to the “Sound Mirror,” the participant sees his or her facial appearance distorted by the changing sound input. Here we take sound as a medium to disclose inner emotions that can otherwise hardly be noticed. Welcome to the world of “Sound Mirror,” where you can see the whispers and hear the faces.

WeChat Image_20191025190247.png

Sound Input and Parametric Geometry

The first part of the project is a parametric design for sound visualization. Real-time sound input from the PC microphone is recorded as a waveform. To visualize the waveform as parametric geometries, three variables are extracted from the sound input: peak note Pn, peak frequency Pf, and wavelength l, shown in the following waveform graph. These three variables are then used as parameters controlling the height, the cell size, and the rotation angle of the visualized geometries.

soundwave.png
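
As a rough sketch of this step, the snippet below records one analysis window from the default microphone and derives the three variables with an FFT. It assumes Python with the numpy and sounddevice libraries; the names extract_parameters, RATE, and DURATION, and the reading of l as samples per cycle, are illustrative choices rather than the project's exact code.

```python
import numpy as np
import sounddevice as sd  # assumed capture library; any PCM source works

RATE = 44100       # samples per second
DURATION = 0.5     # length of one analysis window, in seconds

def extract_parameters(samples, rate=RATE):
    """Derive peak note Pn, peak frequency Pf, and wavelength l
    from one window of microphone samples."""
    pn = float(np.abs(samples).max())              # peak note: loudest sample
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    pf = float(freqs[spectrum[1:].argmax() + 1])   # peak frequency, skipping DC
    l = rate / pf if pf > 0 else 0.0               # wavelength as samples per cycle
    return pn, pf, l

# Record one window and print its three parameters.
window = sd.rec(int(RATE * DURATION), samplerate=RATE, channels=1,
                dtype="float32")
sd.wait()                                          # block until capture finishes
print(extract_parameters(window[:, 0]))
```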

Responsive Parametric Geometry

The sound visualization is responsive to real-time sound input. A sequence of parametric forms is generated from continuous sound input, engaging time as a fourth dimension of the parametric design.

responsive geometry.png
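
To make the mapping concrete, here is a minimal sketch of how one window's parameters could drive one frame of the geometry: Pn scales the cell heights, Pf sets the cell count, and l sets the rotation angle. All scaling constants here are illustrative assumptions, not the project's actual values.

```python
import numpy as np

def sound_geometry(pn, pf, l, extent=1.0):
    """One frame of the sound form, driven by the three parameters."""
    n = int(np.clip(pf / 100.0, 4, 64))               # cell count from peak frequency
    theta = (l % 100.0) / 100.0 * np.pi / 2           # rotation angle from wavelength
    xs = np.linspace(-extent, extent, n)
    x, y = np.meshgrid(xs, xs)
    xr = x * np.cos(theta) - y * np.sin(theta)        # rotate the cell field
    yr = x * np.sin(theta) + y * np.cos(theta)
    z = pn * np.cos(np.pi * xr) * np.cos(np.pi * yr)  # heights scaled by peak note
    return x, y, z
```

Calling sound_geometry() once per analysis window yields the sequence of frames; stacking those frames over time is what introduces the fourth, temporal dimension.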

Face Images Input and Distorted Projections

The second part of the project experiments with projecting face images onto the parametric geometries of the sound visualization. In the following example, a portrait photo of Beilei is projected onto a geometry generated from an angry sound piece, also performed by her. The resulting image becomes fragmented, exploded into an irregular configuration of pieces, translating the emotional information of the auditory input into an expressive visual representation.

face_projection.png
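
Here is a hedged sketch of the projection step, reusing sound_geometry() from above: the portrait is resampled so each cell of the surface carries one color, then drawn with matplotlib. The file name portrait.png is a hypothetical stand-in for the photo, and the parameter values are arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

photo = mpimg.imread("portrait.png")     # hypothetical portrait file (RGB floats)
x, y, z = sound_geometry(pn=0.8, pf=1200.0, l=36.0)

# Resample the photo to the grid resolution: one color per cell.
n = z.shape[0]
rows = np.linspace(0, photo.shape[0] - 1, n).astype(int)
cols = np.linspace(0, photo.shape[1] - 1, n).astype(int)
colors = photo[np.ix_(rows, cols)][..., :3]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, z, facecolors=colors, shade=False)
ax.set_axis_off()
plt.show()
```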

The same approach of projecting face images onto parametric geometries is applied to various sound inputs performed with different emotions. Three conditions are shown here: a default condition with no sound input; a highly fragmented configuration with angry sound input; and a regular, elevated configuration with happy sound input. However, in this short experiment, not all emotions could be represented as distinctive, recognizable geometries. Further work is needed to optimize the algorithm so it can represent subtle emotions in speech.

face_projection2.png

Tangible Sound Forms

The parametric geometries generated from selected sound inputs are 3D printed into tangible sound forms. A high-reflectivity material is painted on the top surfaces to create these distorted mirrors, which cast distorted reflections of the participants. When a participant holds a sound mirror, a distorted selfie is displayed in it, raising awareness of the emotion hidden in the sound that usually goes unnoticed.

DSC_0721.jpg

Sound Mirrors

SOUNDMIRROR.png

Real-time Face Recognition with Kinect

To develop the experiment further, we connect real-time sound input with real-time face image input as the last step of the technological process. With Kinect, the face of the participant sitting right in front can be detected, and the real-time image imported into the algorithm, making real-time projection possible. Below is the sequence of representations that appear with Kinect face recognition.

realtime_face_projection.png
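
We do not reproduce the Kinect SDK calls here; as a stand-in, the sketch below shows the same capture-detect-crop loop with an ordinary webcam and OpenCV's bundled Haar cascade. The cropped face frame is what would feed the projection step.

```python
import cv2

# Webcam stand-in for the Kinect face input.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (fx, fy, fw, fh) in faces:
        face = frame[fy:fy + fh, fx:fx + fw]   # crop to feed the projection step
        cv2.rectangle(frame, (fx, fy), (fx + fw, fy + fh), (0, 255, 0), 2)
    cv2.imshow("face input", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break

cam.release()
cv2.destroyAllWindows()
```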

Responsive Interface and User Test

The last part of the project is a user test with real-time sound and face image input. In this case, Haoyu performs two types of sound input: speech (a Borges poem) and music (a Bach piece). Both the speech and the music input are treated as descriptions of Haoyu's inner emotional state. The result of this user test is a responsive interface that can be played as a digital sound mirror, giving participants a chance to reflect on their inner emotions and to immerse themselves in this cross-perceptual fantasy.

responsive_interface.png
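
Putting the pieces together, the interface reduces to one loop: capture a sound window, map it to a geometry frame, and redraw. The sketch below assumes the extract_parameters() and sound_geometry() helpers (and the RATE and DURATION constants) from the earlier snippets are in scope; the real interface additionally swaps in the live Kinect face crop in place of the plain color map.

```python
import sounddevice as sd
import matplotlib.pyplot as plt

plt.ion()                                       # interactive mode: redraw each frame
fig = plt.figure()
ax = fig.add_subplot(projection="3d")

for _ in range(200):                            # run for ~200 sound windows
    window = sd.rec(int(RATE * DURATION), samplerate=RATE, channels=1,
                    dtype="float32")
    sd.wait()
    pn, pf, l = extract_parameters(window[:, 0])
    x, y, z = sound_geometry(pn, pf, l)
    ax.clear()
    ax.plot_surface(x, y, z, cmap="viridis")    # the current digital sound mirror
    plt.pause(0.01)                             # let the window refresh
```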