Sonic Interaction Design – Week 2

Monday

After the feedback on our live performance, we decided to run some user tests. For that, we intuitively recorded several new versions of our sounds and then edited them with a vocoder and with the Massive synthesizer. Three different collections of sounds came out of this.

Tuesday

This morning we listened together to all the sounds we had produced and selected the ones we wanted to use for the user testing.

The sounds were numbered from 1 to 18, and three users from different majors tested them. Afterwards, we grouped the sounds that received similar feedback and photographed the results. We then tried to work out possible scenarios for these sounds, looking for the interactions and the kinds of information our concept could convey. (The pictures of the user testing and its results are below.)

(Information)

  • Getting attention: sound 6 (come_on_1), sound 16 (cold_1)
  • Notification of information: sound 5 (whistle_2)
  • Light: sound 4 (whistle_1), sound 17 (light_1)
  • Electricity: sound 14 (electricity_1)
  • Prompting (demanding) an action from the user: sound 18 (light_2)
  • General talking: sound 1 (messaging_1)

(Interactions)

  • Lifting: sound 18 (light_2)
  • Tapping: sound 9 (acknowledge_1)

Wednesday

In the morning, we analyzed the sounds resulting from the user testing. In doing so, we found similarities in the tone and character of the collected sounds. We did the analysis in three categories: Information, Interaction, and Other possibilities. The following pictures show how our sounds are grouped by what they have in common.

For the next step, we are going to compose the language of the cube from our sounds, using our ‘sound blocks’ to build sentences of information. Alongside the basic interactions, we will also try to make a simple base sound with different frequencies that can tell users about the cube’s current processing status.

A first ‘sound block’ sentence, or segment, could be composed from the following samples:
talking sound (messaging_1) + notification (whistle_2) + light (light_1) + demanding an action (light_2)
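As a minimal sketch of this idea, the "sentence" is just the individual sound blocks played back to back. The block names and the tones standing in for the recorded samples (messaging_1, whistle_2, light_1, light_2) are hypothetical placeholders; in practice the blocks would be loaded from the actual recordings.

```python
import math

SAMPLE_RATE = 8000  # low rate keeps the sketch small

def tone(freq_hz, dur_s, amp=0.5):
    """Generate a plain sine tone as a list of float samples."""
    n = int(SAMPLE_RATE * dur_s)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

# Hypothetical stand-ins for the recorded samples: each "sound block"
# is just a short tone here, named after the sample it would replace.
blocks = {
    "talking": tone(440, 0.30),   # messaging_1
    "notify":  tone(880, 0.15),   # whistle_2
    "light":   tone(660, 0.20),   # light_1
    "demand":  tone(990, 0.25),   # light_2
}

# A "sentence" of the cube's language: the blocks concatenated in order.
sentence = (blocks["talking"] + blocks["notify"]
            + blocks["light"] + blocks["demand"])
```

Swapping the order of the blocks would yield a different "sentence" from the same vocabulary, which is the point of the block approach.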


Further tryouts:

Basic sound with different frequencies: because working out these sounds would take more time than we have in this course, we decided to continue with the sounds we had already used for the user testing and to edit the chosen ones a bit.

Thursday

In the morning, Muchi and Ju recorded the video in Ju’s flat while Alessa worked on the poster.
For the video, we decided to include a voice-over that documents the interaction with the device.
This method gave us the opportunity to explain several possible scenarios in one scene and helped us keep the video short.

For the design of the poster, we decided to work with the cube, in which we explain what the device is (the motivation behind it, the concept, as well as its character and its language). At the bottom, we went deeper into our process and wrote down what we had expected.

In the afternoon, we cut the video and made the sounds (voice-over) for it.

Sonic Interaction Design – Week 1

Tuesday

We plan to make the interaction sounds for an intelligent device that steers energy consumption and helps you live more sustainably. The device has different modes (temperature, air quality, electronic devices, light). Through a specific interaction (shaking, rubbing, throwing, breathing, holding/squeezing…) you can ask for the current state, for example how many lights are on or which electronic device is on but not being used right now (the device gives you acoustic feedback). Afterwards, you have the opportunity to react to this information through another interaction and, for example, switch off all the lights that are not in use.

Wednesday

To get a sense of possible directions, we analyzed what kind of company people keep, or could get used to, in their homes.

Personality of the device may be similar to:

  • Pet
  • Cat (searching for attention, unpredictable)
  • Dog (submissive, loyal)
  • Bird (chatty)
  • Robot/Machine (Hero, strong)
  • Partner (affectionate, funny, helpful)
  • Grandmother/-father (wise)
  • Child (cute, curious, chatty)
  • Coach (instructive, teacher-like)
  • Friend (Helpful, Always there if you need him)

Behaviours/Questions:

Possible behaviors or interactions of the cube could be:

  • Searching for attention
    • „Can I help you?“: When a person comes back home, greeting & asking
    • „Play with me, let’s do something!“: When a person is doing/ not doing something for a long time
    • „Hello / Goodbye“
  • Indication (Informing, Warning):
    • „You are using too much energy, you know.“: Nagging/complaining
    • „Status is OK / relax / snooze”
    • „Alert! There is a thief in the house!“ : Emergency Situation alarming
    • „The mail/message/package arrived.“ : Joyful notice
    • „Help me / I have a problem!“
  • In-hand interaction:
    • Picking up: „Oh, yeah I’m ready“ (Ready for an action)
    • Holding: „I’ll tell you about the problem“ (Load information)
    • Shaking: „Solve the problem? no problem!“ (Do some smart thing)
    • Rubbing: „Some other info about your home…“ (Change the info)
    • Spinning: „Haha! Let’s have fun!“ (Showing something a person can do at home)
    • Hugging: „Ohhh… I love you, too.“ (Making relaxing atmosphere)
    • Pressing: „Hello / Bye Bye“
    • Tapping: „Okay, I’ll shut up.“ (short tap)/ „Hey, don’t hit me.“ (strong hit)
  • Feedback after interaction/Self-noise making:
    • Putting down: „Okay, I think you are done“. (Mumbling sound going down)

Conclusion

At the end of the day, we analyzed each of the personalities mentioned above and checked which behaviors they might fit best.
For our concept, we think that a robot like R2D2 could suit our interests very well. He communicates only through sounds and easily conveys emotions, needs, or problems. He does this in a way that doesn’t annoy the person, who is instead thankful for the information and feedback the droid gives. This is exactly the aesthetic we would like to achieve. For deliberately annoying sounds, on the other hand, we could use cat sounds, which are often intuitively understood by humans.

Thursday

After we had collected our first sounds and gotten the inspiration for our Wizard-of-Oz prototype, we started to edit our samples. We used a tiny part of our angry-cat sounds, shifted the pitch, and applied multiple effects. This helped us generate a few first messages, which we defined using our storyboard:

We tried to define our sounds based on the flow of sentences. Short sounds may be associated with „Hello“ or „Ok“, whereas longer sounds can be questions, depending on the pitch at the end of the sentence.
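This short/flat versus long/rising distinction can be sketched with a simple pitch sweep: a statement stays at one frequency, while a question glides upward toward the end. The frequencies and durations below are illustrative guesses, not values from our actual sounds.

```python
import math

RATE = 8000  # samples per second

def sweep(f_start, f_end, dur_s, amp=0.5):
    """Sine tone whose pitch glides linearly from f_start to f_end (Hz)."""
    n = int(RATE * dur_s)
    out, phase = [], 0.0
    for i in range(n):
        # Instantaneous frequency interpolated over the clip.
        f = f_start + (f_end - f_start) * i / n
        phase += 2 * math.pi * f / RATE
        out.append(amp * math.sin(phase))
    return out

statement = sweep(500, 500, 0.15)  # short, flat pitch: "Hello" / "Ok"
question  = sweep(400, 800, 0.60)  # longer, pitch rising at the end
```

The same generator covers both cases: a flat sweep (equal start and end frequency) reads as a statement, a rising one as a question.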



We then realized that these sounds only work to a certain extent; the course of the sound was not specific enough.
Inspired by the techniques behind R2D2, where some sounds were spoken and then edited with an effect, we tried the same procedure to create a second set of sounds. To our disadvantage, we found out that editing spoken words until they are unrecognizable is harder than we first thought, especially if the result is supposed to match what we had imagined.

Friday

After our second mentoring, and Simon’s tip to take more inspiration from the human voice, we recorded some vocal sounds. We simply spoke some sentences, imagining the device could talk:

„Hey“ / „Hello“ = Searching for attention
„The light in the kitchen is still on“ = giving information
„Could you switch off the light in the kitchen?“ = request to do something
„Shall I switch off the light?“ = offering help
„Ok“ / „Done“ = mission accomplished

After that, we edited the sounds with different effects to give them a more digital and robotic character. Because we didn’t have much time left before our presentation, we couldn’t find effects that suited our ideas.
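One classic effect for this kind of robotic character is ring modulation: multiplying the voice signal by a low-frequency sine carrier. This is a stand-in sketch, not the effect chain we actually used, and the plain tone standing in for a voice recording is a placeholder.

```python
import math

RATE = 8000  # samples per second

def ring_modulate(samples, carrier_hz):
    """Multiply a signal by a sine carrier -- a classic 'robot voice' effect."""
    return [s * math.sin(2 * math.pi * carrier_hz * i / RATE)
            for i, s in enumerate(samples)]

# Placeholder for a recorded voice clip: one second of a plain 300 Hz tone.
voice = [0.5 * math.sin(2 * math.pi * 300 * i / RATE) for i in range(RATE)]

# A low carrier (tens of Hz) gives a metallic warble on top of the voice.
robot = ring_modulate(voice, 40)
```

Raising the carrier frequency pushes the effect from a warble toward the buzzy, inharmonic timbre usually associated with robot voices.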

Even for us, it was quite hard to interpret the sounds we had created during the presentation, and for the other listeners, who heard the sounds for the first time, it was even harder.

From the feedback, we took away that we should focus even more on the affordance of the sounds.
We decided to create a collection of sounds and then do some user testing with them to find out which sounds are associated with which interaction.