Sonic Interaction Design – Week 1


We plan to create the interaction sounds for an intelligent device that monitors energy consumption and helps you live more sustainably. The device has different modes (temperature, air quality, electronic devices, light). Through a specific interaction (shaking, rubbing, throwing, breathing, holding/squeezing…) you can ask for the current state, for example, how many lights are on or which electronic device is on but not in use right now (the device gives you acoustic feedback). Afterwards, you have the opportunity to react to this information through another interaction, for example, switching off all the lights that are not in use right now.


To get a sense of possible directions, we analyzed what kinds of company people are used to, or could get used to, in their homes.

Personality of the device may be similar to:

  • Pet
    • Cat (attention-seeking, unpredictable)
    • Dog (submissive, loyal)
    • Bird (chatty)
  • Robot/Machine (heroic, strong)
  • Partner (affectionate, funny, helpful)
  • Grandmother/Grandfather (wise)
  • Child (cute, curious, chatty)
  • Coach (instructive, teacher-like)
  • Friend (helpful, always there when you need them)


Possible behaviors or interactions of the cube could be:

  • Searching for attention
    • „Can I help you?“: When a person comes back home, greeting & asking
    • „Play with me, let’s do something!“: When a person is doing/ not doing something for a long time
    • „Hello / Goodbye“
  • Indication (Informing, Warning):
    • „You are using too much energy, you know.“: Nagging/complaining
    • „Status is OK / relax / snooze“
    • „Alert! There is a thief in the house!“ : Emergency Situation alarming
    • „The mail/message/package arrived.“ : Joyful notice
    • „Help me / I have a problem!“
  • In-hand interaction:
    • Picking up: „Oh, yeah I’m ready“ (Ready for an action)
    • Holding: „I’ll tell you about the problem“ (Load information)
    • Shaking: „Solve the problem? No problem!“ (Do something smart)
    • Rubbing: „Some other info about your home…“ (Change the info)
    • Spinning: „Haha! Let’s have fun!“ (Showing something a person can do at home)
    • Hugging: „Ohhh… I love you, too.“ (Making relaxing atmosphere)
    • Pressing: „Hello / Bye Bye“
    • Tapping: „Okay, I’ll shut up.“ (short tap)/ „Hey, don’t hit me.“ (strong hit)
  • Feedback after interaction/Self-noise making:
    • Putting down: „Okay, I think you are done“. (Mumbling sound going down)


At the end of the day, we analyzed each of the personalities mentioned above and checked which behaviors might fit them best.
For our concept, we think that a robot like R2D2 could suit our interests very well. He communicates only through sounds and easily conveys emotions, needs, or problems. He does so in a way that does not annoy the person, who is instead thankful for the information and feedback the droid gives. This is exactly the aesthetic we would like to achieve. For the more nagging messages, we could use cat sounds, which humans often understand intuitively.


After we collected our first sounds and gathered inspiration for our Wizard-of-Oz prototype, we started to manipulate our samples. We took a tiny part of our angry-cat recordings, edited its pitch, and applied multiple effects. This helped us generate a few first messages, which we defined using our storyboard:
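The kind of pitch editing we did in our audio tool can be sketched in code. The following is a minimal, illustrative sketch (not the tool we actually used) of naive pitch shifting by resampling, assuming the sample is a mono list of floats in the range −1.0 to 1.0:

```python
import math

def pitch_shift(samples, semitones):
    """Naively pitch-shift a mono sample by resampling.

    Like speeding up a tape: raising the pitch also shortens the sound.
    `samples` is a list of floats in [-1.0, 1.0]; `semitones` is the
    shift (positive = higher).
    """
    ratio = 2 ** (semitones / 12)      # frequency ratio per semitone
    out_len = int(len(samples) / ratio)
    out = []
    for i in range(out_len):
        pos = i * ratio                # fractional read position
        j = int(pos)
        frac = pos - j
        nxt = samples[j + 1] if j + 1 < len(samples) else samples[j]
        # linear interpolation between neighboring samples
        out.append(samples[j] * (1 - frac) + frac * nxt)
    return out
```

Shifting up by 12 semitones doubles the playback rate, so the sound comes out an octave higher and half as long, which is roughly what happens when an angry cat meow is turned into a short, chirpy message.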

We tried to define our sounds based on the flow of sentences. Short sounds may be associated with „Hello“ or „Ok“, whereas longer sounds can represent questions, depending on the pitch at the end of the sentence.
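The idea that pitch contour carries meaning can be illustrated with a small sketch: a sine sweep whose frequency rises toward the end reads as a question, while a short, flat one reads as an acknowledgement. The parameter values below are illustrative assumptions, not our actual sound files:

```python
import math

SAMPLE_RATE = 8000  # assumed sample rate for this sketch

def chirp(start_hz, end_hz, duration, rate=SAMPLE_RATE):
    """Sine sweep whose frequency glides from start_hz to end_hz.

    A rising sweep (end_hz > start_hz) mimics the pitch contour of a
    question; a flat or falling one reads more like a statement.
    """
    n = int(duration * rate)
    out = []
    phase = 0.0
    for i in range(n):
        f = start_hz + (end_hz - start_hz) * i / n  # linear glide
        phase += 2 * math.pi * f / rate             # accumulate phase
        out.append(math.sin(phase))
    return out

# a short, flat "Ok" blip vs. a longer rising "question"
ok_sound = chirp(600, 600, 0.1)
question = chirp(400, 900, 0.5)
```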

We then realized that these sounds only work to a certain extent. The contour of the sound was not distinctive enough.
Inspired by the techniques used for R2D2, where some sounds were recorded as speech and then processed with an effect, we tried the same procedure to create a second set of sounds. Unfortunately, we found out that editing spoken words until they are unrecognizable is harder than we first thought, especially if the outcome should match what we had imagined.


After our second mentoring session, and Simon's tip to take more inspiration from the human voice, we recorded some vocal sounds. We simply spoke a few sentences, imagining how the device could talk:

„Hey“ / „Hello“ = Searching for attention
„The light in the kitchen is still on“ = giving an information
„Could you switch off the light in the kitchen?“ = request to do something
„Shall I switch off the light?“ = offering help
„Ok“ / „Done“ = mission fulfilled

After that, we edited the sounds with different effects to give them a more digital, robotic character. Because we didn't have much time before our presentation, we couldn't find effects that suited the ideas we had.
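One classic effect family for exactly this digital, robotic character is ring modulation, which multiplies the voice by a sine carrier (the technique behind many vintage robot voices). A minimal sketch, assuming the recording is a mono list of floats:

```python
import math

def ring_modulate(samples, carrier_hz, rate=8000):
    """Multiply the voice by a sine carrier — a classic 'robot' effect.

    The output contains the sum and difference frequencies of voice and
    carrier, which smears the formants into a metallic timbre while
    keeping the rhythm and melody of the original speech intact.
    """
    return [s * math.sin(2 * math.pi * carrier_hz * i / rate)
            for i, s in enumerate(samples)]
```

Low carriers (tens of Hz) give a growling tremolo; carriers of a few hundred Hz or more smear the voice into a metallic, machine-like timbre while the rhythm of the sentence stays intact.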

Even for us, it was quite hard to interpret the sounds we created, and for the other listeners, who heard the sounds for the first time during the presentation, it was even harder.

From the feedback, we took away that we should focus even more on the affordances of the sounds.
We decided to create a collection of sounds and then do some user testing with them to find out which sounds are associated with which interactions.