16.05.2017 – Evaluation and Re-Evaluation
- “Interaction criticism: three readings of an interaction design, and what they get us”, Bardzell, J., Bolter, J., & Löwgren, J. 2010. In Interactions 17:2, 32–37.
- “Usability evaluation considered harmful (some of the time)”, Greenberg, S., & Buxton, B. 2008. In Proceedings of CHI ’08.
- “What do usability evaluators do in practice?: an explorative study of think-aloud testing”, Nørgaard, M., & Hornbæk, K. 2006. In Proceedings of DIS ’06.
- “Introducing Evaluation”, Preece, J., Rogers, Y., & Sharp, H. 2002. In Interaction Design. Wiley.
- “Staying open to interpretation: engaging multiple meanings in design and evaluation”, Sengers, P., & Gaver, B. 2006. In Proceedings of DIS ’06.
Exercise in class
We started this lesson with an exercise. The day before, we received the following task from Joëlle: prepare the criteria you think could be used to evaluate the objectives of your group project for the IAD process class.
We asked ourselves:
- How much can other designers benefit from our work?
- What have we personally learned?
- Have we learned new things in terms of relationships?
- Does this project have a positive impact on our development as designers?
- Have our participants benefited?
- Do our video, concept and blog really convey our thoughts and ideas?
After we shared these questions, we evaluated and answered the questions of another group; in our case, it was group Room. We then answered the following three questions that they had asked themselves:
- Have we provoked people in their lives?
Shaën, Vinz and I agreed with the first part of this question: they did provoke! The only thing we noticed was that they did not really convey their scenarios as real-life situations. The viewer rather had the impression of watching people in some sort of parallel universe.
- Was our installation biased?
Pro: We liked that the installation was forcefully “guided” by technology. Only one of the four objects was visible at a time, and after each time loop the displayed object changed.
Con: The viewer had no possibility to interact, which was a pity in some cases. Only one of the prototypes included an interaction.
- How leveled was the installation?
In our opinion, the installation was negatively weighted. Since their goal was an almost balanced, yet slightly negative installation, I think they achieved it.
Joëlle’s input on the evaluation
First, she introduced us to the scientific paradigm:
- double-blind tests/studies
- evaluation (measuring tool, objective metrics, stats)
- publication (peer-reviewed)
Joëlle then advised us to take one of these elements and “mix and match” it with the methodology of engineering, which is very similar to that of interaction design. We should try to invent our own methods and not stick too closely to the ones we learned in class. Always be creative about our own evaluation and experiments!
We then categorized different methodology terms into before, during and after.
- before is always about setting objectives!
- Personal experience, goals, related work; evaluating ideas, intuitions, hypotheses; field research, interviews, desk-based research (sources, data, precedents, related work); discussing with peers and advisors; co-design, participatory design.
- during means setting up controlled experiments: criteria and rules!
- User testing, or generally testing the idea; iteration; prototyping, documenting, technical aspects; time checking/retro-planning (planning backward); going back to the field; enacting, storytelling; discussing with peers and advisors.
- after, we ask ourselves: have we achieved our goals?
- Reflection: goals achieved? (evaluating reached objectives, missed objectives, contributions); study analysis, stats, typology of users, quotes; report/publication/exhibition/dissemination; methods, lessons learned, guidelines/toolkit.