Friday, March 27, 2009

Don Norman gets all Emotional

Emotional Design: Donald Norman


In this book Don Norman sets out to cover how designers can design with the emotions of users in mind. He talks about a number of ways designers can approach and affect the visceral, behavioral, and reflective levels. He also discusses emotions in robots, both as a human interface and as an AI goal/reward system.


I like that Norman acknowledges the large role that emotional connection plays in how people use objects, and more obviously how they feel about how well those objects/devices work.


Just as Norman says, it is not very often that people use the reflective part of their mind, and as such it is rare that people truly use logic to make decisions in real life. Often people are doing things where there is not enough time or information to make a decision based solely on logic, and that is where emotion plays a large part.


Say you are looking to buy a new laptop. If you went into the process just looking for a complete comparison based on numbers and price, you would likely go crazy with all the options presented. But then you decide that one laptop looks better than another. Perhaps it is pleasing to look at, which could signify that someone put a lot of thought into its construction. Maybe the keyboard feels good, something that might be important when using it to write a long paper. Maybe when you pick it up, it feels sturdy, possibly indicating good build quality. All of these aspects affect how you feel about the laptop, and while these feelings may not be indicative of reality, they are often good heuristics to live by.


Of course, if we make the jump and say that emotions are just subconscious, self-adjusting mental heuristics, then we could say that Don Norman's vision of robots with emotions is already a reality. Modern AI systems already have rudimentary learning mechanisms that update heuristics which, in essence, act the same way emotions do.
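To make that analogy concrete, here is a minimal sketch (my own illustration, not anything from the book) of a self-adjusting heuristic: an agent keeps a running "gut feeling" score for each option, nudges it toward each new reward, and then simply picks whatever it currently feels best about. All the names and numbers are assumptions for the example.

```python
def update_feeling(feeling, reward, rate=0.2):
    """Nudge a stored feeling toward the latest reward (an EMA update)."""
    return feeling + rate * (reward - feeling)

# The agent's current "gut feelings" about two hypothetical options.
feelings = {"option_a": 0.0, "option_b": 0.0}

# Simulated experience: option_b tends to pay off more.
history = [("option_a", 0.2), ("option_b", 1.0),
           ("option_a", 0.1), ("option_b", 0.8)]

for option, reward in history:
    feelings[option] = update_feeling(feelings[option], reward)

# After a few experiences the agent "prefers" option_b without doing
# any explicit logical comparison at decision time.
best = max(feelings, key=feelings.get)
```

The point is only that a number updated by experience and consulted at decision time behaves a lot like a preference or an emotion: it is fast, it needs no deliberation, and it is roughly (but not perfectly) tuned to reality.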


My final thought is on Norman's suggestion that we use emotion as output from systems, as a way to quickly convey messages to people. This is pretty much the best idea in the book. By taking advantage of the emotional connotations that people place on everything from colors and sounds to facial expressions, we could create a subconscious vocabulary that would give machines an efficient way to communicate their status to people.


Say your computer is running out of hard drive space; perhaps a small face, or something else representing your computer, in the corner of your screen begins to look worried, its expression getting more severe as the problem gets worse. Now obviously the expressions would have to fit the situation, otherwise a person would just get irritated at the false positives and turn the system off. But at a quick glance the system could inform you of the status of your machine, with more detailed information available on request.
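The idea above could be sketched as a simple mapping from a system metric to an expression. This is my own toy illustration, not anything prescribed in the book; the thresholds and faces are arbitrary assumptions.

```python
def disk_expression(fraction_free):
    """Pick a face for the status icon based on the fraction of free disk space."""
    if fraction_free > 0.25:
        return ":)"   # plenty of room: content
    elif fraction_free > 0.10:
        return ":|"   # getting tight: mildly concerned
    elif fraction_free > 0.05:
        return ":("   # low: worried
    else:
        return "D:"   # nearly full: alarmed

print(disk_expression(0.50))  # :)
print(disk_expression(0.08))  # :(
```

The severity ramp is the important part: a graded expression conveys both that something is wrong and roughly how wrong, all in a single glance.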

Tuesday, March 24, 2009

Shocking

The Man Who Shocked The World
by Thomas Blass

This book is a biography of Stanley Milgram, the famed experimental psychologist. As far as biographies go it was fairly standard, with a mix of interesting character analysis and typical (if bland) biographical details.

I find Milgram's ideas and experimental methods interesting, but I am not sure whether his experiments were the best way to empirically prove his research points. While his shock experiments were certainly enlightening, it is unclear what exactly he thought they would prove about obedience. They showed that, within the circumstances of a procedure perceived to be some sort of experiment, people would be obedient to a distressing degree, but he did not seem to dig deeper to find out why people acted that way, or whether the results could be reproduced in other sorts of situations.

Milgram most certainly shocked the world, but what he meant to do with his controversial experiments is still somewhat unclear to me.

Thursday, March 12, 2009

Robot control: wiimotes


Exploring the Use of Tangible User Interfaces for Human-Robot Interaction: A Comparative Study
Cheng Guo, Ehud Sharlin. University of Calgary

This paper was a comparison of the efficiency of controlling a robot using two different types of input devices: a set of buttons (aka a keyboard) and a 3D tangible user interface (aka a pair of Wiimotes).

There were two tasks that users had to perform for the study, using either the keyboard system or the Wiimote+Nunchuck system to control a Sony AIBO. The first was to navigate through a maze; the second was to make the robot pose in a variety of ways.

Overall the researchers found that the Wiimotes worked slightly better than the keyboard for the navigation task and much better for the posing task. This is likely because there was a more direct conceptual mapping from the Wiimote motions to the actual motions of the AIBO.

Additionally, people preferred the Wiimotes because they generally felt it was easier to remember which commands did what.

In summary, I would say that it makes sense to use an input device whose constraints match the task you are trying to perform. If you are controlling something in three dimensions, you want a device that senses 3D input. If you are performing a task in 2D, then all you need is 2D control.