Sunday, January 30, 2011

The worst user interface I have used.

One of the worst user interfaces I have used is that of the 3D modeling program Blender. I absolutely love using the program, but only because I have spent a long time learning its peculiarities, fighting it, and looking up instructions.

(Blender 3d 2.5 beta default UI layout)

Non Standard Interface
The first thing you notice when starting Blender is how completely different the interface looks from any other program on your operating system, whether Mac, Windows, or Linux. It is not entirely clear whether there was a single inspiration for the UI widgets used by Blender, but it is clear that they were built from the ground up for the program. This leads to behavior such as a typical horizontal spinner widget requiring the user to Shift-click on it before a value can be edited directly. Though this Shift-click-to-edit behavior is consistent across most of the interface widgets, it does not follow any established widget interaction conventions and thus is not intuitive even to people who are used to typical window-based GUIs.
(An example of a Shift-click spinner. In previous versions of the program, Shift-clicking was the only way to type in a specific value)
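To make the behavior concrete, here is a rough sketch of that interaction logic in Python. The class and method names are invented for illustration; this is not Blender's actual code, just the pattern as I experienced it.

```python
# Hypothetical sketch of the spinner's input handling (names invented for
# illustration; this is not Blender's real implementation).
class SpinnerWidget:
    def __init__(self, value=0.0, step=0.1):
        self.value = value
        self.step = step
        self.editing = False  # True when a text field is shown for direct entry

    def on_click(self, shift_held, drag_delta=0):
        if shift_held:
            # Shift-click: switch to typing an exact value.
            self.editing = True
        else:
            # Plain click/drag: nudge the value, the default behavior.
            self.value += drag_delta * self.step


spinner = SpinnerWidget(value=1.0)
spinner.on_click(shift_held=False, drag_delta=5)  # drag to adjust
spinner.on_click(shift_held=True)                 # Shift-click to type a value
print(spinner.value, spinner.editing)
```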

Speaking of windows, Blender throws out the entire window convention, opting for a single window with re-arrangeable sub-panels. It is easy to understand why you would not want to contend with layers of floating windows cluttering your screen while trying to move quickly through many different functionality modes. What is not understandable is the arbitrary solution Blender comes up with for handling sub-panels. To create a new panel, say for a new view of the 3D editing space, the user must "peel off" a new panel by dragging it from the corner of an existing panel. This is so easy to do that users frequently create new panels by accident while trying to resize existing ones. Getting rid of these new panels, however, is tricky, because there is no visible way to do it. Most panel or window systems use the convention of a small 'x' icon or the like to indicate where you click to close a panel. Blender requires you to click and hold on the same corner you used to create the panel, then drag to merge it with a neighboring panel.

(An example of the "tear-away" corner of a panel. Disregard the "plus" sign, it does something different)

I appreciate that it is occasionally necessary to develop completely new interfaces to handle complex or new interactions. There is no reason, however, to throw away classic tropes of user interfaces when they have been shown to be flexible. Re-inventing the wheel is not always necessary.

Steep Learning Curve
Typically there is a tradeoff between how easy a program is to learn and how complex its functionality can be. This is no different in 3D modeling programs, where it is typical for users to spend a long time mastering the interface and functionality. Blender, however, takes the steep learning curve to an extreme. The interface itself provides very little hint of how to use it, is minimal on visible controls, and expects the user to rely primarily on keyboard shortcuts. Keyboard shortcuts are not a bad means of control if they are consistent and have some sort of mnemonic mapping, but keyboard shortcuts in Blender change functionality depending on which mode you are in, require multiple modifier keys, and seldom make mnemonic sense. This all adds up to an imposing learning experience that can turn away new users, whether they are experienced with other 3D programs or are new to modeling entirely. It is particularly disappointing that complete newcomers must face such an imposing learning curve, since Blender is one of the few high-quality free modeling programs, the perfect price for budding artists trying to learn the ropes.
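A toy example of why the shortcuts feel arbitrary: the same key can mean something different in every mode. The bindings below are rough approximations for illustration, not an authoritative list of Blender's keymap.

```python
# Mode-dependent shortcuts, illustrated with approximate (not authoritative)
# bindings. The point is that one key maps to a different action per mode.
KEYMAP = {
    "object_mode": {"TAB": "enter edit mode", "G": "move object", "X": "delete object"},
    "edit_mode":   {"TAB": "leave edit mode", "G": "move selection", "X": "delete vertices"},
    "sculpt_mode": {"G": "grab brush", "X": "toggle symmetry"},  # illustrative only
}


def action_for(mode, key):
    return KEYMAP.get(mode, {}).get(key, "no binding")


# The same key means something different depending on the active mode:
for mode in KEYMAP:
    print(mode, "X ->", action_for(mode, "X"))
```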



Despite all of these problems, Blender eventually makes an odd sort of sense for me to use. Perhaps I suffer from Stockholm syndrome.

Best user interface I have used.

The best interface I have used is that of my second-generation iPod touch. Functionality is pretty self-evident, the user model is simple, the input mappings are either natural-feeling finger motions or explicitly written out, and above all it is (still) fun to use.


Clear Input Mappings
The biggest advantage of using a multitouch screen with (almost) no physical input buttons is that the possible input mappings are necessarily limited. If application designers want the user to interact with the interface, they must either put a button on the screen that performs a specific purpose or use one of the well-established finger gestures present throughout the system, such as the two-finger pinch for zooming or the one-finger vertical or horizontal swipe for scrolling. This leaves little room for the ambiguous input mappings found in systems with more physical buttons. Since moving to an Android mobile device more recently, this has become even clearer to me, as the functionality of the hardware buttons often changes from application to application and even within the core operating system.
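As a sketch of how direct a gesture mapping like this is, here is the arithmetic behind a two-finger pinch zoom, assuming two touch points are reported per frame. The function and its inputs are my own illustration, not Apple's API.

```python
import math


def pinch_zoom_factor(p0_start, p1_start, p0_now, p1_now):
    """Zoom factor from a two-finger pinch: the ratio of the current finger
    separation to the separation when the gesture began."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p0_now, p1_now) / dist(p0_start, p1_start)


# Fingers start 100 px apart and spread to 180 px apart -> zoom in by 1.8x.
print(pinch_zoom_factor((0, 0), (100, 0), (-40, 0), (140, 0)))
```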

Simple User Model
The simplicity of the user model in the iPod touch is another strong point. By this I mean it is clear what is going on with the system while you are using it. If you tap an application's icon, it opens and you use it. If you press the home button, the application goes away, and you do not have to worry about whether it is still running, whether you saved what you were doing, or whether you quit it properly. All of that is handled by the application and the OS, the only exception being the music player, which can run in the background. Again, this is in contrast to my recent Android device, where it is never quite clear what state a program will be in when you return to it. Did it quit when you left it? Is it still running and consuming battery? Did it crash? These questions are simplified by the single-application-at-a-time user model of the non-multitasking version of iOS.

Fun to Use
Finally, one of the things that impressed me the most when I first started using the iPod touch was how fun it was to interact with things directly with your fingers. This was compounded by the smooth and logical way things moved. I honestly spent hours idly scrolling between the home screens on the device because the movement was so satisfying. The kinetic (inertial) scrolling introduced to handle long lists was similarly satisfying. Scrolling through a long music library looking for a particular song was easy and actually fun, where performing such a task on a traditional iPod or laptop would be a long, imprecise process.
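For flavor, here is a minimal sketch of that flick-then-coast behavior, assuming a simple constant-friction decay each frame. This is my own simplification, not the actual curve iOS uses.

```python
def kinetic_scroll(position, velocity, friction=0.95, dt=1 / 60, min_speed=1.0):
    """Advance an inertial scroll: the list keeps moving after the finger
    lifts, slowing by a constant friction factor each frame.
    (Simple exponential decay -- an assumption, not Apple's implementation.)"""
    frames = []
    while abs(velocity) > min_speed:
        position += velocity * dt
        velocity *= friction
        frames.append(position)
    return frames


# A flick at 2000 px/s coasts to a stop over a couple of seconds of frames.
path = kinetic_scroll(position=0.0, velocity=2000.0)
print(len(path), "frames, stopped near", round(path[-1], 1), "px")
```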

CHI Project Ideas

Here are a few ideas from the January 27th brainstorming session:

#1
Augmented Reality Virtual Interaction Space -

You use fiducial markers to demarcate a virtual interaction space on a surface, most likely a table. You then use networked mobile devices to place virtual objects on that surface and interact with them. For example, you could place a document on the surface with one device, then pick it up with another device, or interact with it in place on the surface.

Basically Microsoft Surface, but virtual, and displayed using augmented reality on multiple mobile devices.

The purpose of the system is to create a virtual collaborative workspace on any flat surface, a real-world table for example. The users' mobile devices then act as "portals" into the virtual space by using augmented reality. If a user is a few feet away from the table and holds up their phone or pad, they see the real table but with virtual documents placed all over it. As they move closer, the documents become bigger and bigger until they place their device directly on top of where the virtual document resides in physical space; the effect would be like putting a small picture frame over an 8.5x11 sheet of paper. The user can then pull the virtual document into their device, if they wish, and take it off of the virtual workspace. Similarly, a user may take a document that resides on their device and place it into the shared virtual workspace.


The user may also edit documents directly in the virtual workspace using their device. For example, a user holds their iPad over a picture that they have placed on the workspace. They can now edit that picture through the iPad. If they wish to edit only a portion of the image, they can "picture frame" a particular section of it by placing the iPad directly onto the surface. If, on the other hand, they want to edit the whole image at once, they hold the iPad a little further away from the surface so that they can see more of the image through the picture frame, effectively getting a zoomed-out view of the workspace.


This is basically an embodied 2D virtual workspace using mobile devices as both the viewer and the means of interaction.


As far as implementation goes, this would all be done by demarcating the virtual workspace on a real table with AR markers (fiducial markers) and combining the visual location of those markers with the gyroscope in an iPhone. I do not know if the full system would really be possible, though a subset of it probably is. But hey, dream big.
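As a sketch of one piece of this, assuming the fiducial markers at the table corners have already been detected in the camera frame, the mapping from table coordinates to screen pixels is just a homography. The sketch below uses OpenCV for the math; the pixel positions are invented stand-ins for real detections, and the marker detection and gyroscope fusion are not shown.

```python
import numpy as np
import cv2

# Table-plane coordinates (cm) of the four fiducial markers that demarcate the
# workspace, and where the camera sees them in the current frame (pixels).
# The pixel values are invented stand-ins for real marker detections.
table_corners = np.float32([[0, 0], [100, 0], [100, 60], [0, 60]])
image_corners = np.float32([[210, 95], [565, 120], [540, 400], [190, 360]])

# Homography from table coordinates to screen pixels.
H, _ = cv2.findHomography(table_corners, image_corners)

# A virtual 8.5x11 inch document (21.6 x 27.9 cm) placed at (30, 20) on the table.
doc = np.float32([[[30, 20]], [[51.6, 20]], [[51.6, 47.9]], [[30, 47.9]]])
doc_on_screen = cv2.perspectiveTransform(doc, H)
print(doc_on_screen.reshape(-1, 2))  # where to draw the document's outline
```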



#2
Predictive Theories for finger/thumb interactions on tablets -

Manoj pointed out that he has had trouble finding good predictive theory on thumb and finger interactions on tablets. This would likely involve studying hand grips on tablets, tablet sizes, how people have to hold them to perform certain tasks, and what sort of limitations this places on their ability to interact with the software on the tablets.
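One classic example of the kind of predictive model this could build on is Fitts's law, which predicts pointing time from target distance and width. The sketch below just evaluates that formula; the coefficients are placeholders that a study like this would need to fit specifically for thumb reach on a hand-held tablet.

```python
import math


def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Fitts's law: MT = a + b * log2(D / W + 1).
    The coefficients a and b are placeholders; a study like the one proposed
    would fit them for thumb use on a tablet of a given size and grip."""
    return a + b * math.log2(distance / width + 1)


# A far, small target takes longer to hit than a near, large one.
print(fitts_movement_time(distance=120, width=8))   # hard thumb reach
print(fitts_movement_time(distance=30, width=20))   # easy thumb tap
```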