
Thursday, September 16, 2010

624 #8 $N Multistroke Recognizer - Anthony

Introduction
$N is a recognizer based on Wobbrock's $1 recognizer, extending it with several new abilities. The most important extension is that $N is a multistroke recognizer. This is achieved by connecting the endpoints of the multiple strokes together to form a unistroke, and by enumerating all the different possible ways the strokes could be connected. In essence, this treats the user's whole gesture, both when the pen is drawing on the screen and when it is in the air, as a unistroke. The second improvement is that $N introduces bounded rotation invariance, so that it can distinguish between rotated gestures when necessary. The third change is that $N can recognize the difference between 1D and 2D gestures. Finally, a few optimizations are included: the first uses the start angle of the gesture to constrain the search space, and the second uses the number of strokes to further restrict it.
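To make the stroke-joining idea concrete, here is a minimal Python sketch (my own names, not code from the paper) of how a multistroke gesture can be expanded into every candidate unistroke by trying each stroke order and each stroke direction:

# Hedged sketch: reduce a multistroke gesture to unistroke candidates by
# trying every stroke order and every stroke direction, then concatenating
# the points into a single stroke. Names are mine, not the paper's.
from itertools import permutations, product

def unistroke_permutations(strokes):
    """strokes: list of strokes, each a list of (x, y) points.
    Yields every candidate unistroke formed by choosing a stroke order
    and a direction (forward or reversed) for each stroke."""
    for order in permutations(strokes):
        # For each ordering, try each stroke forward (False) or reversed (True)
        for flips in product([False, True], repeat=len(order)):
            unistroke = []
            for stroke, flipped in zip(order, flips):
                unistroke.extend(reversed(stroke) if flipped else stroke)
            yield unistroke

# Example: a two-stroke "X" produces 2! * 2^2 = 8 candidate unistrokes.
x_gesture = [[(0, 0), (10, 10)], [(10, 0), (0, 10)]]
print(len(list(unistroke_permutations(x_gesture))))  # 8

Each candidate unistroke can then be compared against templates in much the same way $1 compares single strokes.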

Discussion
$N adds several useful changes to $1 that make it more flexible without losing much of $1's simplicity. What I like most is the way they decided to reduce multistroke gestures to a single stroke in order to reuse $1's existing methods to analyze them.

624 #7 Sketch Based Interfaces - Sezgin

Introduction
The Sezgin, Stahovich, and Davis system is meant to take sketch data, clean it up, and recognize basic shapes within a drawing. There are several steps to this process. The first is vertex detection, which requires filtering out noise to find vertices on straight-edged objects. The second is detecting curves and approximating them with Bézier curves. Finally, the figures are beautified and basic object recognition is performed.
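As an illustration of the vertex-detection idea, here is a simplified Python sketch of finding corner candidates at points where the pen moves slowly or turns sharply. The thresholds and names are my own, and this is only the general intuition, not the paper's exact average-based filtering:

# Hedged sketch: points drawn slowly or with a sharp direction change are
# corner candidates. Simplified illustration only; thresholds are arbitrary.
import math

def corner_candidates(points, times, speed_frac=0.4, curv_thresh=0.75):
    """points: list of (x, y); times: matching timestamps in seconds.
    Returns indices of likely vertices."""
    n = len(points)
    # Pen speed at each interior point (distance over elapsed time)
    speeds = [0.0] * n
    for i in range(1, n - 1):
        d = math.dist(points[i - 1], points[i + 1])
        dt = (times[i + 1] - times[i - 1]) or 1e-6
        speeds[i] = d / dt
    mean_speed = sum(speeds) / max(n - 2, 1)
    # Curvature approximated by the turning angle at each interior point
    curvatures = [0.0] * n
    for i in range(1, n - 1):
        a1 = math.atan2(points[i][1] - points[i - 1][1],
                        points[i][0] - points[i - 1][0])
        a2 = math.atan2(points[i + 1][1] - points[i][1],
                        points[i + 1][0] - points[i][0])
        curvatures[i] = abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
    # A point is a candidate if the pen was slow there or turned sharply
    return [i for i in range(1, n - 1)
            if speeds[i] < speed_frac * mean_speed
            or curvatures[i] > curv_thresh]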

Discussion
This paper gives a fair amount of description about the vertex and curve detection processes, however it does not give very many specifics on beautification or recognition or for that matter what sort of recognition applications this system would be used for. I can only assume it was an early paper that is further expounded upon later, or that most of the other concepts were explored sufficiently in the related work.

624 #6 Protractor - Li

Introduction
Protractor is a modified version of Wobbrock's $1 recognizer that is meant to decrease the memory and processing requirements for unistroke gestures, so that gesture recognition is feasible on systems with less capable hardware, such as mobile devices like Android phones. Li changes the classification step to measure the total angular distance between pre-processed templates and a new, unclassified gesture. This angular measure is enhanced by computing, in closed form, the optimal angle to rotate the template so that it best matches the unknown gesture.
Additionally, Protractor offers a simple form of orientation sensitivity by aligning each gesture's orientation to the nearest of eight cardinal directions.
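Here is a minimal Python sketch of the closed-form alignment idea, assuming both gestures have already been resampled, centered, and normalized to unit-length vectors; the names are mine, not Li's:

# Hedged sketch of Protractor-style matching: for two equal-length,
# unit-normalized point vectors, the rotation angle that maximizes their
# cosine similarity can be computed directly, with no iterative search.
import math

def optimal_cosine_similarity(gesture, template):
    """gesture, template: equal-length lists of (x, y), already resampled,
    centered, and scaled so each flattened vector has unit length.
    Returns (similarity, best rotation angle)."""
    a = sum(gx * tx + gy * ty for (gx, gy), (tx, ty) in zip(gesture, template))
    b = sum(gx * ty - gy * tx for (gx, gy), (tx, ty) in zip(gesture, template))
    angle = math.atan2(b, a)                                 # best rotation
    similarity = a * math.cos(angle) + b * math.sin(angle)   # = sqrt(a^2 + b^2)
    return similarity, angle

Because the best rotation falls out of a single pass over the points, no iterative search over candidate angles is needed, which is a major source of Protractor's speed advantage over $1.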

Discussion
Protractor is a practical implementation of gesture recognition for mobile phones. I think that mobile applications are the most obvious use for simple gestures and having an efficient and simple gesture recognition system makes including gestures into mobile applications much easier for developers.

624 #5 $1 Recognizer - Wobbrock

Introduction
Wobbrock's $1 recognizer provides a lightweight, quick, and easy gesture recognizer that can be used without requiring much technical knowledge on the developer's end. The recognizer works by taking one example gesture for each template, then comparing the point-to-point distance between an unclassified gesture and each template and determining which matches most closely. To make certain that factors such as gesture speed and sampling rate do not affect recognition, the gestures are resampled to between 32 and 256 evenly spaced points that are linearly interpolated from the original gesture data. The indicative angle of the gesture (the angle between the first point and the centroid) is then found, and the gesture is rotated by that angle so that all examples of the gesture can be compared regardless of orientation. The points are then scaled to fit a bounding square and translated to the origin, and the distance between corresponding points is calculated and converted into a 0-1 score. $1 is therefore rotation, scale, and position invariant.
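Here is a rough Python sketch of that pipeline, assuming simple (x, y) point lists. The constants and names are mine (the paper allows 32-256 resampled points, 64 here), and the golden-section search $1 uses to fine-tune the rotation is omitted for brevity:

# Hedged sketch of the $1 pipeline: resample, rotate by the indicative
# angle, scale to a square, translate to the origin, then score templates
# by the average distance between corresponding points.
import math

N, SIZE = 64, 250.0

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=N):
    interval, d, out = path_length(pts) / (n - 1), 0.0, [pts[0]]
    pts, i = list(pts), 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # q becomes the start of the next segment
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def centroid(pts):
    return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))

def rotate_by(pts, angle):
    cx, cy = centroid(pts)
    c, s = math.cos(angle), math.sin(angle)
    return [((x - cx) * c - (y - cy) * s + cx,
             (x - cx) * s + (y - cy) * c + cy) for x, y in pts]

def normalize(pts):
    pts = resample(pts)
    cx, cy = centroid(pts)
    indicative = math.atan2(cy - pts[0][1], cx - pts[0][0])
    pts = rotate_by(pts, -indicative)                     # rotate to zero
    xs, ys = [x for x, _ in pts], [y for _, y in pts]
    w, h = (max(xs) - min(xs)) or 1e-6, (max(ys) - min(ys)) or 1e-6
    pts = [(x * SIZE / w, y * SIZE / h) for x, y in pts]  # scale to square
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]             # translate to origin

def average_distance(a, b):
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

A candidate gesture and every template are run through normalize(), the template with the smallest average_distance() wins, and $1 converts that distance d into a score with 1 - d / (0.5 * sqrt(2) * SIZE).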

Discussion
I very much like the idea of $1 because it would be easy to implement quickly in any programming context. Too often advanced computing concepts, such as gesture recognition, require specific libraries and are limited to specific platforms, which limits their usefulness and hampers finding new applications of the technology. I hope to implement the $N extension of the $1 recognizer so I can see exactly what it is capable of, particularly on a handheld system that accepts finger input.

Tuesday, September 7, 2010

624 #3 Gesture Design Advice: Chris Long

Introduction
In this paper the authors put forward quill, a tool for designing gestures for use in pen-based applications. quill lets the designer build gestures, then advises the designer as to whether the gestures they have created are easy to distinguish, both for the gesture recognition algorithm and for the humans who will eventually use the software.

Discussion
The most interesting idea, which is not actually detailed in this paper, is the algorithm for determining which gestures might be confused by users. quill itself seems to be a solution looking for a problem, as I cannot think of an occasion where a designer would be designing gestures without being expert enough to check the distinguishability of the gestures on their own. Also, the question of when to alert users to errors seems to me to be a well covered problem that doesn't really need to be addressed in this paper.


Monday, September 6, 2010

624 #2 Specifying Gestures By Examples: Dean Rubine

Rubine specifies a technique for gesture recognition using 13 features of the gesture. Using these 13 features with a linear classifier, it can generally recognize a gesture with around 90% accuracy given 15 pre-defined examples of each class. These gestures must have a known start and stop point in order to be classified, so only individual parts of a sketch can be used.
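Here is a minimal Python sketch of the classification step, assuming the per-class weights have already been trained (Rubine derives them from the per-class feature means and a common covariance matrix, which is omitted here); the names are mine:

# Hedged sketch of Rubine-style linear classification: each class c has
# weights (w_c0, w_c1, ..., w_c13), and a new gesture's feature vector f
# is assigned to the class with the largest score w_c0 + sum_i(w_ci * f_i).

def classify(features, weights):
    """features: list of the 13 feature values for the new gesture.
    weights: dict mapping class name -> list of 14 weights (bias first).
    Returns the class with the highest linear score."""
    def score(w):
        return w[0] + sum(wi * fi for wi, fi in zip(w[1:], features))
    return max(weights, key=lambda cls: score(weights[cls]))

The 13 features themselves include quantities such as the cosine and sine of the initial angle, the total path length, the total angle traversed, and the duration of the gesture.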

It will be interesting to implement Rubine's recognizer, but what will be more interesting is using some of the extensions discussed at the end of the paper. The most interesting would be implementing this with multitouch to create my own multi-finger gesture recognizer.

Thursday, September 2, 2010

624 #1 Gesture Recognition: Tracy Hammond

This paper is an overview of a few basic pen gesture recognition techniques. Note that these techniques are generally only good for recognizing a single-stroke gesture and nothing more complex than that.

The first technique introduced is Rubine's features. These features can distinguish gesture classes using around 15 training examples per class. Rubine's technique uses a linear classifier to compare a new gesture to all known classes and can distinguish between similar objects at different orientations. Long's feature set can be added to Rubine's, but it does not add much additional capability at the cost of additional complexity, so it is not often used.

The second technique addressed is Wobbrock's "$1 Gesture Recognizer". Wobbrock's method is simpler than Rubine's; however, it is slower and unable to differentiate gestures that differ only by rotation or scale.

Tuesday, August 31, 2010

624 #0 Self Introduction

Email: eyce9000 at gmail dot com
Standing: 1st Year MSCS
Taking this class: Because sketch recognition/machine learning are things I want to use in projects.
Experience: Not much directly useful to SR, but I am a sharp cookie (mmm cookies)
Doing in 10 years: Living somewhere by the sea, making awesome gizmos
Next Tech Advance: Memristors!
Favorite Undergrad Course: Figure Drawing.
Favorite Movie: Kung Fu Panda, mostly because it has great character animation and because Jack Black is awesome.
Time Traveler: I would travel back to meet Steve Jobs and Wozniak right when they started Apple. Then I would go meet Bill Gates when he started Microsoft.
Interesting Fact: I lived in Italy when I was little, and got engaged this summer!