Wednesday, 20 September 2017

Assignment 13: Reading 9 - Long Gestures

Bibliography:
A. Chris Long, Jr., James A. Landay, Lawrence A. Rowe, and Joseph Michiels. Visual Similarity of Pen Gestures. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '00), 2000.

Summary:
The focus of this paper is a computational model for measuring the 'goodness' of a gesture, where goodness is defined by a combination of similarity to other gestures, ease of learning, memorability, and so on. The authors hope this model will help designers create better gestures by giving them feedback from a tool that computes goodness.

The paper first describes a set of pen-based devices such as the Apple Newton MessagePad and the 3Com PalmPilot. Pen gestures have been found to outperform keyboard commands for a variety of desktop and other applications. Experiments in perceptual similarity revealed that the logarithm of quantitative metrics correlates with judged similarity. The authors use a multidimensional scaling (MDS) technique called INDSCAL to reduce the dimensionality of the data and identify patterns by viewing plots.
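The MDS step can be sketched with classical (Torgerson) MDS, a simpler relative of the INDSCAL technique the paper actually used (INDSCAL additionally fits per-subject dimension weights). This is only an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def classical_mds(dissimilarity, n_components=2):
    """Classical (Torgerson) MDS: embed items in n_components dimensions
    so that Euclidean distances approximate the given dissimilarities.
    A simplified stand-in for INDSCAL, which also fits per-subject
    dimension weights."""
    n = dissimilarity.shape[0]
    # Double-center the squared dissimilarity matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dissimilarity ** 2) @ J
    # Top eigenvectors (scaled by sqrt of eigenvalues) give coordinates.
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0))
```

Given a matrix of pairwise gesture dissimilarities (e.g. derived from the triad judgments), the returned 2-D coordinates can be plotted to look for interpretable perceptual dimensions.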

In their first experiment, the authors presented users with triads of gestures and asked them to identify the one that was most different. Using MDS plots and regression analysis, the authors identified geometric properties that influenced perceived similarity and designed a model of gesture similarity that could predict how similar people would judge two gestures to be. It was found that short, wide gestures were perceived as very similar to narrow, tall ones, and that different people judged similarity using different features.
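The regression idea can be sketched as fitting weights that map differences in log-scaled geometric features to judged dissimilarity. The two features below (path length, aspect ratio) are hypothetical stand-ins for the paper's larger feature set, and the log scaling follows the correlation noted above:

```python
import numpy as np

def features(gesture):
    """gesture: (n, 2) array of pen points -> log-scaled feature vector.
    Illustrative features only, not the paper's actual set."""
    deltas = np.diff(gesture, axis=0)
    path_len = np.linalg.norm(deltas, axis=1).sum()
    w, h = gesture.max(axis=0) - gesture.min(axis=0)
    aspect = (h + 1e-9) / (w + 1e-9)
    return np.log(np.array([path_len + 1e-9, aspect]))

def feature_differences(pairs):
    """Per-pair absolute differences of log features."""
    return np.array([np.abs(features(a) - features(b)) for a, b in pairs])

def fit_similarity_model(pairs, judged_dissimilarity):
    """Least-squares weights (plus intercept) mapping feature
    differences to judged dissimilarity, in the spirit of the
    paper's regression model."""
    X = np.column_stack([feature_differences(pairs), np.ones(len(pairs))])
    w, *_ = np.linalg.lstsq(X, judged_dissimilarity, rcond=None)
    return w

def predict_dissimilarity(pairs, w):
    """Predicted dissimilarity for new gesture pairs."""
    X = np.column_stack([feature_differences(pairs), np.ones(len(pairs))])
    return X @ w
```

Once fit, such a model can score candidate gesture sets automatically, which is the "goodness"-feedback tool the authors envision.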

In their second experiment, the authors systematically varied features and observed how that affected perceived similarity. Gestures whose lines were horizontal and vertical were perceived as more similar to each other than gestures whose components were diagonal.

The authors tested the models developed from the two experiments, each against the other experiment's data; model 1 was found to perform slightly better than model 2. The authors conclude that human perception is very complicated, but a small number of features was enough to identify the three most salient dimensions. They hope their work will encourage further research in gesture similarity, memorability, and learnability.

Discussion:
This paper complements Rubine's features paper nicely. While Rubine's paper discusses which features help a gesture recognizer classify gestures, this paper focuses on what makes gestures easy for humans to perceive and learn.

I've always thought that designers come up with gestures and UI capabilities based on intuition and expensive, extensive user studies. I feel that a tool that can compute 'good gestures' would revolutionize the UI design field.

Since this is a fairly old paper, I wonder what the state of the art in this area is. Certainly, I have seen UI designers go with designs based only on intuition and A/B testing in recent days. I hope a system like this is already out there, or close to being out there!
