Paper Reading #23: User-Defined Motion Gestures for Mobile Interaction


Reference
Authors and Affiliations:
Jaime Ruiz, University of Waterloo, Waterloo, ON, Canada
Yang Li, Google Research, Mountain View, CA, USA
Edward Lank, University of Waterloo, Waterloo, ON, Canada
Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada.

Summary
Hypothesis
The paper presents the results of a guessability study that elicited end-user motion gestures for invoking commands on a smartphone. The authors demonstrate that consensus exists among users on the parameters of movement and on the mappings of motion gestures to commands. They use these results to develop a taxonomy for motion gestures and to specify an end-user-inspired motion gesture set.
Methods
The authors elicited input from 20 participants, asking each to design and perform a motion gesture with a smartphone device (a cause) that could be used to execute a task on the smartphone (an effect). Nineteen tasks were presented to the participants during the study. Participants used the think-aloud protocol and supplied subjective preference ratings for each gesture.
Participants were asked to perform each gesture five times on cue and then rate the gesture on a seven-point Likert scale.
Each session was video recorded, and custom software running on the phone logged the data stream generated by the accelerometer. For each participant, a transcript of the recorded video was created to extract individual quotes and to classify and label each motion gesture the participant designed. The quotes were then clustered to identify common themes using a bottom-up, inductive analysis approach.
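
Guessability studies of this kind typically quantify consensus using the agreement score introduced by Wobbrock et al.: for each task, the proposed gestures are grouped into equivalence classes, and the score grows as more participants converge on the same gesture. A minimal Python sketch of that calculation, with the gesture labels and counts entirely hypothetical:

    from collections import Counter

    def agreement_score(proposals):
        # Agreement score for one task (referent), following Wobbrock
        # et al.: sum over groups of identical gestures of
        # (group size / total proposals) squared.
        counts = Counter(proposals)
        total = len(proposals)
        return sum((n / total) ** 2 for n in counts.values())

    # Hypothetical example: 20 participants propose gestures for "answer call".
    answer_call = ["to_ear"] * 12 + ["shake"] * 5 + ["flip"] * 3
    print(f"{agreement_score(answer_call):.3f}")  # 0.445

A score of 1.0 means every participant proposed the same gesture; a score near 1/n means no two of the n participants agreed.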
Results
Participants related interacting with the mobile phone to interacting with a physical object: the gestures they designed consistently mimicked the use of a non-smartphone object.
Gestures that mimicked motions occurring during normal use of the phone were often perceived as being both a better fit to the task and easier to perform, and these gestures also showed consensus across participants.
The authors were also able to show that a participant's preference while navigating depends on the plane in which they are interacting. In addition, participants expressed a need for audio feedback to confirm what they were doing.

Contents
After conducting the study on motion gestures described above, the researchers present a taxonomy of motion gestures, shown in the table below.
[Table: taxonomy of motion gestures]
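
One of the taxonomy's physical-characteristics dimensions, kinematic impulse, classifies gestures as low, moderate, or high based on jerk, the rate of change of acceleration over the recorded trace. A rough Python sketch of how a logged accelerometer stream might be binned this way; the thresholds below are illustrative assumptions, not the paper's values:

    import numpy as np

    def classify_kinematic_impulse(accel, dt, low=0.6, high=6.0):
        # accel: (n_samples, 3) array of x/y/z acceleration in m/s^2,
        # sampled every dt seconds. Jerk is the finite-difference
        # derivative of acceleration; the gesture is labelled by its
        # peak jerk magnitude. The low/high thresholds here are
        # illustrative placeholders, not values from the paper.
        accel = np.asarray(accel, dtype=float)
        jerk = np.diff(accel, axis=0) / dt
        peak = np.linalg.norm(jerk, axis=1).max()
        if peak < low:
            return "low"
        if peak < high:
            return "moderate"
        return "high"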
Discussion
The paper was well organized and easy to follow and grasp. It focused on performing real tests and gathering both qualitative and quantitative data, so the research met a really high standard and the results were impressive to me. I think the taxonomy may vary across devices and as new forms of design come into existence. The research should have many applications if it can be streamlined into everyday smartphones, but I feel the new interactions will need some time for people to become familiar with them, so immediate success might not be seen.
