Paper Reading #10: Sensing Foot Gestures from the Pocket


Reference
Authors: Jeremy Scott (Assistant Professor), David Dearman (PhD Student), Koji Yatani (PhD Student), and Khai N. Truong (Assistant Professor)
Affiliation: University of Toronto, Canada
Presentation: UIST '10, October 3-6, 2010, New York City, New York, USA

Summary
Hypothesis/Contents
Mobile phones have become the norm of modern technology, but the way we interact with them is visually demanding, requiring considerable attention and mental concentration. The paper hypothesizes that a phone's placement in a pocket allows foot movements to serve as an alternative to eyes-and-hands input: a less demanding, novel method of interacting with the device. The authors then present a system that can identify 10 different foot gestures with 86% accuracy.

Methods
The first user study was conducted with 16 right-footed participants to find the capabilities and limitations of foot-based interaction. The researchers examined foot gestures using a target selection task that required participants to select discrete targets along three axes of foot rotation (a simple sketch of such angular target discretization is given below). M-Series Vicon motion capture cameras were used to accurately capture and log the movement of a participant's foot. Targets were presented on a laptop, and participants selected and confirmed them by clicking and releasing a mouse button.
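
To make the selection task concrete, here is a minimal, hypothetical sketch of how a measured foot-rotation angle could be mapped onto discrete angular targets. The target count and range of motion are illustrative values of my own, not the paper's, and in the actual study confirmation was done with a mouse click rather than in software like this.

```python
from typing import Optional

def target_for_angle(angle_deg: float, num_targets: int = 5,
                     min_deg: float = 0.0, max_deg: float = 50.0) -> Optional[int]:
    """Return the index of the angular target containing angle_deg,
    or None if the foot is outside the tracked range of motion."""
    if not (min_deg <= angle_deg <= max_deg):
        return None
    width = (max_deg - min_deg) / num_targets          # angular width of each target
    return min(int((angle_deg - min_deg) // width), num_targets - 1)

# Example: a 32-degree heel rotation falls in the 4th of 5 targets (index 3).
print(target_for_angle(32.0))
```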
A second study with 6 right-footed participants used machine learning to identify foot gestures from the accelerometer of an iPhone carried in a pocket; participants wore jeans with pockets to hold the devices. Based on the design implications of the first study, this study focused specifically on plantar flexion and heel rotation gestures. A naive Bayes classifier was used to classify the users' foot movements. For each gesture type, each target was randomly presented 10 times for training and 50 times for testing; in total, each participant completed 100 gestures in the practice phase and 500 gestures in the testing phase. Four classification tests were conducted: across all gestures, across all gesture types, across all gestures in heel rotation, and across all gestures in plantar flexion. A rough sketch of the classification idea follows.
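
The sketch below assumes windowed accelerometer readings fed to a Gaussian naive Bayes classifier from scikit-learn. The feature set (per-axis mean, standard deviation, and energy) and the synthetic training data are my assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) array of x/y/z accelerometer readings."""
    return np.concatenate([
        window.mean(axis=0),          # per-axis mean
        window.std(axis=0),           # per-axis standard deviation
        (window ** 2).mean(axis=0),   # per-axis energy
    ])

# Hypothetical training data: one accelerometer window per recorded gesture,
# 10 training examples for each of 10 gesture classes, mirroring the study's
# 10-repetitions-per-target training phase.
rng = np.random.default_rng(0)
train_windows = [rng.normal(size=(100, 3)) for _ in range(100)]
train_labels = [g for g in range(10) for _ in range(10)]

clf = GaussianNB()
clf.fit([features(w) for w in train_windows], train_labels)

# Classify a new window and report the predicted gesture class.
print(clf.predict([features(rng.normal(size=(100, 3)))]))
```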

Results
Plantar flexion proved more accurate than dorsiflexion for producing foot gestures. Accuracy may have been affected by fatigue: participants preferred toe rotation gestures over gestures that required lifting the toes, and reported greater comfort with a larger range of motion. Except for dorsiflexion, participants tended to overshoot small angular targets more frequently than large ones. In the post-experiment interviews, participants identified heel rotation as the most comfortable gesture.
Placing the device at the side resulted in greater overall accuracy than the front and back pocket placements. Classifying gesture type without considering the target angles yielded considerably higher classification accuracy for the front- and back-pocketed iPhones.

Discussion

The research opens up more options for interacting with phones, but I am not sure it will have much impact on regular users given the complexity and limitations of gesture recognition. However, I feel it could serve as a means of interaction where visual and audio cues are suppressed, perhaps in military applications, or in applications that need to communicate through haptic feedback. The approach is also limited in that the phone must be carried in a pocket, and only certain placements work well.
