
Showing posts from 2011

Paper Reading #32: Taking advice from intelligent systems: the double-edged sword of explanations

Reference Authors: Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, Daniel Gruen Affiliation: IBM Research, 1 Rogers St, Cambridge, MA 02142 Presentation: IUI 2011, February 13–16, 2011, Palo Alto, California, USA. Summary Hypothesis Research on intelligent systems has emphasized the benefits of providing explanations along with recommendations. But can explanations lead users to make incorrect decisions? The paper explores this question in a controlled experimental study with 18 professional network security analysts performing an incident classification task using a prototype cybersecurity system. It further addresses the role of explanations in the context of correct and incorrect recommendations, in a mission-critical setting. Methods Nineteen analysts with a minimum of three years' experience participated in the study on the effects of a user's response to a recommendation made by an intelligent system. Following the training, analysts ...

Paper Reading #31: Identifying Emotional States using Keystroke Dynamics

Reference Authors: Clayton Epp, Michael Lippold, and Regan L. Mandryk Affiliation: Department of Computer Science, University of Saskatchewan, Saskatoon, Canada Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. Summary Hypothesis Current approaches to identifying emotions suffer from two main problems that limit their applicability: they can be invasive, and they can require costly equipment. The authors present a solution that determines user emotion by analyzing the rhythm of typing patterns on a standard keyboard. Contents Various methods of evaluating emotional state have seen varying rates of success, but they still exhibit one or both of the two main problems preventing wide-scale use: they can be intrusive to the user, and they can require specialized equipment that is expensive and not found in typical home or office environments. A system based on keystrokes is more intuitive, unobtrusive and has a w...
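
The paper's central idea is that the timing of ordinary typing carries an emotional signal. As a rough illustration of the kind of features such a system extracts, here is a minimal Python sketch (not the authors' implementation; the event fields and feature names are assumptions) computing the two classic keystroke-dynamics measures, dwell time and flight time:

```python
# Minimal sketch (assumed names, not the paper's code) of the two classic
# keystroke-dynamics measures: dwell time (how long a key is held) and
# flight time (the gap between releasing one key and pressing the next).
from dataclasses import dataclass

@dataclass
class KeyEvent:
    key: str
    down_at: float  # key-press timestamp, seconds
    up_at: float    # key-release timestamp, seconds

def timing_features(events: list[KeyEvent]) -> dict[str, float]:
    """Summarize the rhythm of one typing sample."""
    dwells = [e.up_at - e.down_at for e in events]
    flights = [b.down_at - a.up_at for a, b in zip(events, events[1:])]
    return {
        "mean_dwell": sum(dwells) / len(dwells),
        "mean_flight": sum(flights) / max(len(flights), 1),
        "keys_per_second": len(events) / (events[-1].up_at - events[0].down_at),
    }
```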

Paper Reading #30: Life “Modes” in Social Media

Reference Authors and Affiliations: Fatih Kursat Ozenc, Carnegie Mellon University School of Design, 5000 Forbes Avenue, Pittsburgh, PA; Shelly D. Farnham, Yahoo!, 701 First Avenue, Sunnyvale, CA Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. Summary Hypothesis The paper hypothesizes that people organize their social worlds based on life ‘modes’, i.e., family, work and social. People strategically use communication technologies to manage intimacy levels within these modes, and levels of permeability through the boundaries between these modes. It also emphasizes a strong need for focused sharing, the ability to share content only with appropriate audiences within certain areas of life. Contents The paper explores how to leverage natural models of social organization from people’s lives to improve their experiences of social media streams. The design research project examines how people might best a) organize their online content, b) transition between...

Paper Reading #29: Usable Gestures for Blind People: Understanding Preference and Performance

Reference Authors and Affiliations: Shaun K. Kane and Jacob O. Wobbrock, The Information School, DUB Group, University of Washington, Seattle, WA 98195, USA; Richard E. Ladner, Computer Science & Engineering, DUB Group, University of Washington, Seattle, WA 98195, USA Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada Summary Hypothesis The paper suggests that blind people have different gesture preferences than sighted people, including preferences for edge-based gestures and gestures that involve tapping virtual keys on a keyboard. The authors hypothesize differences in the speed, size, and shape of gestures performed by blind people versus those performed by sighted people. Contents Accessible touch screens still present challenges to both users and designers. Users must be able to learn new touch screen applications quickly and effectively, while designers must be able to implement accessible touch screen interaction techniques for a diverse range of devices and application...

Paper Reading #28: Experimental Analysis of Touch-Screen Gesture Designs in Mobile Environments

Reference Authors and Affiliations: Andrew Bragdon, Brown University, Providence, RI, USA; Eugene Nelson, Brown University, Providence, RI, USA; Yang Li, Google Research, Mountain View, CA, USA; Ken Hinckley, Microsoft Research, Redmond, WA, USA Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. Summary Hypothesis The paper hypothesizes that under various levels of environmental demands on attention, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons when the user's attention is focused on the phone. Furthermore, the authors propose that the speed and accuracy of bezel gestures will not be significantly affected by environment, and that some gestures could be articulated eyes-free, with one hand. Contents The authors examine four factors: moding technique (hard button, soft button and bezel-based), gesture type (mark-based and free-form), user's motor activity (sitting and walking), and distraction l...

Paper Reading #27: Sensing Cognitive Multitasking for a Brain-Based Adaptive User Interface

Reference Authors and Affiliations: Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, and Robert J.K. Jacob, Tufts University, Computer Science, 161 College Ave., Medford, MA; Angelo Sassaroli and Sergio Fantini, Tufts University, Biomedical Engineering, 4 Colby St., Medford, MA; Paul Schermerhorn, Indiana University, Cognitive Science, Bloomington, IN; Audrey Girouard, Queen's University, School of Computing, 25 Union St., Kingston, ON K7L 3N6 Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada Summary Hypothesis The paper describes two experiments leading toward a system that detects cognitive multitasking processes and uses this information as input to an adaptive interface. The authors then present a human-robot system as a proof of concept that uses real-time cognitive state information as input and adapts in response. They hypothesize that the prototype system serves as ...

Paper Reading #26: Embodiment in Brain-Computer Interaction

Reference Authors: Kenton O’Hara, Abigail Sellen, Richard Harper Affiliation: Microsoft Research, 7 J J Thomson Avenue, Cambridge, UK Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada Summary Hypothesis The paper highlights the importance of considering the body in BCI, and not simply what is going on in the head. The authors hypothesize that people use bodily actions both to facilitate control of brain activity and to make their actions and intentions visible to, and interpretable by, others playing and watching the game; this allows those actions to be socially organised, understood and coordinated with others, and through it social relationships can be played out. Contents The paper discusses findings from a real-world study of a BCI-controlled game played as a social activity in the home. The study draws on the philosophical and analytic concerns found in CSCW, where techniques for analysing the social in all its embodied forms are well developed. For...

Paper Reading #25: TwitInfo: Aggregating and Visualizing Microblogs for Event Exploration

Reference Authors: Adam Marcus, Michael S. Bernstein, Osama Badar, David R. Karger, Samuel Madden, Robert C. Miller Affiliation: MIT CSAIL, 32 Vassar St., Cambridge, MA Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. Summary Hypothesis For people trying to understand events by querying services like Twitter, a chronological log of posts makes it very difficult to get a detailed understanding of an event. This paper presents TwitInfo, a system for visualizing and summarizing events on Twitter. Contents TwitInfo allows users to browse a large collection of tweets using a timeline-based display that highlights peaks of high tweet activity. A novel streaming algorithm automatically discovers these peaks and labels them meaningfully using text from the tweets. Users can drill down to subevents, and explore further via geolocation, sentiment, and popular URLs. The authors present the algorithm and user interface together as the TwitInfo system. An evaluation of the TwitInfo system revealed th...
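
The preview mentions a streaming peak-detection algorithm. As a hedged sketch of the general idea, the Python below flags a time bin as a peak when its tweet count jumps well above an exponentially weighted running mean, in the spirit of TwitInfo's description; the smoothing factor, threshold, and TCP-style initialization are illustrative choices, not the paper's exact values:

```python
# Hedged sketch of an EWMA-based streaming peak detector in the spirit of
# TwitInfo's description (illustrative parameters, not the paper's values).
def detect_peaks(counts, alpha=0.125, threshold=2.0):
    """Yield indices of time bins whose tweet count spikes above trend."""
    mean = float(counts[0])
    meandev = mean / 2.0  # coarse initial deviation, as in TCP RTT estimation
    for i, c in enumerate(counts[1:], start=1):
        if meandev > 0 and (c - mean) / meandev > threshold:
            yield i  # bin i is part of a burst of unusually high activity
        diff = abs(c - mean)
        mean = (1 - alpha) * mean + alpha * c
        meandev = (1 - alpha) * meandev + alpha * diff

# list(detect_peaks([10, 12, 11, 90, 95, 12, 11])) -> [3, 4]: the two bins
# holding the spike, which the real system labels with frequent tweet terms.
```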

Paper Reading #24: Gesture Avatar: A Technique for Operating Mobile User Interfaces Using Gestures

Reference Authors and Affiliations: Hao Lü, Computer Science and Engineering, DUB Group, University of Washington, Seattle, WA 98195; Yang Li, Google Research, 1600 Amphitheatre Parkway, Mountain View, CA 94043 Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. Summary Hypothesis The paper presents Gesture Avatar and hypothesizes the following about its performance: 1) Gesture Avatar will be slower than Shift on larger targets, but faster on small targets. 2) Gesture Avatar will have fewer errors than Shift. 3) Mobile situations such as walking will decrease the time performance and increase the error rate of Shift, but have little influence on Gesture Avatar. Contents The paper presents Gesture Avatar, a novel interaction technique that allows users to operate existing arbitrary user interfaces using gestures. It leverages the visibility of graphical user interfaces and the casual interaction of gestures. Due to the low precision of finger input, small u...
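
Since the preview cuts off just as it reaches the mechanics, here is a deliberately simplified Python sketch of the underlying idea: combine what a gesture recognizer says the user drew with where they drew it, so a tiny link can be acquired without pixel-precise touches. All names are hypothetical, and the real technique uses a richer probabilistic match than this two-part score:

```python
# Deliberately simplified sketch (hypothetical names, not the paper's code):
# pick the intended small target by combining the recognized character of
# the drawn gesture with the gesture's location on screen.
import math

def pick_target(recognized_char: str, gesture_center: tuple[float, float],
                targets: list[tuple[str, tuple[float, float]]]):
    """targets: (label, (x, y)) pairs; prefer a label match, then proximity."""
    def score(target):
        label, (x, y) = target
        distance = math.hypot(x - gesture_center[0], y - gesture_center[1])
        mismatch = 0 if label.lower().startswith(recognized_char.lower()) else 1
        return (mismatch, distance)  # label match dominates; distance breaks ties
    return min(targets, key=score)

# Drawing an "s" near (100, 40) acquires the tiny "Search" link:
print(pick_target("s", (100.0, 40.0),
                  [("Search", (96.0, 38.0)), ("Home", (300.0, 40.0))]))
```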

Paper Reading #23: User-Defined Motion Gestures for Mobile Interaction

Reference Authors and Affiliations: Jaime Ruiz, University of Waterloo, Waterloo, ON, Canada; Yang Li, Google Research, Mountain View, CA, USA; Edward Lank, University of Waterloo, Waterloo, ON, Canada Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. Summary Hypothesis The paper presents the results of a guessability study that elicits end-user motion gestures to invoke commands on a smartphone device. The authors demonstrate that consensus exists on parameters of movement and on mappings of motion gestures onto commands, and they use this consensus to develop a taxonomy for motion gestures and to specify an end-user inspired motion gesture set. Methods The authors elicited input from 20 participants, asking them to design and perform a motion gesture with a smartphone device (a cause) that could be used to execute a task on the smartphone (an effect). Nineteen tasks were presented to the participants during the study. Participants used the think-aloud protocol and supplied subjective preference ratings f...
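
The "consensus" measured in this style of elicitation study is typically quantified with the agreement score from Wobbrock et al.'s guessability methodology, which this line of work builds on. A minimal Python sketch of that computation (variable names are mine):

```python
# Agreement score for one elicitation task: partition participants'
# proposals into groups of identical gestures and sum squared group shares.
from collections import Counter

def agreement(proposals: list[str]) -> float:
    """proposals: one gesture label per participant for a single task."""
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())

# 20 participants: 12 propose "shake", 5 "flip", 3 "tap"
print(agreement(["shake"] * 12 + ["flip"] * 5 + ["tap"] * 3))  # 0.445
```

A score of 1.0 means every participant proposed the same gesture for a task; scores near 1/n mean almost no consensus.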

Paper Reading #22: Mid-air Pan-and-Zoom on Wall-sized Displays

Reference Authors: Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, Wendy Mackay Affiliation: LRI, Univ. Paris-Sud & CNRS, INRIA, F-91405 Orsay, France Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada Summary Contents Complex tasks such as pan-zoom navigation have received little attention on wall-sized, high-resolution displays. Building upon empirical data gathered from studies of pan-and-zoom on desktop computers and studies of remote pointing, the paper identifies three key factors for the design of mid-air pan-and-zoom techniques: uni- vs. bimanual interaction, linear vs. circular movements, and level of guidance to accomplish the gestures in mid-air. The display was built from 32 LCD panels, each 30 inches in the diagonal, and the software was the ZVTM toolkit running on Mac OS X. Hypothesis The paper presents seven hypotheses: two-handed gestures will be faster than one-handed gestures because panning and zooming are complementary actions, integra...

Paper Reading #21: Human Model Evaluation in Interactive Supervised Learning

Reference Authors: Rebecca Fiebrink and Perry R. Cook (Department of Computer Science and Department of Music) and Daniel Trueman, Department of Music, Princeton University, Princeton, New Jersey, USA Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. Summary Hypothesis The paper presents a study of the evaluation practices of end users interactively building supervised learning systems for real-world gesture analysis problems. The authors examine users’ model evaluation criteria, which span conventionally relevant criteria such as accuracy and cost, as well as novel criteria such as unexpectedness. Contents The researchers develop and test a software tool, called the Wekinator, that implements basic elements of supervised learning in a machine learning environment to recognize physical gestures and label them as a certain input. This application was chosen because gesture modeling is one of the most common applications of machine learning, and music nat...
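
To make the interactive workflow concrete, here is a hedged Python sketch of the train-try-retrain loop such tools support (scikit-learn stands in here; this is not the Wekinator's own code, which is a separate application): the user demonstrates labeled gesture examples, trains a model, evaluates it by direct trial, and adds corrective examples where it misbehaves.

```python
# Illustrative sketch of an interactive supervised learning loop,
# assuming scikit-learn is available.
from sklearn.neighbors import KNeighborsClassifier

examples: list[list[float]] = []   # user-demonstrated feature vectors
labels: list[str] = []             # the gesture label for each example

def add_example(features: list[float], label: str) -> None:
    """Record one demonstrated gesture."""
    examples.append(features)
    labels.append(label)

def retrain() -> KNeighborsClassifier:
    """Rebuild the model from everything demonstrated so far."""
    return KNeighborsClassifier(n_neighbors=1).fit(examples, labels)

# The user demonstrates two gestures, trains, then evaluates by trying it out:
add_example([0.9, 0.1], "strum")
add_example([0.1, 0.8], "pluck")
model = retrain()
print(model.predict([[0.85, 0.2]]))  # if this feels wrong, demonstrate more
```

The point the paper studies is exactly this loop: direct trial, not a held-out test set, is how end users judge whether the model is good enough.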

Paper Reading #20: The Aligned Rank Transform for Nonparametric Factorial Analyses Using Only ANOVA Procedures

Reference Authors and Affiliations: Leah Findlater and Jacob O. Wobbrock, The Information School, DUB Group, University of Washington, Seattle, WA 98195, USA; Darren Gergle, School of Communication, Northwestern University, Evanston, IL 60208, USA; James J. Higgins, Department of Statistics, Kansas State University, Manhattan, KS 66506, USA Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. Summary Hypothesis: The paper presents the Aligned Rank Transform (ART) for nonparametric factorial data analysis in HCI. The authors propose a preprocessing step that “aligns” the data before applying averaged ranks, after which common ANOVA procedures can be used, making the ART accessible to anyone familiar with the F-test. They hypothesize that researchers familiar only with ANOVA can use, interpret, and report results from the ART. Contents Conover and Iman’s Rank Transform (RT) uses the parametric F-test on the ranks, resulting in a nonparametric factorial procedu...
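
The align-then-rank recipe is concrete enough to sketch. Assuming a long-format table with one row per trial, the steps for a single main effect in a two-factor design look roughly like the following Python (column names are hypothetical, and this is an illustration of the recipe rather than a replacement for the vetted ARTool software the authors provide):

```python
# Hedged sketch of ART's align-then-rank steps for one MAIN effect in a
# factorial design (interactions use a different effect estimate).
import pandas as pd
from scipy import stats

def align_and_rank(df: pd.DataFrame, response: str,
                   effect: str, all_factors: list[str]):
    """Align df[response] for one main effect, then assign averaged ranks."""
    grand_mean = df[response].mean()
    # 1. Residuals: subtract each observation's full-factorial cell mean,
    #    stripping out every main effect and interaction.
    residuals = df[response] - df.groupby(all_factors)[response].transform("mean")
    # 2. Add back the estimated effect of interest: its marginal means
    #    minus the grand mean.
    estimated = df.groupby(effect)[response].transform("mean") - grand_mean
    aligned = residuals + estimated
    # 3. Rank the aligned responses (ties receive averaged ranks).
    return stats.rankdata(aligned)

# e.g. df["art_technique"] = align_and_rank(df, "time", "technique",
#                                           ["technique", "posture"])
```

A standard factorial ANOVA is then run on the aligned ranks, interpreting only the effect the data were aligned for; the alignment is repeated separately for each main effect and interaction.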