Posts

Paper Reading #32: Taking advice from intelligent systems: the double-edged sword of explanations

Reference
Authors: Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, Daniel Gruen
Affiliation: IBM Research, 1 Rogers St, Cambridge, MA 02142
Presentation: IUI 2011, February 13–16, 2011, Palo Alto, California, USA.

Summary

Hypothesis
Research on intelligent systems has emphasized the benefits of providing explanations along with recommendations. But can explanations lead users to make incorrect decisions? This paper explored this question in a controlled experimental study with 18 professional network security analysts performing an incident-classification task using a prototype cyber-security system. It further addresses the role of explanations in the context of correct and incorrect recommendations in a mission-critical setting.

Methods
Nineteen analysts with a minimum of three years' experience participated in the study of users' responses to recommendations made by an intelligent system. Following the training, analysts ...

Paper Reading #31: Identifying Emotional States using Keystroke Dynamics

Reference
Authors: Clayton Epp, Michael Lippold, and Regan L. Mandryk
Affiliation: Department of Computer Science, University of Saskatchewan, Saskatoon, Canada
Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada.

Summary

Hypothesis
Current approaches to identifying emotions suffer from two main problems that limit their applicability: they can be invasive, and they can require costly equipment. The authors present a solution that determines user emotion by analyzing the rhythm of typing patterns on a standard keyboard.

Contents
Various methods of evaluating emotional state have seen varying rates of success, but they still exhibit one or both of two main problems preventing wide-scale use: they can be intrusive to the user, and they can require specialized equipment that is expensive and not found in typical home or office environments. This system, using keystrokes, is more intuitive and unobtrusive, and has a w...
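The typing-rhythm idea above can be made concrete with a small feature-extraction sketch. This is an illustrative assumption on my part, not the paper's actual pipeline: the event format, feature names, and the choice of dwell/flight times are hypothetical, though such timing features are the standard building blocks of keystroke dynamics.

```python
# Minimal sketch of keystroke-dynamics feature extraction, in the spirit of
# the approach described above. Event format and feature names are
# illustrative assumptions, not the authors' implementation.
from statistics import mean, stdev

def extract_features(events):
    """Compute simple timing features from keystroke events.

    events: list of (key, press_time, release_time) tuples, times in
    seconds, ordered by press time.
    """
    # Dwell time: how long each key is held down.
    dwells = [release - press for _, press, release in events]
    # Flight time: gap between releasing one key and pressing the next.
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {
        "dwell_mean": mean(dwells),
        "dwell_std": stdev(dwells) if len(dwells) > 1 else 0.0,
        "flight_mean": mean(flights) if flights else 0.0,
    }

# Example: three keystrokes typed at a steady rhythm.
feats = extract_features([
    ("h", 0.00, 0.10),
    ("i", 0.25, 0.34),
    ("!", 0.50, 0.61),
])
print(feats["dwell_mean"])   # average key-hold duration, ~0.1 s
```

A classifier would then map such feature vectors, collected over many typing sessions, to emotion labels; the appeal is that only an ordinary keyboard and timestamps are required.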

Paper Reading #30: Life “Modes” in Social Media

Reference
Authors and Affiliations: Fatih Kursat Ozenc, Carnegie Mellon University School of Design, 5000 Forbes Avenue, Pittsburgh, PA; Shelly D. Farnham, Yahoo!, 701 First Avenue, Sunnyvale, CA
Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada.

Summary

Hypothesis
This paper hypothesizes that people organize their social worlds based on life 'modes': family, work, and social. People strategically use communication technologies to manage intimacy levels within these modes and the permeability of the boundaries between them. It also emphasizes a strong need for focused sharing: the ability to share content only with appropriate audiences within certain areas of life.

Contents
The paper explores how to leverage natural models of social organization from people's lives to improve their experiences of social media streams. The design research project examines how people might best a) organize their online content, b) transition between...

Paper Reading #29: Usable Gestures for Blind People: Understanding Preference and Performance

Reference
Authors and Affiliations: Shaun K. Kane and Jacob O. Wobbrock, The Information School, DUB Group, University of Washington, Seattle, WA 98195, USA; Richard E. Ladner, Computer Science & Engineering, DUB Group, University of Washington, Seattle, WA 98195, USA
Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada

Summary

Hypothesis
The paper suggests that blind people have different gesture preferences than sighted people, including preferences for edge-based gestures and for gestures that involve tapping virtual keys on a keyboard. The authors hypothesize differences in the speed, size, and shape of gestures performed by blind people versus those performed by sighted people.

Contents
Accessible touch screens still present challenges to both users and designers. Users must be able to learn new touch-screen applications quickly and effectively, while designers must be able to implement accessible touch-screen interaction techniques for a diverse range of devices and application...

Paper Reading #28: Experimental Analysis of Touch-Screen Gesture Designs in Mobile Environments

Reference
Authors and Affiliations: Andrew Bragdon, Brown University, Providence, RI, USA; Eugene Nelson, Brown University, Providence, RI, USA; Yang Li, Google Research, Mountain View, CA, USA; Ken Hinckley, Microsoft Research, Redmond, WA, USA
Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada.

Summary

Hypothesis
The paper hypothesizes that under various levels of environmental demand on attention, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons when the user's attention is focused on the phone. Furthermore, the authors propose that the speed and accuracy of bezel gestures will not be significantly affected by environment, and that some gestures can be articulated eyes-free, with one hand.

Contents
The authors examine four factors: moding technique (hard button, soft button, and bezel-based), gesture type (mark-based and free-form), the user's motor activity (sitting and walking), and distraction l...
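The "bezel-based" moding technique in the factor list can be illustrated with a minimal sketch. This is an assumption about the general idea, not the authors' implementation: a gesture is treated as a bezel gesture when its first touch point falls within a thin margin along the screen edge, which is what makes it findable eyes-free.

```python
# Illustrative sketch (not the authors' code) of bezel-based mode detection:
# a touch that begins within `margin` pixels of any screen edge is
# classified as starting a bezel gesture.

def is_bezel_gesture(x, y, width, height, margin=10):
    """Return True if a touch at (x, y) starts inside the bezel margin
    of a width x height screen (all values in pixels)."""
    return (
        x <= margin or y <= margin or
        x >= width - margin or y >= height - margin
    )

print(is_bezel_gesture(3, 240, 480, 800))    # starts at left edge -> True
print(is_bezel_gesture(240, 400, 480, 800))  # starts mid-screen   -> False
```

Because the edge of the device can be located by touch alone, this kind of moding needs no visual target, unlike a soft button.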

Paper Reading #27: Sensing Cognitive Multitasking for a Brain-Based Adaptive User Interface

Reference
Authors and Affiliations: Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, and Robert J.K. Jacob, Tufts University, Computer Science, 161 College Ave., Medford, MA; Angelo Sassaroli and Sergio Fantini, Tufts University, Biomedical Engineering, 4 Colby St., Medford, MA; Paul Schermerhorn, Indiana University, Cognitive Science, Bloomington, IN; Audrey Girouard, Queen's University, School of Computing, 25 Union St., Kingston, ON K7L 3N6
Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada

Summary

Hypothesis
The paper describes two experiments leading toward a system that detects cognitive multitasking processes and uses this information as input to an adaptive interface. The authors then present a human-robot system as a proof of concept that uses real-time cognitive-state information as input and adapts in response. They hypothesize that the prototype system serves as ...

Paper Reading #26: Embodiment in Brain-Computer Interaction

Reference
Authors: Kenton O’Hara, Abigail Sellen, Richard Harper
Affiliation: Microsoft Research, 7 J J Thomson Avenue, Cambridge, UK
Presentation: CHI 2011, May 7–12, 2011, Vancouver, BC, Canada

Summary

Hypothesis
The paper highlights the importance of considering the body in BCI, not simply what is going on in the head. The authors hypothesize that people use bodily actions both to facilitate control of brain activity and to make their actions and intentions visible to, and interpretable by, others playing and watching the game. This allows those actions to be socially organised, understood, and coordinated with others, and through this social relationships can be played out.

Contents
The paper discusses findings from a real-world study of a BCI-controlled game played as a social activity in the home. The study draws on the philosophical and analytic concerns found in CSCW, where techniques for analysing the social in all its embodied forms are well developed. For...