
Grasping gestures: Gesturing with physical artifacts

Published online by Cambridge University Press: 11 July 2011

Elise van den Hoven
User-Centered Engineering Group, Industrial Design Department, Eindhoven University of Technology, Eindhoven, The Netherlands

Ali Mazalek
Synaesthetic Media Lab, Digital Media Program, Georgia Institute of Technology, Atlanta, Georgia, USA

Abstract

Gestures play an important role in communication. They support the listener in understanding the speaker, but they also support the speaker by facilitating the conceptualization and verbalization of messages and by reducing cognitive load. Gestures thus play an important role in collaboration and in problem-solving tasks. In human–computer interaction, gestures are also used to facilitate communication with digital applications, because their expressive nature can enable less constraining and more intuitive digital interactions than conventional user interfaces. Although gesture research in the social sciences typically considers empty-handed gestures, digital gesture interactions often make use of hand-held objects or touch surfaces to capture gestures that would be difficult to track in free space. In most cases, the physical objects used to make these gestures serve primarily as a means of sensing or input. In contrast, tangible interaction uses physical objects as embodiments of digital information: the objects in a tangible interface serve as both representations of and controls for the digital information they are associated with. Building on this concept, gesture interaction can exploit the physical properties of hand-held objects to enhance or change the functionality of the gestures made. In this paper, we examine the design opportunities that arise at the intersection of gesture and tangible interaction. We believe that gesturing while holding physical artifacts opens up a new and largely unexplored interaction design space for collaborative digital applications. We survey gesture interaction work as it relates to tangible and touch interaction and, based on this survey, define the design space of tangible gesture interaction as the use of physical devices for facilitating, supporting, enhancing, or tracking the gestures people make for digital interaction purposes, outlining the design opportunities in this space.
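To make the core idea concrete, the following is a minimal illustrative sketch (not from the paper; all object kinds, tags, and bindings are hypothetical) of how the same recognized gesture could map to different digital actions depending on which tagged physical artifact the user is holding, treating the artifact as both a representation of and a control for its associated digital information:

```python
# Hypothetical sketch: gesture meaning depends on the held artifact,
# not on the gesture alone -- the coupling that tangible gesture
# interaction builds on. Names and bindings are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Artifact:
    """A tagged physical object whose kind stands for the digital
    information it embodies (e.g., a photo token, a volume knob)."""
    tag_id: str
    kind: str

# Bindings are keyed on (artifact kind, gesture), so one gesture
# acquires different functionality with different objects in hand.
BINDINGS = {
    ("photo_token", "shake"): "shuffle photo album",
    ("photo_token", "tilt"): "scroll through album",
    ("volume_knob", "shake"): "mute audio",
    ("volume_knob", "tilt"): "adjust volume",
}

def interpret(artifact: Artifact, gesture: str) -> str:
    """Dispatch a recognized gesture according to the held artifact."""
    action = BINDINGS.get((artifact.kind, gesture))
    return action or f"no binding for {gesture!r} on {artifact.kind!r}"

if __name__ == "__main__":
    token = Artifact(tag_id="04:a2:ff", kind="photo_token")
    knob = Artifact(tag_id="9b:31:07", kind="volume_knob")
    print(interpret(token, "shake"))  # -> shuffle photo album
    print(interpret(knob, "shake"))   # -> mute audio
```

The sketch deliberately leaves out gesture sensing and recognition; it only shows the dispatch step, where the physical properties or identity of the held object change what a gesture does.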

Type: Special Issue Articles
Copyright: © Cambridge University Press 2011

