
Music Composition as an Act of Cognition: ENACTIV – interactive multi-modal composing system

Published online by Cambridge University Press: 25 February 2011

Miroslav Spasov*
Affiliation:
School of Humanities/Music, Clockhouse, Keele University, Keele, ST5 5BG, UK

Abstract

ENACTIV is a project that addresses, explores and offers solutions for converting a performer/composer's expressive sonic and kinetic patterns into continuous variables that drive sound synthesis and processing in real-time interactive composition. The investigation is inspired by achievements in cognitive science, in particular Humberto Maturana and Francisco Varela's Santiago Theory (1980, 1987), in which the authors explain how the process of cognition arises through ‘structural coupling’ – a mutual influence among living beings, and between living beings (humans in particular) and the environment – and how this process shapes the patterns of organisation that drive the individual's behaviour.

The project investigates how a composer/performer's cognitive archetypes, developed through his or her ‘structural coupling’ with the social and natural environment and expressed through voice and unwitting hand gestures, can be associated with, or ‘mapped’ onto, sound synthesis and processing parameters in such a way that the system plays an active role and acts reciprocally, introducing a degree of variation and unpredictability at its output. The aim of the project is to develop a creative tool that allows professional musicians, multi-media artists and non-expert participants to engage with multi-modal improvisation in an intuitive way.
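To make the mapping idea concrete, the following is a minimal sketch of how continuous performer features might be scaled onto synthesis parameters while the system contributes its own bounded random drift, giving the output the variation and unpredictability described above. It is written in Python for illustration only; ENACTIV itself is built in the Max/MSP environment, and all feature names, parameter names and ranges here are hypothetical, not taken from the system.

    # Hypothetical sketch of gesture/voice-to-parameter mapping with
    # system-side variation. Names and ranges are illustrative only;
    # the actual ENACTIV system is implemented in Max/MSP.

    import random

    def scale(value, in_lo, in_hi, out_lo, out_hi):
        """Clamp a feature to its input range, then map it linearly to a parameter range."""
        value = min(max(value, in_lo), in_hi)
        norm = (value - in_lo) / (in_hi - in_lo)
        return out_lo + norm * (out_hi - out_lo)

    class ReciprocalMapper:
        """Maps performer features to synthesis parameters and adds bounded drift."""

        def __init__(self, drift_amount=0.05):
            self.drift_amount = drift_amount
            self.drift = {"grain_size": 0.0, "filter_cutoff": 0.0}

        def step(self, vocal_loudness, hand_x):
            # Deterministic part: the performer drives the parameters.
            grain_size = scale(vocal_loudness, 0.0, 1.0, 10.0, 200.0)   # ms
            cutoff = scale(hand_x, 0.0, 1.0, 200.0, 8000.0)             # Hz

            # Reciprocal part: a slow, bounded random walk accumulated by the
            # system, so the output is never a purely deterministic echo.
            for key in self.drift:
                self.drift[key] += random.uniform(-1.0, 1.0) * self.drift_amount
                self.drift[key] = min(max(self.drift[key], -0.5), 0.5)

            return {
                "grain_size": grain_size * (1.0 + self.drift["grain_size"]),
                "filter_cutoff": cutoff * (1.0 + self.drift["filter_cutoff"]),
            }

    # Example: one control-rate step with arbitrary feature values.
    mapper = ReciprocalMapper()
    print(mapper.step(vocal_loudness=0.7, hand_x=0.3))

In this sketch the performer's features fix the centre of each parameter, while the accumulated drift supplies the system's reciprocal contribution; in practice such variation could equally come from attractor functions or other generative processes rather than a random walk.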

Type: Articles
Copyright: © Cambridge University Press 2011


References

Belet, B. 2003. Live Performance Interaction for Humans and Machines in the Early Twenty-First Century: One Composer's Aesthetics for Composition and Performance Practice. Organised Sound 8(3): 305–312.
Bevilacqua, F., Müller, R., Schnell, N. 2005. MnM: A Max/MSP Mapping Toolbox. Proceedings of the 2005 International Conference on New Interfaces for Musical Expression (NIME05). Vancouver, Canada.
Blaine, T., Fels, S. 2003. Contexts of Collaborative Musical Experiences. Proceedings of the 2003 Conference on New Interfaces for Musical Expression (NIME03). Montreal, Canada.
Blaine, T., Perkis, T. 2000. Jam-O-Drum: A Study in Interaction Design. Proceedings of the ACM DIS 2000 Conference, August. New York: ACM Press.
Bottoni, P., Faralli, S., Labella, A., Pierro, M. 2006. Mapping with Planning Agents in the Max/MSP Environment: The GO/Max Language. Proceedings of the 2006 International Conference on New Interfaces for Musical Expression (NIME06). Paris, France.
Bown, O., Eldridge, A., McCormack, J. 2009. Understanding Interaction in Contemporary Digital Music: From Instruments to Behavioural Objects. Organised Sound 14(2): 188–196.
Broad, C. D. 1925. The Mind and its Place in Nature. London: Routledge & Kegan Paul.
Cadoz, C., Wanderley, M. M. 2000. Gesture-Music. In M. Wanderley and M. Battier (eds.) Trends in Gestural Control of Music. Paris: IRCAM–Centre Pompidou.
Camurri, A., Ricchetti, M., Trocca, R. 2000. EyesWeb – Toward Gesture and Affect Recognition in Dance/Music Interactive Systems. In IEEE Multimedia Systems, Florence, Italy, 1999.
Capra, F. 1996. The Web of Life. New York: Doubleday.
Cassell, J. 1998. A Framework for Gesture Generation and Interpretation. In R. Cipolla and A. Pentland (eds.) Computer Vision in Human-Machine Interaction. New York: Cambridge University Press.
Cassell, J., Scott, P. 1996. Distribution of Semantic Features Across Speech and Gesture by Humans and Computers. In Proceedings of the Workshop on Integration of Gesture in Language and Speech. Cambridge, MA: The MIT Press.
Cassell, J., Vilhjálmsson, H. 1999. Fully Embodied Conversational Avatars: Making Communicative Behaviors Autonomous. Autonomous Agents and Multi-Agent Systems 2(1): 45–64.
Chomsky, N. 1972. Language and Mind. Orlando, FL: Harcourt Brace.
Cont, A., Coduys, T., Henry, C. 2004. Real-time Gesture Mapping in Pd Environment using Neural Networks. Proceedings of the 2004 International Conference on New Interfaces for Musical Expression (NIME04). Hamamatsu, Japan.
Dawkins, R. 1989. The Selfish Gene. Oxford: Oxford University Press.
Di Scipio, A. 2003. Sound is the Interface: From Interactive to Ecosystemic Signal Processing. Organised Sound 8(3): 269–277.
Dorin, A. 2001. Generative Processes and the Electronic Arts. Organised Sound 6(1): 47–53.
Drummond, J. 2009. Understanding Interactive Systems. Organised Sound 14(2): 124–133.
Garnett, G., Goudeseune, C. 1999. Performance Factors in Control of High-Dimensional Spaces. Proceedings of the 1999 International Computer Music Conference. San Francisco, CA: ICMA, 268–271.
Gray, R. M. 1996. Archetypal Explorations. London: Routledge.
Hawkins, M. 1997. Social Darwinism in European and American Thought, 1860–1945. Cambridge: Cambridge University Press.
Hunt, A., Kirk, R. 2000. Mapping Strategies for Musical Performance. In M. M. Wanderley and M. Battier (eds.) Trends in Gestural Control of Music. Paris: IRCAM–Centre Pompidou.
Hunt, A., Wanderley, M. 2002. Mapping Performer Parameters to Synthesis Engines. Organised Sound 7(2): 97–108.
Jensenius, A. R., Godøy, R. I., Wanderley, M. M. 2005. Developing Tools for Studying Musical Gestures within the Max/MSP/Jitter Environment. Amsterdam: ISCM 2005.
Jung, C. G. 1958. The Archetypes of the Collective Unconscious. Princeton, NJ: Princeton University Press.
Kaltenbrunner, M., Jordà, S., Geiger, G., Alonso, M. 2006. The Reactable*: A Collaborative Musical Instrument. Proceedings of the Workshop on ‘Tangible Interaction in Collaborative Environments’ (TICE), at the 15th International IEEE Workshops on Enabling Technologies (WETICE). Manchester, UK.
Kendon, A. 1972. Some Relationships between Body Motion and Speech: An Analysis of an Example. In A. Siegman and B. Pope (eds.) Studies in Dyadic Communication. Elmsford, NY: Pergamon Press.
Koch, C. 2004. The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts and Company.
Lewis, G. 1993. Voyager. Tokyo: Disk Union Avan-014.
Maturana, H., Varela, F. 1980. Autopoiesis and Cognition: The Realisation of the Living. In R. Cohen and M. Wartofsky (eds.) Boston Studies in the Philosophy of Science, vol. 42. Dordrecht: D. Reidel.
Maturana, H., Varela, F. 1987. The Tree of Knowledge. Boston, MA: Shambhala Publications.
McNeill, D. 1992. Hand and Mind: What Gestures Reveal About Thought. Chicago, IL: University of Chicago Press.
Miranda, E. R. 2003. A-Life and Musical Composition: A Brief Survey. IX Brazilian Symposium on Computer Music. Campinas, Brazil.
Miranda, E., Gimenes, M. 2008. An A-Life Approach to Machine Learning of Musical Worldviews for Improvisation Systems. Proceedings of the 5th Sound and Music Computing Conference. Berlin.
Miranda, E. R., Wanderley, M. 2006. New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. Middleton, WI: A-R Editions.
Morales-Manzanares, R., Morales, E. F., Dannenberg, R. 2001. SICIB: An Interactive Music Composition System Using Body Movements. Computer Music Journal 25(2): 25–36.
Neumann, E. 1954. The Origins and History of Consciousness, trans. R. F. C. Hull. London: Routledge & Kegan Paul.
Paine, G. 2002. Interactivity, Where to From Here? Organised Sound 7(3): 295–304.
Patten, J., Recht, B., Ishii, H. 2002. Audiopad: A Tag-Based Interface for Musical Performance. Proceedings of the 2002 International Conference on New Interfaces for Musical Expression (NIME02). Dublin, Ireland.
Ramachandran, V. S., Blakeslee, S. 1998. Phantoms in the Brain: Human Nature and the Architecture of the Mind. London: Fourth Estate.
Rovan, J. B., Wanderley, M. M., Dubnov, S., Depalle, P. 1997. Instrumental Gestural Mapping Strategies as Expressivity Determinants in Computer Music Performance. In A. Camurri (ed.) Kansei, The Technology of Emotion: Proceedings of the AIMI International Workshop, October. Genoa: Associazione di Informatica Musicale Italiana.
Singer, E. 2005. Cyclops: Max object for analysing and tracking live video. www.cycling74.com/products/cyclops.
Spasov, M. 2009–10a. ENACTIV: Interactive Multimodal Composing System. www.cycling74.com/share.html.
Spasov, M. 2009–10b. Attractors Library. www.cycling74.com/share.html.
Van Nort, D., Wanderley, M. M., Depalle, P. 2004. On the Choice of Mappings Based on Geometric Properties. Proceedings of the 2004 International Conference on New Interfaces for Musical Expression (NIME04). Hamamatsu, Japan.
Wanderley, M. 2000. Gestural Control of Music. In M. Wanderley and M. Battier (eds.) Trends in Gestural Control of Music. Paris: IRCAM–Centre Pompidou.
Wanderley, M., Battier, M. (eds.) 2000. Trends in Gestural Control of Music. Paris: IRCAM–Centre Pompidou.
Waters, S. 2007. Performance Ecosystems: Ecological Approaches to Musical Interaction. Proceedings of EMS: The ‘Languages’ of Electroacoustic Music (EMS07). Leicester, UK.
Wessel, D. 2006. An Enactive Approach to Computer Music Performance. In Y. Orlarey (ed.) Le feedback dans la création musicale. Lyon: Studio Gramme.
Whalley, I. 2000. Applications of System Dynamics Modelling to Computer Music. Organised Sound 5(3): 149–157.
Whalley, I. 2004. Adding Machine Cognition to a Web-Based Interactive Composition. Proceedings of the International Computer Music Conference, 1–6 November 2004. Miami, FL: ICMA, 197–200.
Whalley, I. 2005. Software Agents and Creating Music/Sound Art: Frames, Directions, and Where to From Here? Proceedings of the International Computer Music Conference, 5–9 September 2005. Barcelona: ICMA, 691–695.
Whalley, I. 2009. Software Agents in Music and Sound Art Research/Creative Work: Current State and a Possible Direction. Organised Sound 14(2): 156–167.
Winkler, T. 2001. Composing Interactive Music: Techniques and Ideas Using Max. Cambridge, MA: The MIT Press.