
Multimodal design: An overview

Published online by Cambridge University Press: 14 March 2008

Ashok K. Goel
Affiliation:
School of Interactive Computing, Georgia Institute of Technology, Atlanta, Georgia, USA
Randall Davis
Affiliation:
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
John S. Gero
Affiliation:
Krasnow Institute for Advanced Study and Volgenau School of Information Technology and Engineering, George Mason University, Fairfax, Virginia, USA

Type: Guest Editorial

Copyright © Cambridge University Press 2008

Design generally entails multiple kinds, or modalities, of representation and reasoning. For example, designers reason with different kinds of representations, including both imagistic (e.g., drawings, sketches, and diagrams) and propositional (e.g., function, behavior, causality, and structure). This multimodal nature of design representation and reasoning raises several issues for artificial intelligence (AI) research on design. For example, what types of knowledge are captured by various modalities of representation? What kinds of inferences are enabled and constrained by different representation modalities? How might we couple a representation in one modality with a representation in another or transform a representation in one modality to another? AI researchers have long been interested in these issues, although not necessarily in the context of design.

AI research on multimodal representations and reasoning relevant to design has generally followed several important threads. In one thread, AI research has sought to understand the various modalities in terms of the types of knowledge they capture and the inferences they enable. For example, Davis (1984) describes an early effort to declaratively represent and then reason about the structure and behavior of physical systems, and Sembugamoorthy and Chandrasekaran (1986) describe an early attempt to declaratively represent functions of physical systems and relate them to their structure via their behaviors. Both efforts focused on diagnostic problem solving. In contrast, Glasgow and Papadias (1992) present an analysis of imagistic representations and use symbolic arrays to represent spatial knowledge.

Another thread of AI research on multimodal representations and reasoning pertains to interpreting imagistic representations of a system by reasoning about its structure and behavior. For example, Stahovich, Davis, and Shrobe (1998) describe an attempt at abstracting the behaviors of a physical system from its schematic sketch. A third research thread is concerned with the coupling of reasoning across different representation modalities. For example, Funt (1980) describes an early effort in which a diagrammatic reasoner answered questions posed by a propositional problem solver, and Chandrasekaran (2006) presents a recent attempt at a multimodal cognitive architecture in which propositional and diagrammatic components cooperate to solve problems.

AI research on design per se has pursued similar threads. For example, Gero (1996) has analyzed the role of imagistic representations in creative design and has described cognitive studies of imagistic representations and reasoning in design (Gero, 1999). Gebhardt et al. (1997) describe a computer-aided design system that used both diagrammatic design cases and propositional design rules. Yaner and Goel (2006) describe an organizational schema for combining functional, causal, spatial, and diagrammatic knowledge about design cases.

The five papers selected for this Special Issue push the envelope of research on multimodal design further. The research contexts, goals, and methods of the first two papers are similar. “Modality and Representation in Analogy” by Linsey, Wood, and Markman describes a cognitive study that examines the effect of the modality of external representations on the retrieval and use of analogies in the context of biologically inspired design. “The Effect of Representation of Triggers on Design Outcomes” by Sarkar and Chakrabarti describes a cognitive study on the effects of the modality and ordering of external representations on the number and quality of designs generated by analogy in the context of biologically inspired design. Linsey et al. find that verbal annotations on external diagrams significantly improve retrieval and use of analogies, and Sarkar and Chakrabarti determine that imagistic external representations (e.g., videos) improve the quality of generated design ideas when compared with verbal (e.g., textual descriptions of function, behavior, and structure) representations. The issue of the modality of external representations is critical in building computational environments that can foster design by analogy.

“Analogical Recognition of Shape and Structure in Design Drawings” by Yaner and Goel describes a computational technique for constructing structural models from two-dimensional vector-graphics line drawings of physical systems. The technique, called compositional analogy, constructs a structural model of an input design drawing by analogical transfer of the structural model of a similar known drawing. The technique reasons about both imagistic representations (the drawings) and propositional representations (the structural model).

“A Grammar-Based Multiagent System for Dynamic Design” by Ślusarczyk develops a semiformal approach to multifunctional design of spatial layouts, for example, the layout of furniture in a house. The paper addresses the design task in a multiagent framework, using a hypergraph grammar for design actions and a set grammar for design states. The technique apparently can succeed not only in placing objects in a space but also in adjusting their locations.

“A Review of Function Modeling: Approaches and Applications” by Erden, Komoto, van Beek, D'Amelio, Echavarria, and Tomiyama surveys research on functional modeling of physical systems. Although, strictly speaking, this paper does not deal with multimodal design explicitly, functional representations and reasoning clearly play an important role in much of multimodal design, and different researchers appear to have different notions of “function” and of the use of functional models in design. This paper provides a useful service by pulling together multiple threads of AI research on functional representations and reasoning in design.

These five papers were selected for this Special Issue after two rounds of reviews. In the first round all submitted papers were peer reviewed by multiple reviewers; in the second round the Guest Editors reviewed the revised manuscripts. We thank the authors and reviewers of all submissions for their hard work. We also thank Prof. David Brown, the Editor-in-Chief of AIEDAM, for his support and guidance throughout the review process. We hope that this Special Issue will lead to new research on multimodal design.

Ashok K. Goel is an Associate Professor of Computer Science and Cognitive Science at Georgia Institute of Technology and Director of the Design Intelligence Laboratory in Georgia Tech's College of Computing. In 1998 he was a Visiting Research Professor at Rutgers University and a Visiting Scientist at NEC. Dr. Goel conducts research at the intersection of intelligence and design. He uses techniques from cognitive science, AI, and machine learning to address problems in design, and uses design problems as sources for developing computational techniques for analogical reasoning, visual reasoning, and meta-reasoning. His current research focuses on multimodal reasoning in design by analogy and creative design.

Randall Davis is a Professor of computer science in the Electrical Engineering and Computer Science Department at MIT, where from 1979 to 1981 he held an Esther and Harold Edgerton Endowed Chair. He served for 5 years as Associate Director of the Artificial Intelligence Laboratory and for 4 years (2003–2007) as Research Director of the Computer Science and Artificial Intelligence Lab (CSAIL), where he oversaw approximately 200 of the Lab's 800 faculty, staff, and students. He and his research group are developing advanced tools that permit natural multimodal interaction with computers by creating software that understands users as they sketch, gesture, and talk. Dr. Davis serves on several editorial boards, including Artificial Intelligence in Engineering and the MIT Press Series in AI. He is the coauthor of Knowledge-Based Systems in AI and was selected by Science Digest in 1984 as one of America's top 100 scientists under the age of 40. In 1990 he was named a Founding Fellow of the American Association for AI, and in 1995 he was elected to a 2-year term as President of the Association. In 2003 he received MIT's Frank E. Perkins Award for graduate advising. From 1995 to 1998 he served on the Scientific Advisory Board of the U.S. Air Force.

John S. Gero is a Research Professor at the Krasnow Institute of Advanced Study and at the Volgenau School of Information Technology and Engineering, George Mason University, and is a Visiting Professor at MIT. Formerly he was a Professor of Design Science and Co-Director of the Key Centre of Design Computing and Cognition at the University of Sydney. He is the author or editor of 43 books and over 550 papers in the fields of design science, design computing, AI, computer-aided design, design cognition, and cognitive science. Dr. Gero has been a Visiting Professor of architecture, civil engineering, mechanical engineering, computer science, and cognitive science at MIT, UC Berkeley, UCLA, Columbia, and CMU in the United States; at Strathclyde and Loughborough in the United Kingdom; at INSA Lyon and Provence in France; and at EPFL Lausanne in Switzerland.

References

Chandrasekaran, B. (2006). Multimodal cognitive architecture: making perception more central to intelligent behavior. Proc. AAAI National Conf. Artificial Intelligence 2006, pp. 1508–1512.
Davis, R. (1984). Diagnostic reasoning based on structure and behavior. Artificial Intelligence 24(1–3), 347–410.
Funt, B.V. (1980). Problem-solving with diagrammatic representations. Artificial Intelligence 13(3), 201–230.
Gebhardt, F., Voß, A., Grather, W., & Schmidt-Belz, B. (1997). Reasoning with Complex Cases. Boston: Kluwer Academic.
Gero, J.S. (1996). Creativity, emergence and evolution in design. Knowledge-Based Systems 9(7), 435–448.
Gero, J.S. (1999). Representation and reasoning about shapes: cognitive and computational studies in visual reasoning in design. In Spatial Information Theory (Freksa, C., & Marks, D., Eds.), pp. 315–330. Berlin: Springer.
Glasgow, J., & Papadias, D. (1992). Computational imagery. Cognitive Science 16(3), 355–394.
Sembugamoorthy, V., & Chandrasekaran, B. (1986). Functional representation of devices and compilation of diagnostic problem-solving systems. In Experience, Memory, and Reasoning (Kolodner, J., & Reisbeck, C., Eds.), pp. 47–73. Mahwah, NJ: Erlbaum.
Stahovich, T., Davis, R., & Shrobe, H. (1998). Generating multiple new designs from a sketch. Artificial Intelligence 104(2), 211–264.
Yaner, P., & Goel, A. (2006). From form to function: from SBF to DSSBF. Proc. 2nd Int. Conf. Design Computing and Cognition '06 (Gero, J.S., Ed.), pp. 423–441. Berlin: Springer.